http://approximability.blogspot.com/2008/02/lecture-13-hardness-endgame.html
## Tuesday, February 26, 2008

### Lecture 13: The Hardness Endgame

Here are the final steps in the hardness proof for Priority Steiner Tree (PST). The details are in the paper *On the Approximability of Network Design Problems*, J. Chuzhoy, A. Gupta, J. Naor and A. Sinha, SODA 2006.

**What we saw today in class.** Recall that we assumed that for some $h$, we had set cover instances with a universe of size $u$ and $s$ sets, each set of size $z$, where

- the Yes instances had a solution using at most $X$ sets, but
- the No instances had the property that even picking $h \cdot X$ sets would leave a $2^{-ch}$-fraction of the universe uncovered.

(Let us call this the $h$-factor property.)

The Yes instances mapped to instances of PST whose solution cost was at most $X$. For the No instances, we proved the claim that if the total cost of edges bought in levels $1, 2, \ldots, j-1$ was at most $hX/2$, then the cost incurred at level $j$ itself would be at least $(u/4z) \cdot 2^{-ch}$. In particular, if $k$ is at least $(4zhX/u) \cdot 2^{ch}$, this claim implies that the total cost in the No instance is at least $hX/2$, and hence we have a gap of $h/2$ between the Yes and the No instances.

**Parameters (the questions).** So how do we set the parameters so that

1. the set cover instance satisfies the $h$-factor property,
2. the number of levels $k$ is at least $(4zhX/u) \cdot 2^{ch}$, and
3. the size of the construction $N \approx (2z)^k \cdot s$ is reasonable?

**Our friend the set cover instance.** The construction of the set cover instance is the usual one, as discussed in Lecture 11 (or in Homework 3): we can start from a SAT instance of size $n$ and get a label cover instance with $n'$ nodes in each partition, each node participating in $d'$ constraints, with label set $L'$ and key set $K'$, where $n' = n^{-\log \eta}$ and $d', |L'|, |K'|$ are all $poly(1/\eta)$.
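Spelling out how the claim yields the gap (a one-line completion of the argument): either the cumulative cost crosses $hX/2$ at some level, or the claim's hypothesis holds at every level $j \le k$, so

$$\sum_{j=1}^{k} \mathrm{cost}(j) \;\ge\; k \cdot \frac{u}{4z}\,2^{-ch} \;\ge\; \frac{4zhX}{u}\,2^{ch} \cdot \frac{u}{4z}\,2^{-ch} \;=\; hX \;\ge\; \frac{hX}{2}.$$

In both cases the No-instance cost is at least $hX/2$, giving the claimed gap of $h/2$.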
In turn, this gives us a set cover instance with universe size $u = n'd'2^{|K'|}$, number of sets $s = n'|L'|$, each set of size $z = d'2^{|K'|-1}$, and optimal solution size $X = 2n'$; it satisfies the $h$-factor property above if $1/h^2 > O(\eta)$.

**Parameters (the answers).**

1. The set cover instance satisfies the $h$-factor property as long as $h^2 < O(1/\eta)$; this is satisfied if we set $\eta \approx h \approx \Theta(\log \log n)$.
2. Note that the parameters derived above satisfy $zX/u = 4$, and hence $k = 16h \cdot 2^{ch} \approx \Theta(\log n)$ is sufficient.
3. The size of the construction is $N \approx s(2z)^k = n^{-\log \eta} \cdot poly(1/\eta)^{(2^h)} \approx n^{\log \log \log n}$.

Finally, the gap between the Yes and No instances is $h = \Omega(\log \log n) = \Omega(\log \log N)$ as well. This completes the proof.
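As a sanity check on the parameter arithmetic, the formulas for $k$ and $N$ can be evaluated directly. This is an illustrative sketch with toy numbers; the helper names and the specific values are not from the paper, they are chosen only so that $zX/u = 4$ as in the text.

```python
# Sanity check on the parameter arithmetic above (toy numbers only; the
# helper functions and values are illustrative, not from the paper).
# We evaluate k = (4zhX/u) * 2^{ch} and N ~ s * (2z)^k.

def num_levels(z, h, X, u, c):
    """Levels k needed so the No-instance cost is at least h*X/2."""
    return (4 * z * h * X // u) * 2 ** (c * h)

def construction_size(s, z, k):
    """Rough construction size N ~ s * (2z)^k."""
    return s * (2 * z) ** k

# Toy values arranged so that z*X/u = 4, matching the text, whence
# k = 16 * h * 2^{c*h}.
u, z, X, s = 64, 16, 16, 100
h, c = 2, 1
k = num_levels(z, h, X, u, c)
assert k == 16 * h * 2 ** (c * h)  # 128 with these toy values
```

Even at these tiny values, `construction_size(s, z, k)` is astronomically large, which is why keeping $N \approx n^{\log\log\log n}$ requires the careful balancing of $\eta$, $h$, and $k$ described above.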
https://www.physicsforums.com/threads/hartree-fock-symmetric-energy-expression.606918/
# Hartree Fock Symmetric Energy Expression

**1. May 17, 2012 — arsc**

Hello. I just wonder why the energy expression of the Hartree-Fock method is symmetric. I tried to find out the reason on the Internet, but I could only find that, since the Hartree-Fock energy expression is symmetric, it is variational. In the Hartree-Fock method, the repulsive energy between different electrons is averaged; does that mean the contributions of different electrons to the repulsive energy are indistinguishable, which leads to the symmetric energy expression?

**2. May 18, 2012 — cgk**

What do you mean by "symmetric"? The HF energy expression is $$E=\langle\Phi|H|\Phi\rangle$$ with $\Phi$ being a Slater determinant. It is simply the expectation value of the trial wave function (if you evaluate this expression in terms of matrix elements, you end up with the usual sum over $2h_{ii} + 2\langle ij|ij\rangle - \langle ij|ji\rangle$ for a closed-shell determinant). HF is not variational because of the form of its energy expression, but because it is defined to be the method giving the wave function of this form (a determinant) which produces the lowest energy. While there are plenty of many-body methods with non-symmetric energy expressions of the form $$E=\langle\Phi|H|\Psi\rangle$$ (e.g., standard coupled cluster), and this form indeed prevents a variational solution of the equations (it is obviously not bounded under variations of $\Psi$ with $\Phi$ fixed), it is perfectly possible to construct theories with symmetric expectation values which are *not* variational.
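cgk's boundedness point can be checked numerically. The following is a toy sketch with an arbitrary symmetric 2×2 matrix standing in for $H$ (not a real HF calculation): the symmetric Rayleigh quotient never drops below the lowest eigenvalue, while the non-symmetric form can be pushed far below it by varying $\Psi$ alone.

```python
# Toy 2x2 illustration (arbitrary symmetric matrix standing in for H;
# not a real HF calculation): the symmetric form
# E = <phi|H|phi>/<phi|phi> is bounded below by the lowest eigenvalue
# of H, while the non-symmetric E = <phi|H|psi>/<phi|psi> is not.
import random

H = [[1.0, 0.5],
     [0.5, 2.0]]  # symmetric; eigenvalues are (3 +/- sqrt(2))/2

def quad(a, b):
    """<a|H|b> for real 2-vectors."""
    return sum(a[i] * H[i][j] * b[j] for i in range(2) for j in range(2))

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def sym_energy(phi):
    return quad(phi, phi) / dot(phi, phi)

def asym_energy(phi, psi):
    return quad(phi, psi) / dot(phi, psi)

lam_min = (3 - 2 ** 0.5) / 2  # lowest eigenvalue of H

# The symmetric form never drops below lam_min for any trial vector.
random.seed(0)
for _ in range(1000):
    v = [random.uniform(-1, 1), random.uniform(-1, 1)]
    if dot(v, v) > 1e-9:
        assert sym_energy(v) >= lam_min - 1e-12

# The non-symmetric form is pushed arbitrarily low by varying psi alone:
phi = [1.0, 0.0]
psi = [1e-6, -1.0]  # nearly orthogonal to phi, with a sign flip
assert asym_energy(phi, psi) < lam_min  # far below the spectrum of H
```

Shrinking the first component of `psi` toward zero drives the non-symmetric "energy" toward minus infinity, which is exactly why such expressions cannot be minimized variationally.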
http://www.statsblogs.com/tag/books/
# Posts Tagged ‘books’

## amazing Gibbs sampler

February 18, 2015

When playing with Peter Rossi’s bayesm R package during a visit of Jean-Michel Marin to Paris, last week, we came up with the above Gibbs outcome. The setting is a Gaussian mixture model with three components in dimension 5 and the prior distributions are standard conjugate. In this case, with 500 observations and 5000 Gibbs […]

## Numbersense, in Chinese and Japanese

February 9, 2015

This is a cross-post on my two blogs. The new year brings news that my second book, Numbersense: How to Use Big Data to Your Advantage, has been translated into Chinese (simplified) and Japanese. Here are the book covers: In Chinese, the title reads: …

## simulation by inverse cdf

January 13, 2015

Another Cross Validated forum question that led me to an interesting (?) reconsideration of certitudes! When simulating from a normal distribution, is the Box-Muller algorithm better or worse than using the inverse cdf transform? My first reaction was to state that Box-Muller was exact while the inverse cdf relied on the coding of the inverse cdf […]

## DDH Now in Chinese

January 8, 2015

For my Chinese readers: a Chinese version of the Diebold-Doherty-Herring risk management book just appeared. Interesting surprise. I knew nothing about it until it arrived in the snail mail, just as with the earlier Chinese version of the Dieb…
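The Box-Muller versus inverse-cdf question from the "simulation by inverse cdf" entry above can be sketched in a few lines. The blog's own code is in R; this is an illustrative Python version using the standard library's numerical normal quantile.

```python
# Illustrative Python sketch of the two normal generators compared in
# the "simulation by inverse cdf" entry: Box-Muller, which is exact
# given perfect uniforms, and the inverse-cdf transform, which leans on
# a numerical approximation of the normal quantile function.
import math
import random
from statistics import NormalDist

def box_muller(u1, u2):
    """Map two Uniform(0,1) draws (u1 > 0) to two independent N(0,1) draws."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

def inverse_cdf(u):
    """One N(0,1) draw from one uniform via the quantile function."""
    return NormalDist().inv_cdf(u)

random.seed(42)
n = 20000
bm = []
while len(bm) < n:
    z1, z2 = box_muller(1.0 - random.random(), random.random())
    bm.extend([z1, z2])
ic = [inverse_cdf(inverse := 1.0 - random.random()) for _ in range(n)]

# Both samples should look standard normal: mean ~0, variance ~1.
for sample in (bm, ic):
    m = sum(sample) / len(sample)
    v = sum((x - m) ** 2 for x in sample) / len(sample)
    assert abs(m) < 0.05 and abs(v - 1.0) < 0.05
```

The practical difference is the one the post hints at: Box-Muller is an exact transformation, whereas `inv_cdf` is only as accurate as its rational approximation of the quantile function (which, for `NormalDist`, is accurate to near machine precision anyway).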
## top posts for 2014

December 29, 2014

Here are the most popular entries for 2014: 17 equations that changed the World (#2) 995; Le Monde puzzle [website] 992; “simply start over and build something better” 991; accelerating MCMC via parallel predictive prefetching 990; Bayesian p-values 960; posterior predictive p-values 849; Bayesian Data Analysis [BDA3] 846; Bayesian programming [book review] 834; Feller’s shoes […]

## amazonish thanks (& repeated warning)

December 8, 2014

As in previous years, at about this time, I want to (re)warn unaware ‘Og readers that all links to Amazon.com, and more rarely to Amazon.fr, found on this blog are actually susceptible to earn me an advertising percentage if a purchase is made by the reader in the 24 hours following the entry on Amazon […]

## the Grumble distribution and an ODE

December 2, 2014

As ‘Og’s readers may have noticed, I paid some recent visits to Cross Validated (although I find this too addictive to be sustainable on a long-term basis!), and as already reported a few years ago it is frustrating at several levels, from questions asked without any preliminary personal effort to a lack of background material to […]

## Le Monde puzzle [#887quater]

November 27, 2014

And yet another resolution of this combinatorics Le Monde mathematical puzzle: that puzzle puzzled many more people than usual! This solution is by Marco F, using a travelling salesman representation and existing TSP software. N is a golden number if the sequence {1,2,…,N} can be reordered so that the sum of any consecutive pair is a […]

## an ABC experiment

November 23, 2014

In a cross-validated forum exchange, I used the code below to illustrate the working of an ABC algorithm. Hence I used the median and the mad as my summary statistics. And the outcome is rather surprising, for two reasons: the first one is that the posterior on the mean μ is much wider than […]
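The ABC setup described in the "an ABC experiment" entry can be sketched as a plain rejection sampler. The original illustration was R code; in this Python version the prior, model, sample size, and tolerance are illustrative choices, not the post's actual values, with the median and the mad as summary statistics as in the post.

```python
# Minimal ABC rejection sampler in the spirit of "an ABC experiment"
# (the original was R code; prior, model, and tolerance here are
# illustrative choices).  Summary statistics: median and mad.
import random
import statistics

def mad(xs):
    """Median absolute deviation from the median."""
    m = statistics.median(xs)
    return statistics.median([abs(x - m) for x in xs])

def summaries(xs):
    return statistics.median(xs), mad(xs)

random.seed(1)
obs = [random.gauss(3.0, 1.0) for _ in range(100)]  # stand-in "observed" data
s_obs = summaries(obs)

accepted = []
for _ in range(5000):
    mu = random.uniform(-10.0, 10.0)                # flat prior on the mean
    sim = [random.gauss(mu, 1.0) for _ in range(100)]
    s_sim = summaries(sim)
    # keep mu when both simulated summaries land near the observed ones
    if abs(s_sim[0] - s_obs[0]) < 0.2 and abs(s_sim[1] - s_obs[1]) < 0.2:
        accepted.append(mu)

# The accepted draws approximate the ABC posterior on mu.
post_mean = sum(accepted) / len(accepted)
```

The tolerance controls the trade-off the entry alludes to: a loose tolerance accepts many draws but widens the ABC posterior relative to the exact one, which is one reason the posterior on the mean can come out "much wider" than expected.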
https://www.leeloo-dev.com/the-best-rowing-machines-for-2019/
Rowing machines are still considered a bit of a niche compared with stationary bicycles or treadmills, but they’ve held their place in both commercial and home gyms long enough that it’s clear they are not a fad. Unlike so many other workout routines out there, rowing provides an incredibly comprehensive workout that targets many muscles at a high degree of intensity. At the same time, it is also a low-impact workout, which can be great for those in the middle of rehab/recovery. If you’re new to rowing, here’s what you ought to know:

• Set Your Own Speed: Unlike other cardio machines, training on a rowing machine really lets you exercise at your very own pace. The more you lean into a stroke, the more resistance you’ll have to overcome. Certain rowers have resistance settings, but these aren’t essential.

• Posture/Form Is Key: To make the most of a rowing workout, pay close attention to form. There are plenty of videos out there that break down the key phases of each stroke, and straying from these immediately takes away part (if not all) of the benefits of your workout.

• Faster Does Not Equal Better: Unlike other cardio activities, increasing your number of strokes per minute does not correlate with a harder/more effective workout. With your resistance all coming on the drive motion of a row, you can get better split times (pacing per 500m) without necessarily hitting as high a stroke rate. Again, concentrate on form, and make sure to get a good release/pull in your stroke; that’s where you’ll see the greatest benefit(s).

### Calorie-Crushing Cardio

Before getting to the muscle groups targeted by a rowing workout, be prepared to break into a good sweat during your new rowing routine. Because you’re engaging both your lower and upper body, the calorie burn from a rower is typically a little higher than that of a treadmill or indoor cycle.
You’ll note on most rowers that your pacing is measured both in distance and in “strokes per minute”. These will be your metrics to keep in mind as you spend more time on your rowing machine.

### Hitting (Nearly) All The Muscle Groups

Thinking about the mechanics of rowing, it’s easy to see why it is viewed as one of the more comprehensive exercises out there. An initial push-off with your legs engages your calves, hamstrings, quads, and glutes, but from there you’re only halfway through the motion. Your deltoids, spine, triceps, lats, and even forearm muscles are involved in completing the pull, and throughout the whole process your core should stay engaged. You may still be missing a few assorted muscles, but as a jumping-off point for muscle gain, rowing ticks a huge number of boxes right out of the gate.

### Selecting The Right Machine

As a starter rowing machine on a budget, the Stamina X Air Rower gets the job done. Many reviewers make note of noise, and some have concerns about general sturdiness or a lack of padding in its seat, but this is virtually par for the course if you’re capping your budget at the sub-$500 mark. Those who accept that it’s a budget rower are quick to compliment its performance for the buck. Air resistance means no settings to fiddle with, and its minimal display will only indicate distance rowed. On the plus side, it’s a foldable machine, so at least it won’t be eating up too much space in your home gym.

#### Best Rowing Machine Under $800 - NordicTrack RW200

Essentially doubling your budget gets you a far better machine, from one of the best-known brands in the exercise-equipment space. Unlike the Stamina, the RW200 uses magnetic resistance, similar to what the brand uses in all of its stationary bicycles and elliptical machines.
It’ll be far quieter than other machines in the category, and it even comes with a year of iFit Coach membership included.

#### Best Rowing Machine Under $1,500 - WaterRower Natural Rowing Machine

Water rowers quickly gained popularity after appearing on “House of Cards”, but they aren’t some flash-in-the-pan fad. These rowers rely on water to create resistance, giving them a much more natural rowing feel than any other setup out there. Apart from feel and the overall appeal of their aesthetics (there’s no arguing these things look pretty damn cool), water rowers are also smaller than the majority of their competition. This unit includes WaterRower’s Series IV performance monitor, complete with built-in programs, data logging, and the ability to save custom programs of your own. Its dual-rail seating system makes it feel much sturdier than some of the affordable/entry-level competition, and because it can be stored upright, it takes up only a 21″ x 24″ footprint when you aren’t using it.
http://science.sciencemag.org/content/326/5958/news-summaries
# News this Week

Science, 04 Dec 2009: Vol. 326, Issue 5958, pp. 1328

## 1. Agricultural Research: International Centers and Donors Warily Eye Sweeping Changes

By Dennis Normile

The Consultative Group on International Agricultural Research (CGIAR) is facing what could be the biggest shakeup in its 38-year history as members meeting in Washington, D.C., on 7 and 8 December vote on far-reaching reforms. The delegates—representing donors, countries, international foundations, and development organizations—will likely vote to convert CGIAR from a voluntary association into a legal entity with power over a trust fund so it can enforce systemwide priorities. But the success of the venture hinges on deciding how to set and evaluate the research agenda without simply adding a layer of bureaucracy to the system. “There is still a lot of work to do on these changes,” says Elizabeth Woods, an agricultural economist with the Queensland government in Brisbane, Australia, who chairs the board of the International Rice Research Institute (IRRI) in Los Baños, Philippines. Dealing with the CGIAR centers “was getting more and more complicated” for donors, says Jonathan Wadsworth, an adviser at the United Kingdom's Department for International Development, one of the major supporters. Although a CGIAR science council set priorities, centers could ignore them, Wadsworth says. So donors increasingly funded specific projects. Such tied funding grew from about 30% of total CGIAR funding a decade ago to about 70% now. And no one is happy. Wadsworth says donors evaluate programs and sometimes meddle in center management, tasks for which they are ill-suited. Center scientists have been spending more time accounting for hundreds of small projects. “We are torn in 64 different directions by different donors and their agendas,” says Carlos Seré, director of the International Livestock Research Institute in Nairobi.
And with core funding shrinking, center directors find it hard to plan for the long term. “It is extraordinarily difficult to accumulate the amounts needed for infrastructure investment,” says Woods. Several formal studies led to an action plan, says CGIAR Director Ren Wang. If approved, CGIAR will become a legal entity with two parts: a consortium representing the centers and a fund to bring the donors together. The intent “is to agree on roles for the funders and roles for the doers,” says Shey Tata, CGIAR's lead financial officer. The consortium will be governed by a board, likely comprising scientists and development experts, and the fund will be run by donors. The consortium and the fund will together decide on a so-called strategy and results framework, which will set research objectives through a number of megaprograms expected to involve multiple centers. Two of the seven megaprograms are “genomics and global food crop improvements” and “agriculture, nutrition, and health.” The restructuring could create efficiencies. “Instead of 15 centers negotiating with 65 donors, it will boil down to much higher-level but reduced interactions between one big consortium and one pooled source of funds,” Wadsworth says. Wang says donors have indicated that they will rely on standardized program evaluations by the consortium, reducing reporting requirements for numerous small research projects. Sharing procurement, human resources, and other services could save up to $130 million a year, Wang says. Center directors and board members generally support reform. “Anything that makes [CGIAR] nimbler, more efficient, and able to respond with the best science to the serious food security problems facing the world would be welcome,” says Robert Zeigler, director general of IRRI. And donors are embracing the idea. The United Kingdom intends to at least double contributions to CGIAR to £40 million annually by 2014, depending on the progress of reforms, Wadsworth says. 
Wang says Australia, the Netherlands, the United States, and Switzerland have all pledged to increase funding significantly. He says the target is an annual income of $1 billion by 2013, roughly double the 2008 figure. The sticking point is the strategy and results framework and its megaprograms. “The process went ahead too hastily without [sufficient] consultation with researchers,” says Ryotaro Suzuki, director of international research at Japan's Ministry of Agriculture. Wang agrees that the megaprograms are more thematic than specific. At the same time, the framework envisions precisely measuring results from each project, such as percentage increases in productivity and the number of people lifted out of poverty. “We can't be accountable for things beyond our control,” says Seré. He is also concerned that the programs focus on cereals and slight the vegetable crops and livestock that generate income for small holders and food for local communities. These areas “are much less addressed by public investment,” he says. Woods worries that the cost of another administrative layer, at least during a multiyear transition period, could swallow a lot of the rising contributions. But the upside could be greater impacts, says William Dar, director general of the International Crops Research Institute for the Semi-Arid Tropics in Patancheru, India. If, after 5 years, there is more long-term support for agricultural research with the majority in unrestricted core funding, “I will give a very positive verdict on this reform process,” he says.

## 2. Science and Society: Stolen E-mails Turn Up Heat on Climate Change Rhetoric

By Eli Kintisch

The theft and unauthorized release last month of 1000 private e-mail messages from the servers of the Climatic Research Unit (CRU) at the University of East Anglia in the United Kingdom has provided a glimpse into the fractious world of climate science.
The public airing of frank conversations among powerful scientists about sensitive topics such as possible holes in their data and the use of contrarian papers in major reports comes at a pivotal time for climate science, just days before a meeting of world leaders in Copenhagen. The messages—whether hacked or released by a disgruntled insider—have raised thorny questions about the proper behavior of researchers who feel under siege for their science. How willing should they be to share their raw data with their staunchest critics? “It's very difficult to admit that your data are not as strong as you wish it were, especially if you know that will be used against you,” says Nicholas Steneck, an expert on research integrity at the University of Michigan, Ann Arbor. And yet the “circle the wagons” mentality conveyed in numerous messages could inflict lasting “damage to the public credibility of climate research,” warns climate scientist Judith Curry of the Georgia Institute of Technology in Atlanta. But openness just leads to twisted interpretations, says NASA climate researcher Gavin Schmidt. “You can't have a spelling mistake in a paper without it being evidence on the floor of the Senate that the system is corrupt,” says Schmidt. Four e-mail exchanges have received most of the media attention. The first regards a research finding considered by most scientists as a canonical fact: that the globe warmed by roughly 0.7°C in the 20th century. That fact derives in large part from global temperature data recorded by stations on land and sea, as analyzed independently by groups at East Anglia, NASA, and the U.S. National Oceanic and Atmospheric Administration. 
Referring to requests for climate data from critics, CRU Director Phil Jones wrote in 2005 that “I think I'll delete the file rather than send to anyone.” In May 2009, Jones told Michael Mann of Pennsylvania State University, University Park, to “delete any emails” to a colleague about their work on the Intergovernmental Panel on Climate Change (IPCC) report and to ask a third colleague to do the same. (Mann says he conveyed the message but deleted no messages himself.) Through a spokesperson, Jones declined an interview request. But in a statement he said that “no record” has been deleted amid a bombardment of “Freedom of Information requests.” CRU acknowledged in August that it deleted old data on digital tapes to make space for a move. A second message relates to a chapter in the 2007 IPCC report that Jones edited. In 2004, he suggested that two recent papers on temperature trends didn't deserve to be published in a peer-reviewed journal. “I can't see either of these papers being in the next IPCC report,” he wrote Mann. “Kevin [Trenberth] and I will keep them out somehow - even if we have to redefine what the peer-review literature is.” But Trenberth, of the National Center for Atmospheric Research in Boulder, Colorado, says the papers were indeed considered. Thomas Karl, director of the National Climatic Data Center in Asheville, North Carolina, an official reviewer for the chapter, says the IPCC's peer-review procedures “were sacrosanct.” Both papers wound up being cited. A third message is viewed by critics as an acknowledgement that global warming has ceased. “The fact is that we can't account for the lack of warming at the moment and it is a travesty that we can't,” wrote Trenberth in October. Contrarians have noted the lack of record new highs in global temperature since 1998 (Science, 2 October, p. 28). But Trenberth was actually bemoaning something else. 
“The observing system we have is inadequate for tracking energy flow through the climate system,” he observed, affecting the forecasting of year-to-year climate changes. A fourth message, about assembling a diagram for a 1999 World Meteorological Organization report, has been misinterpreted, says Trenberth (see graphic). Scientists believe proxy data such as tree rings are valuable for reconstructing past climates, but certain tree-ring data became unreliable midway through the century. So scientists used proxy data for all but the final 40 years of the millennium before switching to instrumental data in 1961. “Reasonable people,” writes Stephen McIntyre, a retired industry consultant and prominent blogger, might conclude that the decision not to show the divergence of the two data sets was “simply a trick” to avoid giving fuel to skeptics. Whatever their meaning, the messages have emboldened opponents. Some are calling for congressional hearings and, possibly, lawsuits. Penn State says that it is “looking into” the matter, and the University of East Anglia has announced an investigation into the theft and contents of the e-mails. Scientists know they will need every bit of credibility to defend their findings from future attacks. But Curry suggests that it would be better to bring the skeptics into the fold than to keep them out. That way, she says, the critics will “quickly run out of steam and become irrelevant.”

## 3. Mantle Dynamics: Sea-Floor Study Gives Plumes From the Deep Mantle a Boost

By Richard A. Kerr

Earth's interior is like a pot of boiling water—very viscous, very slowly churning water. The great debate about how Earth's interior operates to shed internal heat and shape the surface began with disagreements over whether two layers in the pot—the upper and lower mantle—always remain separate, like oil and water.
That debate ended when seismologists imaged Earth's cold, brittle surface scum, the tectonic plates, and saw some of them diving all the way into the lower mantle. For the past decade, geoscientists have been focusing on the opposite question: whether plumes of hot, buoyant rock from the lower mantle are rising to the surface to fuel volcanic hot spots. On page 1388, eight researchers weigh in with the most detailed seismic imaging yet beneath the world's most iconic hot spot, the island of Hawaii. “I do think it's a strong case” for a deep plume, says lead author Cecily Wolfe of the University of Hawaii, Manoa. Most in the often-contentious field of seismic imaging don't go quite so far. But the quality of the data and the apparent Hawaiian plume's resemblance to theorists' expectations has won some cautious support for the work. “They're doing their best. It looks promising,” says seismologist Barbara Romanowicz of the University of California, Berkeley. But the question of deep plumes “is still a little open,” she says. To gather information about the mantle beneath Hawaii, researchers had to cast a wide net. Seismologists image mantle features by compiling records of seismic waves that have passed from an earthquake source through the feature of interest and on to a seismometer. Warmer-than-average rock slows a wave down; colder rock speeds it up. Seismic tomographers imaging the mantle combine wave travel times in much the way radiologists combine x-rays to create computed tomography scans of the human body. In one tomographic study, seismologists imaging the whole mantle reported seeing a couple of dozen deep plumes scattered around the globe (Science, 22 September 2006, p. 1726). Geoscientists have long recognized that a single hot spot of persistent volcanic activity created the Hawaiian island chain as the Pacific plate moved over the hot spot. 
But tomographers trying to see how deep-seated the hot spot's source is face a special challenge: Because of the remoteness of large earthquakes around the Pacific's Ring of Fire, seismometers on the Hawaiian Islands receive few seismic waves that would have passed through any deep plume. So researchers in the Hawaiian Plume-Lithosphere Undersea Melt Experiment (PLUME) stepped away from their subject. Deploying 10 conventional seismometers on the Hawaiian Islands and ocean-bottom seismometers at another 73 sites in waters as much as 5500 meters deep, the team created a seismic “eye” centered on the island of Hawaii and 1000 kilometers across. The network could pick up both seismic shear waves (S waves) that had passed through the upper mantle beneath Hawaii and—vital to imaging any deep plume—SKS waves that had passed upward from Earth's core. The PLUME images, the authors write, “suggest that the Hawaiian hotspot is the result of an upwelling, high-temperature plume from the lower mantle.” They show a hotter-than-average column of rock extending downward at least 1500 kilometers, topped by a “pancake” of hot rock where a plume would spread outward after hitting the cold, rigid tectonic plate. There's also a parabola-shaped feature of high-wave-speed material where computer models of plume behavior show a curtain of cold, descending rock. The plume's inferred temperature of 300°C above its surroundings at 900 kilometers' depth fits expectations. And, perhaps most telling, the apparent plume tilts downward toward the southeast—the way computer models show the churning mantle “blowing” a plume, like smoke rising from a chimney. Reaction to the PLUME imaging is varied. “The tomography is pretty good,” says marine geophysicist and regional tomographer Donald Forsyth of Brown University. 
“It's not an absolute slam dunk, [but] I'm fairly convinced there's an anomaly [in seismic velocities] going down on the order of 1500 kilometers, though it's hard to say if it's continuous.” Tomographer Jeannot Trampert of the University of Utrecht, the Netherlands, is more skeptical. The signature of a deep plume “is so weak it's hard to say” if it's real, he says. He suspects that the apparent plume may be just an echo of some nearby deep-mantle feature that lies just outside the tunnel view of PLUME. The obvious candidate for a plume imposter is the edge of the nearby Pacific “large low-shear-wave-velocity province,” more familiarly called a superpile. To test that possibility, Wolfe and colleagues reconstructed their deep-plume signal under the assumption that it was created entirely by the superpile in the nearby lowermost mantle. If the superpile were entirely responsible, the resulting image should resemble the edge of a superpile. “To me, it doesn't look like the proposed superpile,” says Wolfe. “It looks like a doughnut.” The superpile could still be the source, the researchers write, but a deep plume “remains a more straightforward solution.” Trampert remains unconvinced. “They do not address it in a satisfactory way,” he says. Everyone does agree that the PLUME observations remain to be fully mined for information on any plume, and that global data might be profitably merged with the regional data. But just where or when the slam-dunk evidence might emerge is anyone's guess.

4. Science Policy

# European Union Selects Unknown For Top Science Post

1. Martin Enserink

The two women tapped to head the European Union's efforts on science and climate over the next 5 years have a lot in common. Both were elected to parliament in their mid-20s—one in Denmark and the other in Ireland—but left politics later on. Both wrote for national newspapers and had stints in television broadcasting. Both are described as strong-willed and smart.
The difference is that one is virtually unknown to scientists and science policymakers, and the other is almost an international celebrity. Danish energy and climate minister Connie Hedegaard, nominated last week to become the first European commissioner for climate action, was picked as one of the world's 100 most influential people by Time magazine in April and this month will host the Copenhagen climate talks. In contrast, Máire Geoghegan-Quinn, the proposed new commissioner for research and innovation, has spent the past 9 years examining the E.U.'s finances as the Irish representative of the less-than-glamorous Court of Auditors in Luxembourg. The nominations, announced on 27 November by European Commission President José Manuel Barroso, are the outcome of delicate backroom talks in which E.U. member states jockey for posts in Brussels. The entire slate of 27 proposed commissioners—one from each country—is subject to hearings and a vote by the European Parliament, scheduled for 26 January. Before being nominated as the boss of E.U. science, Geoghegan-Quinn, 59, held various posts in the Irish government, including minister of state for European affairs between 1987 and 1991. She left politics in 1997 and joined the Court of Auditors 3 years later. Several European science leaders Science contacted said they could not comment on her nomination simply because they had never heard of her. Frank Gannon, director-general of the Science Foundation Ireland and a former head of the European Molecular Biology Organization, does know, and admires, Geoghegan-Quinn—he once lived across the road from her in Galway. She's an “intelligent and straightforward person,” he says. 
“I think she will bring a lot of qualities to the job.” Gannon points out that Janez Potočnik, the Slovenian economist who currently holds the post, was new to science as well in 2004 and “was an excellent commissioner.” Despite her lack of science-policy experience, Geoghegan-Quinn may have a head start on important decisions regarding Framework Programme 8 (FP8), the next of Europe's gargantuan research funding programs, which is slated to start in 2014. The Court of Auditors, Geoghegan-Quinn's former outpost, said in a highly critical report in October that FP6, which ran from 2002 through 2006, failed to meet some of its key objectives; for instance, large international networks funded to foster innovation and collaboration often fell apart after funding dried up, the report said. (A spokesperson for the court says Geoghegan-Quinn had no personal involvement in the report.) Geoghegan-Quinn would also help decide whether to increase the budget of the European Research Council (ERC), the new funding agency through which some 15% of FP7's €50 billion is spent. The ERC rewards individual investigators, rather than networks, and uses excellence as a criterion instead of political and economic considerations. Potočnik will stay on as a commissioner but move to the environment post, now arguably diluted by the creation of a separate post for climate. Hedegaard, 49, who will fill that post, has earned the respect of climate advocates for her efforts to make Denmark's economy greener and for her “great personal commitment” to the Copenhagen summit, says Joris den Blanken, Greenpeace's E.U. climate and energy policy director at its Brussels office.

5. ScienceInsider

# From the Science Policy Blog

President Barack Obama will attend the Copenhagen climate meeting and probably announce a U.S. commitment, contingent on congressional agreement, to a 17% cut in greenhouse gas emissions relative to 2005 by 2020.
The Presidential Commission for the Study of Bioethical Issues will be chaired by Amy Gutmann, a political scientist and the president of the University of Pennsylvania. Bioethicists expect the new commission to be more policy-oriented and pragmatic than its predecessor, which focused on philosophical and moral issues in biomedical research. In one of the first signs that HIV prevention efforts have begun to make a dent on a global scale, new infections appear to have dropped by 17% over the past 8 years, according to a new report by the Joint United Nations Programme on HIV/AIDS and the World Health Organization. India and the United States signed a deal on 24 November that includes a full suite of technical cooperation agreements, including shared work on food, wind power, extreme weather, and nuclear energy. The neurologist and biomechanics expert in charge of the National Football League's committee on mild traumatic brain injury resigned last week. The league appears to be changing its attitude toward growing evidence that head injuries suffered on the field can lead to personality changes, dementia, and other problems later in life. The world's largest atom smasher, the Large Hadron Collider, has set a new record for accelerating subatomic particles to high energy. On 30 November, protons whizzed around the 27-kilometer-long accelerator at an energy of 1.18 tera–electron volts—20% higher than the previous record. For more science policy news, visit blogs.sciencemag.org/scienceinsider.

6. STEM Education

# Web Site Matches U.S. Scientists With Teachers Looking for Help

1. Jeffrey Mervis

Kate Lievens and Jack Hidary live in very different worlds. But the elementary school teacher and the neuroscientist-turned–serial entrepreneur have something in common: a new, interactive Web site designed to match scientists and classroom teachers from across the United States in projects aimed at improving learning.
The site (nationallabday.org) is one element in a White House initiative to encourage public-private partnerships in STEM (science, technology, engineering, and mathematics) education. Hidary has agreed to run the site, and Lievens is one of the first teachers to participate. The initiative, dubbed Educate to Innovate, doesn't involve any new federal dollars. But it got a boost last week from President Barack Obama, who praised the private sector's promised investment of $260 million in a variety of projects, some new but many with a long track record, ranging from after-school robotics competitions to educational video games, and from science-themed television shows to better professional development for teachers. “The success we seek [in improving STEM education] is not going to be attained by government alone,” the president told scientists, educators, business leaders, and philanthropists at a 23 November rally in a federal office building next door to the White House. “[I] encourage folks to think of new and creative ways of engaging young people in science and engineering.” Hidary says the idea for the Web site was hatched less than 3 months ago in a meeting with officials from the White House Office of Science and Technology Policy (OSTP). “A number of us were involved in TechNet Day,” says Hidary, referring to an effort to promote the role of information technology in society. His background seemed perfect for launching what Hidary describes as “eHarmony for science”: He began his career as a neuroimaging fellow at the National Institutes of Health in the early 1990s before making it big in financial information services. In 1995, he started his first Internet company, EarthWeb/Dice, and a decade later he sold a second company, Vista Research, to McGraw-Hill and turned to community philanthropy. “He is a passionate and extremely hard-working advocate,” notes Rick Weiss, senior policy analyst and director of strategic communications at OSTP.
Within weeks, Hidary had won promises from the American Chemical Society, the National Science Teachers Association (NSTA), and other professional groups to enlist their members. He's raised more than $1 million from various organizations and has borrowed Jan Cuny, a program officer from the National Science Foundation's computing directorate, to manage the project's Washington, D.C., office. National Lab Day (NLD) is a misnomer for the project, admits Hidary. “It's actually a year-long series of activities,” he explains. “We're not interested in another boutique program. We want something that will really galvanize people on a national scale.” Obama said he expects the partnerships formed through the Web site to “reach 10 million young people with hands-on learning” by next spring, when organizers hope the president will keynote a second event to celebrate its success. Lievens, a veteran teacher at Earl Hanson Elementary School in Rock Island, Illinois, signed up the first day the site went live. A former physical education teacher who became a reading specialist, Lievens took an environmental sciences course this summer at a nearby college that rekindled her latent interest in science. She joined NSTA and began thinking about how to work the Mississippi River, only a few blocks from the school, into her science classes. What about having her fifth- and sixth-graders test the quality of the water from that mighty river, which they drink every day? “The kids really get fired up when they can connect what they are learning to their everyday lives,” says Lievens. She also thought it would be fun. “Anytime you can make it a little messy, they're more likely to remember the lesson.” Lievens imagined her students taking water samples and examining them under a microscope, visiting the local water-treatment plant, and maybe even learning a little hydrology and environmental chemistry. But with no formal training in science, she knew that she'd need assistance.
Last week, it arrived in the form of an NSTA e-mail alert describing the new Web site. “I'm eager to see who responds,” she says. Rebecca Smith knows more than a little about what it takes to pull off a successful STEM partnership program. A biochemist at the University of California, San Francisco, she's co-director of the Science and Health Education Partnership (SEP), which since 1987 has been matching area scientists with San Francisco schoolteachers (biochemistry.ucsf.edu/programs/sep). This year they expect to deploy 80 such teams, representing more than 200 scientists. Smith, whose program can tap an embarrassment of riches from academia and industry, applauds NLD for trying to reach areas that lack such a large talent pool. She also offers Hidary some pointers: Take the time to make good matches, don't expect too much, monitor the partnerships closely, and stick with it. But the most important ingredient, she says, may be mutual respect and trust. Hidary acknowledges the good work of SEP and many other programs and hopes that NLD can build on their successes. “We already know that project-based, hands-on learning works,” he says. “The challenge now is to scale up. And the only way to do that is through massive partnerships.”

7. India

# Stem Cell Center to Rise in Biology Hub

1. N. N. Sachitanand*

BANGALORE—India's fledgling stem cell R&D effort is set to receive a major boost. Construction began here last month on the Institute for Stem Cell Biology and Regenerative Medicine (inStem), a $50 million center to be built alongside an existing biology powerhouse—the National Centre for Biological Sciences (NCBS)—and a planned $12 million technology center that will seek to commercialize the biocluster's findings. “We hope that this intertwined environment can be transformative,” says NCBS Director K. VijayRaghavan, who will serve as inStem's first director.
Initial plans include launching an international collaboration using stem cells to probe the molecular mechanisms of cardiovascular diseases. inStem will also link up with the Centre for Stem Cell Research at Christian Medical College in Vellore, which specializes in translational and clinical research. “inStem should help both in human resources and capacity building on one hand and accelerate progress in therapeutic possibilities on the other,” says D. Balasubramaniam, who led a government task force that recommended setting up inStem. inStem's team includes two deans—Jyotsna Dhawan of NCBS and S. Ramaswamy of the University of Iowa, Iowa City, who is returning to his home country after 18 years in the United States. “Bioscience research in India is in an exponential growth phase,” says Ramaswamy. “The excitement of being able to … shape this growth was irresistible.” inStem expects to ramp up to 40 researchers after its new facility, shared with NCBS, opens in June 2011.

• * N. N. Sachitanand is a writer in Bangalore.

8. ScienceNOW.org

# From Science's Online Daily News Site

Coral Reefs Act Like Sunscreen

Living on a coral reef is a bit like living in a tanning bed. As the sun's rays shine through the water and reflect off the reef, they strike corals, their symbiotic photosynthetic algae, and other inhabitants from above and below. So what keeps these creatures from being fried? A new study suggests that coral acts as a sunscreen, absorbing UV light and limiting the harm it inflicts on the reef's denizens.

Milky Way Grew by Swallowing Other Galaxies

The motto “E Pluribus Unum” (“out of many, one”) could be applied to the Milky Way. Astronomers have obtained new evidence that our home galaxy contains pieces of many former galaxies. The findings strengthen the idea that large galaxies don't emerge whole from single, gigantic clouds of dust and gas. Rather, they grow by swallowing their neighbors.
Americans' Eating Habits Grow More Wasteful

After their biggest meal of the year, Americans might reflect on the fate of those moldering Thanksgiving leftovers. Nearly 40% of the food supply in the United States goes to waste, according to a new study, and the problem has been getting worse.

Why Suffocating Is Scary

Breathe too much carbon dioxide (CO2), and you'll suffocate. That's why people begin to panic if they breathe air enriched with the gas. One reason this happens, according to a new study in mice, is because breathing CO2 triggers chemical sensors in a crucial part of the brain's fear circuitry. The findings could point the way to new treatments for anxiety disorders.

Titan Lakes Migrate South for the Winter

Imagine if all of the water in the Great Lakes evaporated, moved to the Southern Hemisphere, and rained down to form new lakes in Argentina. Then thousands of years later, the process repeated and the water returned north. That's what researchers say could be happening on Titan, Saturn's largest moon. Understanding the process could shed light on how long-term climate cycles operate on other worlds.

9. Origins

# On the Origin of Tomorrow

1. Carl Zimmer*

What is the future of evolution? In the final essay in Science's series in honor of the Year of Darwin, Carl Zimmer explores the subject of human-driven evolution.

In the final words of the final sentence of On the Origin of Species, Charles Darwin gave a nod to the future. “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.” Darwin recognized that as long as the ingredients for the evolutionary process still exist, life has the potential to change.
He didn't believe it was possible to forecast evolution's course, but he did expect humans would have a big effect. In his day, they had already demonstrated their power with the triumphs of domestication, such as breeding dogs from wolves. Darwin recognized that we humans can also wipe out entire species. He knew the dodo's fate, and in 1874 he signed a petition to save the last surviving Aldabra giant tortoises on the Seychelles Islands in the Indian Ocean. Darwin also expected that our own species would change. As Western powers colonized other parts of the world, he predicted that some populations would become extinct. But Darwin also felt a cautious optimism. “Looking to future generations,” he wrote in The Descent of Man, “there is no cause to fear that the social instincts will grow weaker, and we may expect that virtuous habits will grow stronger.” And unlike other species, humans could bring about this change consciously, through cultural evolution. As the world celebrates the 150th anniversary of the publication of On the Origin of Species this year, scientists continue to think deeply about what comes next. But the complexity of evolution still makes forecasting hard. “As Yogi Berra once said, ‘Prediction is very difficult. Especially about the future,’” says Stephen Stearns, an evolutionary biologist at Yale University. Yet evolutionary biologists also feel a new sense of urgency about understanding what lies ahead. Since Darwin's day, humans have gained an unprecedented influence over our own evolution. At the same time, our actions, be it causing climate change, modifying the genomes of other organisms, or introducing invasive species, are creating new sources of natural selection on the flora and fauna around us. “The decisions we and our children make are going to have much more influence over the shape of evolution in the foreseeable future than physical events,” says Andrew Knoll, a paleontologist at Harvard University. 
## Shaping our genome

If there's one thing that's certain, it's that humans, like other living things, will continue to evolve. “Evolution is unstoppable,” says Lawrence Moran of the University of Toronto in Canada. But that doesn't mean that humans are marching on a path toward becoming giant-brained, telepathic creatures out of Star Trek. All it means is that the human genome will continue to change from generation to generation. A background mutation rate guarantees this process. Each baby's DNA carries about 130 new mutations. Most of them have no effect on our well-being. People can pass these neutral mutations down to their offspring without harm, and over time, a small fraction of them will end up spreading across entire populations, or even the entire species, thanks to random luck. Natural selection can cause mutations that help individuals survive and reproduce to spread much faster than neutral ones. Exactly which mutations natural selection will favor, however, depends on the environment in which we live. And over the past 10,000 years, we humans have dramatically changed that environment. We have fostered new diseases to which humans have adapted, for example. But in other cases, civilization has shielded us from the environment, weakening the power of natural selection. One of the best known examples of human-driven evolution is malaria. Early farmers cleared forests and created fields where malaria-carrying mosquitoes could lay eggs in pools of water. As malaria spread, natural selection favored those humans with defenses against the disease. One such defense comes from a variant of a hemoglobin gene that makes it hard for parasites to reproduce in blood cells. One copy of the gene reduces your chance of contracting malaria. Two copies cause sickle cell anemia. On the other hand, civilization has also blunted some of natural selection's power over humans, particularly in the 150 years since Darwin published On the Origin of Species.
Back then, for example, some children had the misfortune to be born with defective copies of a gene for an enzyme that breaks down amino acids in the food they ate. This disorder, known as phenylketonuria, generally led to severe brain damage. Few people with severe phenylketonuria were able to pass on their genes. But today, now that scientists know what causes the disease, people with phenylketonuria can enjoy fairly normal lives simply by being careful about the foods that they eat, and they pass their genes on to their children. Other medical advances, from eyeglasses to antibiotics, may also allow some potentially detrimental genes to become more common than in the past. Yet medical advances and other changes to human life have not stopped natural selection, nor will they in the future. HIV, for example, first evolved into a human pathogen in the early 1900s and today takes a devastating toll in many parts of the world. Genes that provide some resistance to the virus may be favored by natural selection in places where HIV is particularly common. Even in affluent parts of the world like the United States, natural selection has not stopped. Subtle differences in people's health influence how many children they have and thus can gradually change entire populations. In a report published online 26 October in the Proceedings of the National Academy of Sciences, Stearns and his colleagues documented natural selection in 2238 U.S. women. The women were subjects in the Framingham Heart Study, which has tracked the health of thousands of people in Framingham, Massachusetts, since 1948. The scientists searched for traits that were correlated with having a higher number of children. Then they checked to see whether those traits tended to be passed down from mother to child—in other words, whether they were genetically based. The scientists discovered that a handful of traits are indeed being favored by natural selection. 
Women with a genetic tendency for low cholesterol, for example, had more children on average than women with high cholesterol. A greater body weight was also linked with greater reproductive success, as was shorter height, lower blood pressure, an older age at menopause, and having one's first child at an earlier age. Stearns and his colleagues now know which traits are selected in the women of Framingham, but they have yet to determine exactly what advantage each trait confers—a situation that evolutionary biologists often face when documenting natural selection. Nevertheless, based on the strength of the natural selection they have measured, the scientists predict that after 10 generations, the women of Framingham will have their first child, on average, a few months earlier in life than women today, have 3.6% lower cholesterol, and will be 1.3% shorter. Of course, even this prediction is subject to change. Women with higher cholesterol may eventually be able to enjoy higher fertility rates thanks to cholesterol-lowering drugs, says Stearns, wiping out the differences in reproductive rates. “Selection is always operating,” says Stearns, “but the traits on which it operates shift with ecology and culture.” Along with natural selection, it's also conceivable that one day genetic engineering will change human DNA directly. In September, scientists at the Oregon National Primate Research Center reported that they could replace the DNA in the mitochondria of a monkey embryo with mitochondrial DNA from another monkey. In July, scientists at the Center for Regenerative Medicine in Barcelona, Spain, reported that they had repaired human stem cells carrying genes for an inherited blood disorder. Both studies hint that eventually scientists will be able to alter the genes of future generations. But even if a child were born with engineered genes in our lifetime, that milestone wouldn't mean much for the evolution of our species.
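Multi-generation projections like the Framingham one above are conventionally made with the breeder's equation from quantitative genetics, R = h²S: the response per generation equals the trait's heritability times the selection differential. The sketch below illustrates the arithmetic only; the heritability and selection numbers are invented, not the study's estimates.

```python
# Toy projection of a trait mean with the breeder's equation R = h^2 * S.
# All parameter values here are hypothetical, chosen only to show the
# mechanics of a 10-generation forecast like the one described above.

def project_trait(mean, h2, sel_diff, generations):
    """Shift a trait mean by R = h2 * sel_diff each generation."""
    for _ in range(generations):
        mean += h2 * sel_diff
    return mean

# Hypothetical setup: mean height 163 cm, narrow-sense heritability 0.8,
# and women who reproduce more averaging 0.25 cm shorter than the mean.
final = project_trait(mean=163.0, h2=0.8, sel_diff=-0.25, generations=10)
print(round(final, 1))  # 163 - 10 * 0.8 * 0.25 = 161.0 cm after 10 generations

percent_shorter = 100 * (163.0 - final) / 163.0
print(round(percent_shorter, 2))  # roughly a 1.2% decline
```

As the article notes, such forecasts assume the selection differential stays fixed — a cholesterol-lowering drug that equalizes fertility would set S to zero and halt the projected change.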
Those engineered genes would be swamped by the billions of mutations that emerge naturally in the babies born every year. Yet although engineered genes aren't likely to provide enough reproductive advantage to spread on their own, they may still become common. John Hawks, an anthropologist at the University of Wisconsin, Madison, speculates that if genetic engineering becomes cheap enough and provides an attractive trait—such as staying thin—economics could spread a gene even if natural selection can't. “I think people would buy it,” says Hawks.

## Human-powered evolution

Genetically engineered humans may still be science fiction, but genetically engineered animals, plants, and microbes are all here already. In 2008, farmers planted 125 million hectares of genetically modified crops. Many of these crops carry genes from other species. Corn, cotton, and other plants have been engineered to carry a gene from bacteria, for example, so that they can make a protein that kills insects. With big countries such as China and India dramatically ramping up their use of genetically modified crops, this evolutionary trend will likely continue. In the near future, scientists may start to engineer life in a more profound way, manufacturing new species from scratch. The idea would be to design a microbe on a computer, combining genes with different functions into genetic networks. Scientists could then synthesize the new genome from raw DNA and insert it into an empty microbial cell that would come to life. J. Craig Venter and his colleagues at the J. Craig Venter Institute in Rockville, Maryland, have taken a series of key steps toward that goal, such as performing a “genome transplant” on a microbe. If Venter succeeds, his artificial microbe would be a triumph of human ingenuity, but it would probably be a minor blip on the biosphere's radar. Synthetic biologists want to make microbes to serve our own ends, such as making fuel and medicines.
Burdened with genes for these functions, the microbes will likely be ill equipped to compete in the wild against species that have adapted for millions of years. For the foreseeable future, synthetic microbes will probably survive only in the refuge of a laboratory or a fermentation tank. “I will venture a prediction,” says Adam Wilkins, a biologist at the University of Cambridge in the United Kingdom. “This kind of biotech engineering might succeed in creating some rather weird and wonderful organisms. But the net effect on evolution will be nil—that is, outside the laboratory.” But humans, Wilkins is quick to point out, don't need synthetic biology to have a big effect on the evolution of life. Chainsaws, fishing lines, and smokestacks do just fine. Many fisheries, for example, have established rules for keeping fish only above a certain size. As a result, natural selection has favored fish that become sexually mature at smaller sizes. On land, hunters have had a similar effect by going after big game. Bighorn sheep, for example, now grow horns 25% smaller than they did 30 years ago. Humans have also triggered bursts of evolutionary change by introducing species to new habitats. In Australia, for example, cane toads brought in from South America in 1935 have become a continent-wide pest. They're devouring some small native species, and their poisonous skin is killing off some of their predators. Scientists have discovered that the toads are evolving in their new home: Toads at the leading edge of the invasion are growing longer legs and moving faster than their ancestors, speeding up the invasion. The native species are responding as well. Australian snakes are evolving resistance to the cane toad poison. Stephen Palumbi, a biologist at Stanford University in Palo Alto, California, expects that human-induced natural selection will become much stronger in the future. “In the last century, we were having a big impact, but it wasn't everywhere,” says Palumbi.
“But global climate change is an ‘everywhere’ impact, and that's different.” Plants and animals are already responding to the warming climate by shifting their ranges to find the most comfortable temperatures. But moving won't be a solution for many species, which will face barriers such as deserts or cities. They will have to adapt to survive—a process scientists have already detected in some species, such as red squirrels in Canada, which have evolved to breed earlier in the spring. Extra carbon dioxide is creating a second worldwide evolutionary pressure as it dissolves into the ocean. There it is turning into carbonic acid and lowering the pH. Continued acidification will make it more difficult for corals and other marine animals to build skeletons and shells from calcium carbonate. Organisms will need to adapt to survive in these new conditions. “We know that things can evolve quickly, but can they evolve fast enough?” asks Palumbi. He and many other scientists suspect that for many species the answer is no. Unless we can ease up on the biosphere, they warn that the biggest feature of evolution in the near future will be extinctions. Knoll points out some disturbing parallels between today's crisis and a pulse of mass extinctions that occurred 252 million years ago, wiping out an estimated 96% of species in the oceans and 70% of species on land. A rapid increase in carbon dioxide in the atmosphere led, among other things, to ocean acidification. For animals that depended on calcium carbonate, “you had about a 90% chance of going extinct,” says Knoll. “Corals, sponges, brachiopods, they all kicked the can.” Knoll doesn't expect human-driven mass extinctions to be as bad as that ancient one. But they could still be unimaginably huge. “If we lose half the species on the planet, our grandchildren are not going to see them restored,” says Knoll. “It will take millions of years.” A drop in biodiversity may bring with it a collapse of many ecosystems. 
Coupled with a rapid increase in global temperatures, ocean acidification, and other changes, we may be pushing the environment into a state we've never experienced as a civilization. Such a stress could put our species under intense natural selection as well.

## Taking the long view

One way or another, life will survive this current crisis. But where is life headed in the very distant future? To find out, planetary scientist King-Fai Li of the California Institute of Technology in Pasadena and his colleagues built a model of Earth and the sun and watched it evolve for billions of years. In their simulation, the sun gets brighter, as it has since it first formed. The extra energy speeds up the rate at which carbon dioxide is drawn out of Earth's atmosphere, cooling it off. But after about 2 billion years, this cooling mechanism breaks down, and Earth heats up, ending up like its lifeless neighbor, Venus. But Li's model does not include a clever species like our own, which can use its brain to influence the planet. Would it be possible to extend the life span of Earth's biosphere? “I am not going to rule out any talented civilizations that will be able to do that,” says Li.

• * Carl Zimmer is the author of The Tangled Bank: An Introduction to Evolution (Roberts and Co., 2009).

10. Neurodegeneration

# Could They All Be Prion Diseases?

1. Greg Miller

Recent studies have renewed interest in the idea that many neurodegenerative diseases may involve prionlike mechanisms.

The idea that proteins can be agents of disease was once heretical, but two Nobel Prizes later all but the most die-hard skeptics have been convinced that misfolded proteins called prions are the cause of several neurodegenerative disorders in humans and other animals. In disorders such as scrapie, mad cow disease, and Creutzfeldt-Jakob disease, misfolded molecules of a naturally occurring protein act like bad role models, encouraging normally folded proteins to misfold and clump together.
As aggregates of misfolded proteins spread through the brain, nerve cells stop working properly and eventually die. A recent flurry of papers has revived interest in the idea that such mechanisms may play a role in an even wider range of neurodegenerative disorders, including two of the most dreaded scourges of old age: Alzheimer's and Parkinson's diseases. Such diseases almost certainly aren't contagious like true prion diseases are, at least in ordinary circumstances, but they may propagate through the nervous system in much the same way. The idea is actually decades old and seems to have originated with Daniel Carleton Gajdusek, who won a share of the 1976 Nobel Prize in physiology or medicine for his work on kuru, a prion disease he claimed was transmitted by ritualistic cannibalism among the Fore people of New Guinea. But until very recently, there was little experimental evidence for prionlike mechanisms in other neurodegenerative disorders, says Lary Walker, a neuroscientist at Emory University in Atlanta. “It's an old idea with new legs,” Walker said in his introduction to a recent online seminar on this topic hosted by the Alzheimer Research Forum (Alzforum). Evidence from recent animal studies suggests that many of the misfolded proteins thought to play a central role in a wide range of neurodegenerative disorders can, like prions, “seed” the misfolding and aggregation of their normally folded kin. In some cases, these pathological protein clusters appear to propagate from cell to cell. Such a mechanism could help explain several puzzles—such as why some neurodegenerative disorders tend to spread from one part of the nervous system to another in a characteristic pattern, and why some researchers have found pathological protein deposits in fetal stem cells transplanted into the brains of Parkinson's patients (Science, 11 April 2008, p. 167). 
“Twenty, 30 years ago, when people were proposing these links, we didn't know that networks degenerate [in characteristic patterns], and we didn't have fetal transplants,” says Marc Diamond, a neurologist at Washington University School of Medicine in St. Louis, Missouri. The prion concept helps integrate much of what's known about neurodegenerative diseases, Diamond says. “The reason it's catching on is that it makes a lot of sense.” Like a growing number of researchers, Diamond thinks the prion concept may not only help researchers gain a better understanding of neurodegenerative diseases but also point to treatment strategies they might not have considered otherwise.

## Killer proteins

The high prevalence of kuru in the Fore people is one of the great medical mystery stories of all time. The disease spread in a manner that suggested infection, yet it caused no fever or other inflammatory response. Gajdusek won the Nobel for his work suggesting that kuru was transmitted by cannibalism practiced as part of funeral rites among the Fore. But the infectious agent remained a puzzle. In laboratory experiments with infected brain tissue, the infectious agent survived heat, chemicals, and ultraviolet light that destroy the infectivity of viruses and bacteria. In the early 1980s, Stanley Prusiner of the University of California (UC), San Francisco, proposed that proteins could be the infectious agent. It was a radical notion: All infectious agents known at the time contained DNA or RNA, the genetic blueprints for replication. But Prusiner proposed that infectious proteins, or prions, spread disease not by replicating themselves but by encouraging other proteins to undergo a conformational change. He won the 1997 Nobel Prize (some thought prematurely) for work supporting the prion hypothesis (Science, 10 October 1997, p. 214). Prusiner's theory explained the kuru puzzle, but both Gajdusek and Prusiner were interested in applying the idea to a variety of other disorders.
After all, autopsy studies commonly found suspicious clumps of protein in the brains of people who died of Alzheimer's, Parkinson's, and other neurodegenerative diseases. As early as the 1960s, Gajdusek tried injecting extracts of brain tissue from Alzheimer's patients into monkeys and chimps. But these efforts, and later attempts by other researchers, yielded inconsistent results. Disease can develop decades after exposure to prions in humans, and researchers had to wait years to see whether experiments in primates had any effect, says Walker. Enter the transgenic mouse: In a study published in 2000 in The Journal of Neuroscience, Walker and colleagues injected extracts from the brains of Alzheimer's patients into genetically engineered mice susceptible to the disease (normal mice are not susceptible). They injected one side of the brain in each animal. Within a few months, the mice developed widespread plaques made up of β-amyloid peptide, a hallmark of Alzheimer's disease, on the injected side of the brain. That indicated that something in the brain extracts can seed plaque formation, although whether the seed is β-amyloid peptide itself remained unclear. More recent work led by Walker and Mathias Jucker at the University of Tübingen in Germany bolsters the case that β-amyloid is the culprit. In one experiment, the researchers found that brain extracts treated with antibodies to remove β-amyloid did not seed aggregation of β-amyloid when injected into mice (Science, 22 September 2006, p. 1781). And in the 4 August issue of the Proceedings of the National Academy of Sciences (PNAS), they reported that stainless steel wires coated with brain extract and then heated to kill microbes still caused β-amyloid deposits to form when implanted into the brains of mice. After 6 months, deposits had spread to neighboring brain regions. To Walker and others, such findings suggest that β-amyloid can induce deposits to form and spread through the brain—much as prions do. 
Walker says his group is working to create synthetic β-amyloid for a more definitive experiment: If a synthetic peptide can seed plaques, that should rule out the possibility that a microbe or some other factor in the brain extracts is to blame. Other researchers have been finding similar hints of prionlike behavior in other proteins associated with neurodegenerative disorders. Diamond and colleagues have found that aggregates of misfolded tau, a protein that forms pathological tangles in the brains of people with Alzheimer's disease and frontotemporal dementia, can be taken up by cultured mouse cells. Then, once inside the cells, the misfolded tau appears to encourage normally folded tau to misfold and aggregate, they reported 8 May in The Journal of Biological Chemistry. In July, European researchers reported similar findings in vivo: Injecting brain extracts containing misfolded tau into the brains of mice triggered tau misfolding and aggregation that spread from the injection site to nearby brain regions, they reported in Nature Cell Biology. Another suspect protein, α-synuclein, the main component of the “Lewy bodies” found in the brains of people with Parkinson's disease and certain types of dementia, also appears to propagate from cell to cell. In the 4 August issue of PNAS, researchers led by Eliezer Masliah of UC San Diego and Seung-Jae Lee of Konkuk University in Seoul reported that rogue aggregates of α-synuclein can pass from cell to cell and spur the formation of Lewy body–like aggregates in cultured human neurons. Experiments with cultured rat and mouse cells, reported in the same paper, suggested that α-synuclein triggers cell death in neurons and neural stem cells. “Cells that take it up form new aggregates, and they get sick and eventually die,” Masliah says.
If α-synuclein spreads from neuron to neuron in the intact human brain, that might explain findings from two research groups that reported last year that fetal cells transplanted into the brains of Parkinson's patients contained deposits of α-synuclein—something that's unheard of in such young cells, the oldest of which had survived for 16 years before the patient died. (A third team found no pathology in transplanted cells.) The findings surprised many researchers who had assumed that deposits build up inside cells over many decades and don't jump from cell to cell. Cell-to-cell transmission of α-synuclein wouldn't necessarily doom stem cell therapies for Parkinson's disease, but it may present yet another obstacle, Masliah says. “We'd like to engineer those fetal cells to be resistant to the aggregates,” he says. One possibility, he suggests, would be to engineer them to overexpress enzymes that can break down aggregates.

## Is it contagious?

The list goes on. Misfolded huntingtin protein, the culprit in Huntington's disease, can find its way from the extracellular fluid to the inside of cultured cells and trigger aggregation, according to a report by Stanford University cell biologist Ron Kopito and colleagues in the February issue of Nature Cell Biology. And at the Alzforum seminar, Neil Cashman of the University of British Columbia, Vancouver, in Canada described unpublished findings from his group that hint at prionlike behavior in SOD1, a protein thought to be central to neurodegeneration in amyotrophic lateral sclerosis. “We're getting a lot of hints from a lot of diseases,” Kopito says.
“Together, it adds up to an emerging picture that deserves some pretty close attention.” These recent studies “expand the prion concept to other proteins … [and] show that under certain conditions the process of protein aggregation can be transmissible” from cell to cell, says Claudio Soto, a molecular biologist who studies neurodegenerative disease at the University of Texas Medical School at Houston. “What remains to be seen is whether or not this occurs in real life,” Soto says. So far there's virtually no evidence that proteins other than prions can transmit disease from one individual to another, notes Adriano Aguzzi, a prion researcher at the University Hospital of Zurich in Switzerland. One exception, Aguzzi says, may be amyloid A amyloidosis, a protein misfolding disorder that affects the spleen, liver, and other organs. Japanese researchers reported in PNAS in 2008 that misfolded amyloid A can be transmitted from one captive cheetah to another via feces. (The disease is a major cause of illness and death in these endangered cats.) A 2007 paper in PNAS suggested that foie gras prepared from duck or goose liver can transmit amyloidosis when fed to mice. Most researchers say it's unlikely that diseases like Alzheimer's and Parkinson's are contagious in the usual sense of the word. “I think what's special about prion diseases is that prions are indestructible,” says Walker. “There's practically nothing you can do to get rid of them within the realm of what we consider normal sterilization.” Most protein aggregates are more fragile, which may limit their ability to jump from one person (or animal) to another. All the same, Walker says the issue merits closer study. His experiments with the stainless steel wires, he notes, suggest at least a theoretical possibility that surgical instruments could transmit the disease. 
## Window of opportunity

Even if most neurodegenerative diseases don't spread from individual to individual like true prion diseases do, the possibility that they may spread from cell to cell in an analogous way opens up new options for treating them, say some researchers. If aggregates of tau jump from cell to cell to spread disease instead of building up slowly inside cells, for example, that presents an opportunity to cut them off with antibodies or other molecules that can't get inside cells, says Diamond. His group has been designing antibodies that specifically target misfolded forms of tau. Cashman's group has been taking a similar approach for SOD1. Both presented promising preliminary results from animal experiments at the Alzforum seminar. Another approach is to use small molecules designed to latch on to specific parts of a protein and prevent it from misfolding, says Jeffrey Kelly, a biochemist at The Scripps Research Institute in San Diego, California. In July, FoldRx Pharmaceuticals, a company Kelly cofounded, announced encouraging results from a phase II/III clinical trial of a compound that prevents protein misfolding and aggregation in people with a rare but fatal disease called transthyretin amyloid polyneuropathy. The disease affects the peripheral nerves, causing loss of function in the hands and feet, before spreading to the autonomic nervous system, which regulates digestion and other essential functions. Untreated, the disease causes drastic weight loss, but patients who took the drug for 18 months reversed course and started gaining weight, Kelly says. That suggests that the drug slows the disease's impact on the autonomic nervous system, Kelly says. “We're pretty excited about this, and I think it will energize efforts on other amyloid diseases that focus on preventing this process.”

11. Neurodegeneration

# Acting Like a Prion Isn't Always Bad

1. Greg Miller

If misfolded proteins are so dangerous, why hasn't evolution selected against them?
One possibility is that it hasn't had to: Many of the diseases caused by protein misfolding strike late in life, after the reproductive years are over. It's also possible that this type of protein folding isn't always bad. In mad cow disease and related disorders, misfolded proteins called prions cause normal proteins to misfold and clump together, spreading havoc through the nervous system. Virtually any protein can misbehave like this when the conditions are right, and some researchers now suspect that prionlike mechanisms of protein misfolding and propagation may underlie a wide range of neurodegenerative disorders (see main text). But if this type of behavior in proteins is so dangerous, why hasn't evolution selected against it? One possibility is that it hasn't had to: Many of these diseases strike late in life, after the reproductive years are over. It's also possible that this type of protein folding isn't always bad, says Adriano Aguzzi, a prion researcher at the University Hospital of Zurich in Switzerland. “Having a protein that can exist in an on-and-off state where the on state is infectious is a wonderful way of transmitting information,” Aguzzi says. “Nature would be very stupid if it didn't utilize this system in order to solve specific problems during evolution.” Indeed, recent studies suggest that proteins that behave like prions play important roles in a wide variety of normal biological functions in organisms ranging from bacteria to humans. In some bacteria, prionlike proteins create a fibrous matrix that helps cells adhere to surfaces and stick together to form colonies. In some fungi that live in wet environments, such proteins form a film that reduces the surface tension at the water's surface, enabling spores or fruiting bodies to form. In insects they help strengthen eggshells and may lend strength to spiders' silk. 
In 2006, researchers led by biochemist Jeffrey Kelly of The Scripps Research Institute in San Diego, California, reported in PLoS Biology that in bovine cells a protein called PMel17 forms self-propagating aggregates that play a role in the synthesis of melanin, a pigment in the skin and eyes that protects against ultraviolet rays. More recently, researchers led by structural biologist Roland Riek of the Swiss Federal Institute of Technology in Zurich reported that several peptide and protein hormones pack into prionlike aggregates inside secretory granules in endocrine cells from several species, including humans (Science, 17 July, p. 328). These aggregates have several advantages as a storage system: They are stable, densely packed, and exclude other proteins, helping to keep the granules' contents pure. Even in the nervous system, prionlike proteins may have beneficial roles. In 2003, neuroscientist Eric Kandel of Columbia University and colleagues reported in Cell that a protein with prionlike properties plays a role in long-term memory in the sea slug Aplysia californica. At October's meeting of the Society for Neuroscience in Chicago, Illinois, Kandel described new work extending these findings to mice and bolstering his group's argument that self-propagating aggregates of these proteins may be involved in “tagging” specific synapses for strengthening when a long-term memory is created. “Prions and prionlike phenomena are much more common than we realized,” Aguzzi says. “These things that are cropping up now are the tip of the iceberg.”

12. Fishery Management

# Can Science Keep Alaska's Bering Sea Pollock Fishery Healthy?

1. Virginia Morell

The pollock fishery in the chill waters of the eastern Bering Sea is said to be the best managed in the world. But a surprising decline in numbers has scientists worried.

Every January along the continental shelf in the eastern Bering Sea, a great mass of spawning, olive-green fish surge through the nutrient-rich waters.
These are walleye pollock (Theragra chalcogramma), social fish that tip the scales at 700 grams when mature. The pollock spawn in waters north of the Aleutian Islands, where the bounty serves as food for marine mammals, seabirds, fish—and humans: The eastern Bering Sea pollock fishery is the largest and most lucrative in North America. Each year it brings in $1 billion and supplies millions of meals in the form of fish sticks, fast-food fish fillets, imitation crabmeat, and roe. The fishery is remarkable not only for its size but also, to date, for its sustainability: It's certified as sustainable by the London-based Marine Stewardship Council, and catch limits are recommended by scientists who judge the state of the fishery with surveys and state-of-the-art models; there's even a major ecosystem study funded by the U.S. National Science Foundation (NSF). This year's data are sparking concern, however. Previous predictions of a sizable uptick in pollock numbers weren't borne out by recent surveys. Instead, there are dramatically fewer pollock than scientists had estimated just a year ago. The stock is at its lowest level since 1980, and Greenpeace has put the fishery on its red list of unsustainable harvests. Last week in Seattle, Washington, responding to the lower numbers, scientists advising the North Pacific Fishery Management Council (NPFMC) recommended that the catch, rather than being raised from this year's low as expected, stay low again in 2010. Given the uncertainty, some argue that the harvest should be cut even further. The council itself will vote on the recommendation next week. “This time last year we said the stock was going up,” says marine biologist Lowell Fritz of the Alaska Fisheries Science Center (AFSC) in Seattle, who argued for a lower quota. “But it didn't. 
And that is cause for concern.” Fritz and others say that the revised estimates and dwindling numbers raise questions about how well-managed the fishery really is, and whether the researchers' reams of data and calculations can produce what all parties want: a long-term sustainable fishery in a healthy ecosystem. “It's tricky,” says Douglas DeMaster, science and research director of AFSC, who's based in Juneau. “How far can we knock down a single species before impacting the ecosystem? We don't know yet.” But even though the fishery is at a low point, “it is not overfished,” he says. “And we're working hard to make sure it never is.”

## The big haul

Pollock are found across the North Pacific from Puget Sound to the Sea of Japan. But they are especially abundant in the waters of the Bering Sea's continental shelf. Once regarded as commercially worthless, pollock gained value after Japanese trawlers developed a process for reducing its white meat into a protein paste called surimi. And after the North Atlantic cod fishery collapsed in the 1990s, pollock fishing surged. The Bering Sea pollock fishery is now the world's largest single-species fishery, averaging more than 1 million metric tons annually. But pollock are not immune to overfishing: Other formerly abundant pollock fisheries in the region, including an exceptionally rich one called the Donut Hole (see graph, p. 1341), were heavily fished in the 1980s and 1990s and have never recovered. In the eastern Bering Sea, fishing boats hauled up an average of 1.33 million metric tons of pollock each year between 2001 and 2007. In 2007, however, surveys showed that pollock numbers were down. So scientists recommended—and fishers abided by—a reduced quota of 1 million metric tons in 2008. NPFMC reduced the 2009 catch by another 18% to 815,000 metric tons. But the scientists' models predicted better news ahead, and last fall they estimated that in 2010 the stock could sustain a catch close to previous levels.
To assess the health of the pollock population, AFSC researchers gather data about the sex, size, weight, and condition of the fish from scientist-observers aboard the fishing vessels and take abundance data from annual bottom-trawl and acoustic midwater-trawl surveys. The scientists track the fish in age classes. Pollock reproduce prolifically, live nearly 11 years, and are mostly fished beginning at age 4, as many fishing vessels selectively target areas preferred by older, larger, and more valuable fish. So when the next season opens in January 2010, fish that hatched in 2006 will be included in the catch. And it is this 2006 class that has surprised and disappointed scientists. As 1-year-olds and 2-year-olds, this age group appeared particularly plentiful. “It looked like an above-average class,” says James Ianelli, an AFSC fisheries biologist in Seattle who heads the modeling team and is lead author of the pollock assessment report released on 17 November. But this year's bottom-trawl survey found fewer fish, and the midwater acoustic trawl was even worse, down 30% from last year's estimate. “There were fewer 3-year-olds than our model predicted,” says Ianelli, and older fish were largely absent. Why were 3-year-old pollock relatively scarce? “It could be they had poor survival rates as 1-year-olds, or maybe they are staying more toward the bottom. We've also had 4 years of record-cold bottom temperatures,” which could affect the fish's distribution, says Ianelli. Whatever the reason, the discrepancy between the previous and the most recent surveys, coupled with the continuing decline, has set off alarm bells. “It surprised people because we [scientists] almost always get the trend right,” says Fritz. “But we really missed this one; we were off by 30%. And that makes me think we're in new territory.” In November 2008, scientists had estimated that the population could sustain a catch of 1.23 million metric tons in 2010. 
But last week the 15 members of the NPFMC Bering Sea Groundfish Plan Team, after much debate, voted instead for a catch of 813,000 metric tons. The vote was split, with seven scientists recommending a further cut to 738,000 metric tons.

## Driving the system

Despite the lower number, Ianelli and DeMaster say the fishery remains healthy. There's enough of a buffer built into the model to assure that the spawning stock never drops below 20% of its estimated unfished numbers, says DeMaster. “It's a conservative approach,” he says, meant to ensure that enough young fish will be produced every year to replace those caught. This year's stock is at 27%. To better understand the pollock's cycles, and what factors in the ecosystem affect them, scientists are busy incorporating more data about the ecosystem—on ocean temperature, zooplankton production, pollock predators, and climate—into their models. “We're in the third year of our Bering Sea Project,” says marine biologist Mike Sigler at AFSC in Juneau, referring to a 6-year, $52 million NSF and North Pacific Research Board–funded study of the eastern Bering Sea's ecosystem. “It's already helping us understand these changes in pollock.” Researchers have found that many poor age-class years seem to be tied to less sea ice and warmer ocean waters. “That's the pattern for the [pollock] classes from 2001 to 2005,” says Sigler. However, the drop in the 2006 class remains puzzling, because there was plenty of sea ice and colder water that year. Sigler hopes to have an explanation by 2012, when the project ends. Regardless of the cause, the spawning mass has declined, and AFSC's harvest rules have in turn limited the catch, says fisheries scientist Steven Martell of the University of British Columbia, Vancouver, in Canada. “There's been a 45% reduction in catches over the last 4 years,” he says.
But if those reductions aren't sufficient and the stock doesn't recover as projected in the next few years, he warns that the fishery “will certainly be in trouble” and could be closed. Others think that the catch should be reduced even further now. “The pollock fishery is the most valuable fishery in the U.S.,” says Jeremy Jackson, a marine ecologist at the Scripps Institution of Oceanography in San Diego, California. “More is known about it than any other fishery in the world. Yet despite all the wonderful data and fancy models, they've failed to protect the pollock or the Bering Sea ecosystem. We need to call ‘Time out!’” Jackson says that although including ecosystem effects in pollock management is worthwhile, the researchers also need to look at “the effects of pollock fishing on the ecosystem.” He notes that pollock is a staple for northern fur seals and endangered Steller sea lions, both of which are struggling (Science, 4 April 2008, p. 44). Critics say worries about the fishery stem from a fundamental issue: It's managed primarily to get the maximum sustainable yield from a single species. “It can't make a legitimate claim to [being] an ecosystem-based fishery, as long as it maintains this single-species focus,” says Timothy Ragen, director of the Marine Mammal Commission in Bethesda, Maryland. Jackson and others say that when faced with questionable data, as scientists were this year, it would be better to reduce the quota even further. Fritz says he tried to persuade the Plan Team to do that but failed. “Can we recognize the danger signals and react appropriately and in time, if the fishery is really in trouble?” he asks. But the fishery is required by law to also consider the socioeconomic effects of its decisions. The pollock fishery is one of Alaska's largest employers, and former Alaska Senator Ted Stevens once brought the entire U.S. government to a halt to protest (and eventually overturn) restrictions upon it. 
“There's a lot of policy in this process,” says DeMaster. “It's not entirely science.” Still, many say that the pollock fishery continues to be one of the best-managed in the world, largely because the Fishery Council, unlike those governing some other big fisheries, follows scientists' guidance. “The North Pacific Fishery Council relies the most on science,” says fisheries biologist Daniel Pauly of the University of British Columbia, Vancouver. That's “in stark contrast” to other councils or governing bodies such as the European Commission, he says, where fisheries sometimes ignore scientific advice and adopt high quotas, “and the stocks [such as bluefin tuna] suffer accordingly.” So when fishing-fleet representatives and others gather in Anchorage next week at the Fishery Council's meeting to set next year's quotas, the scientists' recommendations likely will be adopted. “We know the scientists are concerned,” says Donna Parker of the Seattle-based fishing firm Arctic Storm Management Group. “They treat our fishery like a cultivated field, and we expect they will manage it well into the future.”

13. Particle Physics

# Seeking a Shortcut to the High-Energy Frontier

1. Adrian Cho

An accelerator that smashes exotic particles called muons promises more bang from a smaller accelerator—if physicists can actually build it.

BATAVIA, ILLINOIS—When you fall behind, you need a comeback plan, and physicists here at the Fermi National Accelerator Laboratory (Fermilab) think they have a dandy. They're losing their title as keepers of the world's highest-energy particle smasher, but they have an idea for a wild new one that might vault them back to the energy frontier. They're hoping the U.S. Department of Energy (DOE) will give them enough money to find out whether their idea is a dream machine—or a technological nightmare.
For 24 years, Fermilab's Tevatron collider has held the energy record for particle collisions, firing protons into antiprotons at a maximum of 2 tera–electron volts (TeV). But researchers at the European particle physics lab, CERN, near Geneva, Switzerland, are finally revving up the 27-kilometer-long, $5.5 billion Large Hadron Collider (LHC), which aims to blast protons into other protons at 14 TeV. With the Tevatron facing obsolescence, Fermilab physicists hope to build a beast called a muon collider. The new machine, the topic of a workshop here last month, would smash muons, which are heavier cousins of electrons, into antimuons. In principle, it could reach energies as high as rivals already in the planning stages—the 30-kilometer-long straight-shot International Linear Collider (ILC) that would fire electrons into positrons and a higher-energy electron-positron collider called the Compact Linear Collider (CLIC) being developed at CERN. But a muon collider would be much smaller. As cost scales with size, it could also be cheaper than the other machines. That's if a muon collider can be built. Unlike electrons or protons, muons are radioactive. So the facility would have to generate the particles, accelerate them, and smash them together in the fraction of a second before they decay. Physicists would also have to protect their equipment from the intense radiation emanating from the muons and limit the amount flowing out of the lab. Interest in the exotic machine has grown because the more conventional plans hatched by U.S. physicists have stalled. Three years ago, they were hoping to host the ILC, with construction to start as early as 2012. But DOE officials blanched when researchers estimated that it would cost at least $7 billion (Science, 9 February 2007, p. 746). DOE officials now think the price of the ILC could top $20 billion, including inflation and contingency, and have said the project cannot be realized until the mid-2020s.
That delay has created an opening for supporters of the muon collider, and they seem to have DOE's ear. At an advisory panel meeting in October, William Brinkman, head of DOE's Office of Science, told Science: “I'd like to see Fermilab do something with a muon accelerator. That would be something novel, rather than spending time beating our brains out building the next biggest accelerator.” But is a muon collider a machine impossible? “The problems range from hard to very hard to ultrahard,” says Fermilab Director Pier Oddone. Researchers have requested $16 million per year over 5 years just to determine if a muon collider can be built, he says. Even that may be asking too much, says Daniel Schulte, an accelerator physicist at CERN. “That's—oh God, how do I put this?—an ambitious goal.”

## The best of both worlds

Particle colliders generally come in two types: those that smash protons or antiprotons and those that smash electrons and positrons. A muon collider might combine the advantages of proton and electron machines. For revving particle beams to the highest energies, nothing beats a proton accelerator. At 14 TeV, the LHC will blast out massive new particles or even open new dimensions of space, or so researchers hope. However, as with any proton collider, the LHC will reveal those things in a messy way. Protons contain other particles called quarks and gluons. So when one proton tears through another, debris flies every which way. “It's like two cans of Campbell's soup,” says Vladimir Shiltsev, an accelerator physicist at Fermilab. “You collide them and soup splashes everywhere. But God knows what actually happened.” Within the mess, typically only one gluon scores a direct hit on another, so only 1/6 to 1/10 of the protons' energy goes to make new particles. In contrast, electrons and positrons have no internal parts. So they make clean collisions in which all the energy can go into making new particles.
That's why physicists say that the logical successor to the LHC is an electron-positron collider. The ILC would generate collisions at 0.5 TeV, which might be enough to map the terrain the LHC will open. If not, the 40-kilometer-long CLIC would make 3-TeV collisions by using a lower-energy beam to drive a higher-energy one. There is a catch, however. Because every action has a reaction, charged particles radiate when their paths are forced to curve. Responding readily to a sideways shove, lightweight electrons and positrons give off copious x-rays that sap their energy and prevent them from reaching TeV-level energies in a circular accelerator, or “synchrotron.” So both the ILC and CLIC would use two huge linear accelerators facing each other. That's an inefficient arrangement, as the electrons and positrons collide only once instead of repeatedly, as happens in a circular machine. Muons share the better features of protons and electrons. Like protons, they are heavy, weighing 207 times as much as electrons. So they radiate little energy as their paths bend and could reach high energy in a relatively small synchrotron. Like electrons, however, muons have no parts, so their clean collisions permit all the energy to go into new particles. “Basically, we can do very efficient acceleration and reach a higher energy with a much smaller machine than any electron collider or proton collider,” Shiltsev says. A muon collider would fit comfortably on Fermilab's 2750-hectare campus, he says. The machine might also fit into Fermilab's other plans. Sometime in the next decade, lab officials hope to build a proton source, known as Project X, to generate neutrinos and study their interactions and pursue other subjects on the “intensity frontier” (Science, 31 August 2007, p. 1155). A muon collider would also need a proton source like Project X. ## Challenges galore A muon collider poses daunting challenges, however. 
Physicists must generate the muons and antimuons by blasting protons into a metal target. They must gather the particles into bunches and “cool” them so that they nestle together. The bunches of muons and bunches of antimuons must then pass through a series of accelerators and into a final synchrotron, where they would circulate in opposite directions and collide. All of this must happen before the muons decay. If a muon zings along at 1.5 TeV, the time dilation of special relativity stretches its lifetime to 30 milliseconds—up from 2 microseconds when it's still. That's time enough for 500 circuits in the final ring, says Michael Zisman of Lawrence Berkeley National Laboratory in California, if everything goes smoothly. Accelerator physicists' biggest challenge will be cooling the muons. Cooling reduces the relative motion of particles within a bunch, and because electrons radiate so easily, physicists can cool them by simply sending them around a synchrotron for a while. As the electrons in a bunch radiate, the bunch contracts like a balloon shoved into a refrigerator. Muons radiate very little, so that approach won't work for them. Instead, physicists are considering a scheme called ionization cooling. In it, the muons would run alternately through liquid hydrogen, to slow their motion in all directions, and through chambers filled with radio waves, to speed them up in just one direction. An incredibly high magnetic field—50 tesla, or 1 million times Earth's field—is needed to keep the muons from flying away. “The techniques we need are beyond state-of-the-art,” says Fermilab's Stephen Geer. Researchers in the United Kingdom are currently trying to demonstrate the technique in small-scale bench tests. Radiation will also cause headaches for accelerator designers. Muons decay into electrons and neutrinos, and the electrons would convey enough power to overheat the superconducting magnets that would guide the muons around the ring. 
So researchers are considering novel designs with slots to let the radiation out. The radiation would also generate particle “backgrounds” that might overwhelm the desired signals from the collisions. Simulations suggest that 100,000 neutrons and 10 million photons would flood each square centimeter of a particle detector during each bunch crossing, Fermilab's Marcel Demarteau said at the workshop. All that extraneous particle pollution might render the muon collider irrelevant as a follow-up to the LHC, says Barry Barish, a physicist at the California Institute of Technology in Pasadena, who leads the ILC design team: “The whole point of going to electron-positron collisions is to get an environment that's clean enough to do all the things you can't do at the LHC,” he says. Perhaps most problematically, the muon collider will send radiation beyond the lab's boundary. Neutrinos hardly interact with matter, so those from the muon decays would pierce Earth in all horizontal directions and emerge from its curved surface tens of kilometers away. There they would be numerous enough to knock loose other particles and create interacting radiation. Scientists say they can keep such radiation at safe and legal levels by building the collider 200 meters underground and limiting the muons in the beam. Convincing the public, however, may be more difficult. ## Which collider, when? Such obstacles led physicists to write off a muon collider in the 1990s. But recent advances have prompted a reconsideration. For example, physicists once lacked a suitable source of muons, but researchers have recently demonstrated a mercury target that can provide enough of them, says Fermilab's Nikolai Mokhov. All agree that before any decision can be made on the next collider, the LHC has to reveal at what energy new particles will emerge, which will take 3 or 4 years. If the answer is in the neighborhood of 0.5 TeV, then the ILC would be the way to go, says Fermilab's Oddone. 
If the range is higher, then the options would be a muon collider or CLIC. “If we need a higher-energy machine, we want to have put the muon collider into contention” by that time, Oddone says. But even that goal strikes some as optimistic. Jean-Pierre Delahaye, an accelerator physicist at CERN, says researchers plan to publish a report next year that shows that CLIC is feasible. That is 24 years after they came up with the idea. “I would bet a bottle of champagne—a case—that [a feasibility study] will take at least as long for a muon collider,” he says. If so, 5 years won't be nearly long enough to tell whether the idea of a muon collider is too good to be true.
2018-09-26 09:31:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2263721525669098, "perplexity": 4075.7364758609006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267164469.99/warc/CC-MAIN-20180926081614-20180926102014-00485.warc.gz"}
http://www.thisisthegreenroom.com/2009/deconstructing-the-gaussian-copula-part-ii/
Deconstructing the Gaussian copula, part II July 9, 2009 in (Parts I II and a half and III of this series are also available.) Recently, I addressed a great deal of misinformation regarding the Gaussian copula and it's role in the 2008 crisis. I would like to try and follow that up with a succinct description of the copula and its use in CDO pricing. (This may seem a defense of the math behind the process, but you know I'm just setting it up for a fall.) Introduction David Li's contribution to quantitative finance was the rapidly-standardized "single factor Gaussian copula" CDO pricing framework. The real crux of the problem was the "single factor" part - not the Gaussian copula itself (though we won't pull any punches here). In an extraordinarily broad sense, a copula is a mathematical function that describes how two or more random variables interact. "Correlation" is a simple way of describing the copula, which should give the function some intuitive grounding. But let's back up a second and figure out why we even need a copula in the first place. Aside: Why Copulas? If you try to model the behavior of many random variables, you need a multivariate distribution. The most mathematically friendly distributions are from the Gaussian family, including the familiar bell (or normal) curve. This is why such models are prevalent in all manners of statistics. For most purposes, the model is not only easy to work with but asymptotically correct (which is a nice feature, to put it mildly). However, there are some areas where the model choice is more for pragmatic reasons than justified ones - finance being prime among them. Indeed, financial distributions do not behave normally, but only recently have tools been developed that can describe them - and even there large joint distributions are daunting. 
So, it is unsurprising that the Gaussian copula arose as a natural choice for modeling the joint distribution inherent to CDOs - which are essentially just collections of many intercorrelated credits. But I'm getting ahead of myself. (This is much easier to discuss than to write about, I think, because you can guage your audience's comfort which each boldfaced section before moving on. I hope, brave reader, that you are still there.) Lets talk about CDOs. CDOs A CDO is nothing more than a collection of various bonds, all held together in a basket. The principal risk of a CDO is default: the chance that one or more of the bonds will not survive to maturity. To isolate this risk, it is instructive to think of the CDO as a basket of sold CDS contracts, rather than a basket of purchased bonds (and indeed, "synthetic CDOs" are nothing more than CDS portfolios and have rapidly gained market share from bond portfolios). Thus, the buyer of a CDO needs to draw two conclusions regarding the basket: 1. Will any of the credits default? 2. When will all of those defaults occur? The first point is obvious; the second gets at the heart of the problem. Both the timing and the correlation of defaults matter. If the CDO basket is comprised disproportionately of financial companies, then default by one may imply a greater likelihood of default for the others; a more diversified basket may not exhibit such dependencies. This issue is compounded by the introduction of tranches - a staple of the CDO industry. Again, it is helpful to consider a CDO as a basket of sold CDS. The most junior (or "equity") tranche has, by definition, sold insurance on the first few issuers to default - say, the first 3. The next tranche does not experience a loss until the 4th issuer defaults. The key here is that when a portfolio is tranched, investors have not sold CDS on specific issuers by name, but rather by time of default. 
They can not know ahead of time which issuers they are effectively responsible or on the hook for. Bathtub Correlation To understand why tranching compounds the correlation problem, think of the CDO as a rectangular bathtub interspaced with mines that represent each issuer's default. The CDO investors are aboard a boat on one side of the bathtub, and need to cross to the other side. If the boat hits a mine, that issuer defaults, and the explosion of the mine will damage the boat. The equity tranche has an extremely thin hull and will sink quickly; the senior tranche has a thick hull and can withstand many blasts without taking damage. Finally, the boat moves across the bathtub via geometric brownian motion - which is to say, randomly. In a low-correlation world, the mines are dispersed uniform randomly across the bathtub; hitting one mine does not imply or necessitate hitting any other. With high correlation, the mines cluster somewhere in the water; hitting one mine makes it relatively certain that another will be hit. As a consequence, equity investors prefer high correlation. They are indifferent to hitting just a few mines or many, as they are wiped out in both situations. Therefore, they prefer the mines to be clustered, as this leaves more clear paths across the bathtub. In contrast, senior investors prefer low correlation - they can withstand glancing off a few mines, but hitting a cluster would wipe them out. From this intuitive example, it should be clear that not only the timing of the defaults, but also their expected clustering (i.e. correlation) is important when valuing a CDO tranche. Correlation in the Guassian Copula Let us first draw the connection I've sketched out already: CDOs are composed of many issuers that may interact with each other; and a multivarite normal distribution is a common method of describing such behavior. So far, so good. 
Like any Gaussian multivariate model, the Gaussian copula takes as parameters the correlation of every pair of variables under consideration. (In other words, to make the model work, you need to "explain" to it how every issuer interacts with every other issuer - these are the parameters.) Thus, the number of parameters increases with the square of the number of variables being considered - specifically, there are $\frac{N(N-1)}{2}$ parameters. If you had a CDO of 100 names, you would need to compute 4,950 parameters to describe their behavior! It doesn't take a statistical degree to appreciate the flimsiness of a model which relies on such assumptions - it's just too many to estimate reliably. Clearly, the traditional model simply won't do. Enter David Li, whose principal contribution to this field is to boil 4,950 parameters down to just one. Shocking! Dastardly! The decision that caused the 2008 crisis! Well, not really. Though I am full of doubts about the validity of the Gaussian copula for this task in the first place, I do not think that the compression of its parameter space is the chief culprit by any means. What Li was suggesting amounted to this: instead of modeling the intricate inter-corporate correlation structure, in which financials are highly correlated to each other but bear little semblance to utilities, which themselves are very similar, he said why not just model everything at the average correlation of the CDO names? Actually, he just said that one correlation level will be enough to describe the CDO price - he did not say it was the average (I just added that to make the notion more tolerable at first glance). He didn't care if you chose a higher or lower correlation than any pair in the whole CDO exhibited; his claim was that there was some single number that would get the model to output a price that matched the market. Before we get up in arms about this let's remember that most financial instruments are priced this way. 
One or more variables of the equation are left free to change, such that for some level the model will output the "correct" (or market-observed) price. With options, this is called volatility; with swaps this is the fixed rate; with bonds this is the yield - I particularly like the last example because most people assume this is limited to derivatives. It's not, "real" securities exhibit this problem too --  for stocks, it's called a P/E ratio. So, we've boiled correlation down to one parameter which can take any value, but forces all issuers to have the same correlation to each other AND (this is a much more important caveat) exhibit a Gaussian dependance structure. Now What? This Is Getting Boring. Ok, let's price a CDO. If I have CDS prices for all the issuers in my CDO, I can back out the probability of each issuer defaulting. (That's a whole other lecture, but please take my word that if we have the price of default insurance, we can calculate the probability of default. Otherwise I'll go on for another 2000 words...) This answers my first question: will defaults occur? Combine that with a correlation number and I can answer the second question: when will all the defaults occur? So now I can price the CDO, right? Unfortunately, no. The default probabilities backed out of the CDS data are conditional default probabilities, meaning they have the market's 4,950 correlation factors baked into them. Company A may be doing fine, but it's very correlated to company B which is not so healthy. The result is that company A's CDS will exhibit a relatively high default probability even though that's more B's fault than A's. In statistics, we like to deal with independent or unconditional probabilities, because the math becomes dramatically easier. So the conditional probabilities extracted from the CDS are not so useful, and must be transformed into independent probabilities. 
To achieve this goal, we do something that I think is very clever: We set up a model in which defaults are driven by a shared "market factor" and an idiosyncratic factor, similar to a regression with one dependent variable and an error term, hence the name "single factor model." Now, I know I just said there are two factors, but one is specific to each individual issuer, so it doesn't count as one of the model factors -- if this troubles you, chalk it up to statistical nuance. Anyway, the two drivers are weighted by a correlation term; as correlation increases the market factor dominates, and as it decreases the idiosyncratic factor dominates. Now, suppose for a moment we knew the value of the [random] market factor. In this case, default would be driven solely by the idiosyncratic factor (since the market factor is fixed, and we have chosen it such that all names either are - or are not - in default). The idiosyncratic factor is, by definition, independent across all issuers. Therefore, we have artificially created a scenario in which defaults are independent for each issuer by conditioning the market factor on a certain level. More specifically, we have generated a set of conditionally-independent default probabilities. Now, repeat the process for every issuer and every market factor level. The result is a complete picture of how every issuer behaves in every possible situation. From this, the unconditionally independent probabilities can be extracted. (If that isn't quite clear, suffice to say there's a bit of math behind it. Interestingly, the math is surprisingly simple, but with the exception of the number of factors in a Gaussian model I have promised not to write out any equations in this post, so in the absence of symbols I hope you will accept my reasoning.) So now, we have the probability of every issuer independently defaulting at any given time - with that information, it is relatively straightforward to figure out the expected loss on the portfolio. 
In fact, it's mainly arithmetic at this point: the value of the portfolio is just the probability-weighted average payoff of all the issuers. And that's really it - that's how the Gaussian copula is used to price a CDO, or a collection of sold CDS on many issuers. We calculate the default probabilities from the CDS, then we use the Gaussian copula to tell us how they relate to each other. You'll notice that I never actually mentioned the copula when discussing the probability model - that's because you don't really need it. It happens that the copula math simplifies nicely into something that is almost, but not quite, entirely unlike a copula (hey! a Douglas Adams reference!). However, the copula-based approach is more informative, even if copula-specific math per se doesn't enter the picture. And why is this so bad? A few of the modeling decisions I've described above are unquestionably poor ones, though it may not be obvious how to improve them. Here is my brief rundown: • The Gaussian dependence structure - what's wrong with it? What alternatives are there? Why are they better? • The single factor - is it really sufficient to describe the behavior? • The single correlation number - is it sufficient to describe the behavior? Can we reliably estimate more relationships? Is correlation the right metric in the first place? I'll attempt to answer all these and more in part III...
2016-12-03 02:26:41
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6270695328712463, "perplexity": 973.0357667542725}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540804.14/warc/CC-MAIN-20161202170900-00414-ip-10-31-129-80.ec2.internal.warc.gz"}
https://gharpedia.com/ultrasonic-testing-procedure/
## Ultrasonic Testing Procedure For Concrete Structure Ultrasonic Testing is one of the non-destructive test method based on the transmission of the ultrasonic pulse in the component or materials like concrete, steel, etc. Ultrasonic testing is known as UT or Ultrasonic Pulse Velocity Test or UPV Test. The ultrasonic testing method is based on the use of equipment composed of transducers which produce and receive the ultrasonic wave of 0.01 to 60 MHz. The pulse (wave) depends on the density and the elastic properties of the materials of RCC structure. ###### Ultrasonic Testing for Concrete Courtesy - RK Infra Techno Here is the procedure of ultrasonic testing and general guidelines for assessing the quality of concrete based on the results. It also provides you information about the identification of corrosion prone locations depending on the results of ultrasonic testing and rebound hammer test. ### The Procedure of Ultrasonic Testing for the Compressive Strength of Concrete: Ultrasonic testing consists of measuring the travel time of an ultrasonic pulse or wave of 25 to 60 kHz. The ultrasonic pulse or wave is produced and received by an electro-acoustical transducer. The transducer is held in contact with one surface of the concrete member and receiving the same by a similar transducer in contact with the surface at the other end. The speed of the pulse or wave is the function of the density of the material. It allows the estimation of the porosity and the detection of discontinuities like cracks in the house. Once the distance between two probes (path length) and time of travel is known, it is possible to determine the average pulse velocity by the following equation. Pulse velocity= Distance between the two probes (Path Length)/ Time of travel The higher pulse velocity indicates higher elastic modulus, density and integrity of the concrete. Pulse velocity also depends on the method of propagation and the arrangement of transducers. 
There are three primary ways in which the transducers may be arranged. #### 01. Opposite Faces (Direct Transmission): If one transducer is placed at one end, and the other one is placed exactly at the opposite end, it is the direct method. #### 02. Adjacent Faces (Semi-Direct Transmission): In this method, both transducers are placed on the same surface of the concrete. The receiver receives the ultrasonic pulse coming after striking the molecules of the concrete. #### 03. Same Face (Indirect Transmission): It is mostly used for corners of the concrete members. The maximum pulse energy is transmitted at right angles to the face of the transmitter. The direct transmission method is considered to be the most reliable way. ### The Relation between Pulse Velocity of Ultrasonic Testing and the Compressive Strength of Concrete: The pulse velocity basically does not directly measure compressive strength. The compressive strength of concrete is influenced by many variables. It is influenced by the types of aggregates, age, moisture content, and mix proportion of concrete. It is also influenced by curing of concrete and others factors. The factors affecting the compressive strength have an influence on the pulse velocity. As a result, a statistical study is essential to relate the pulse velocity and the compressive strength. Bad concrete (which has been poorly compacted concrete or, there is segregation of materials, internal cracking, or flaws) has the lower pulse velocity although the same materials and mix proportions are used. Good concrete has high pulse velocity thought the materials and mix proportions may be same. Estimation of the concrete strength can be done by establishing the suitable correlation between the pulse velocity and the compressive strength of concrete specimen made with same mix proportions of materials and in same environmental conditions similar to that of the structure. 
Ultrasonic Testing Graph shows the correlation between the pulse velocity and the compressive strength of the concrete. ###### Ultrasonic Testing Graph The estimated strength may differ from the actual by 20 percent or more. The correlation so obtained may not be applicable for concrete of another grade or made with different types of materials. General Guidelines for Concrete Quality based on the UPV Test Results: Pulse Velocity in Concrete (Km/Sec) Concrete Quality (Grade) > 4.0 Very Good to Excellent 3.5 – 4.0 Good to Very Good, Slight Porosity may Exist. 3.0 – 3.5 Satisfactory but Loss of Integrity is Suspected < 3.0 Poor and Loss of Integrity Exist The table shows the qualitative assessment of concrete based on UPV test result. ### The Realistic Assessment of the Condition of Concrete Surface: The realistic assessment of the condition of the concrete surface can be done by the combination of the results of UPV test and rebound hammer test. Following table identifies the location of corrosion in concrete by combining the results of ultrasonic testing and rebound numbers (rebound Index). #### Identification of Location  According to Readings Interpretations • High UPV value & high rebound number • Not corrosion prone • Medium range UPV values & high rebound numbers • Surface delamination & low quality of surface concrete & corrosion prone • Low UPV & high rebound numbers • Not corrosion prone however to be confirmed by chemical tests, carbonation & pH • Low UPV values & low rebound numbers • Corrosion prone – requires chemical and electrochemical tests. ### Conclusion: If the ultrasonic testing shows satisfactory or poor concrete, it indicates that the strength of concrete is not as per the mix design. Therefore two inferences can be drawn: • The original concrete itself was not having the designed strength due to any reason whatsoever. 
• If the records prove that original strength was as per the designed strength, then there is a loss of strength due to various reasons. Thus strengthening, or rehabilitation is needed depending upon loss of strength, the age of the structure, the importance of structure, future life, the extent of strength increase required, etc. Therefore first carry out non-destructive concrete tests like rebound hammer test which is suggestive of comparatively stronger and weaker areas. The doubtful areas than can be tested with ultrasonic testing or with the combination of tests. The core test would become a must where even the ultrasonic testing suggests poor concrete. ## Material Exhibition Explore the world of materials.
2019-01-22 15:23:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5317683815956116, "perplexity": 1771.9540656144122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583857913.57/warc/CC-MAIN-20190122140606-20190122162606-00591.warc.gz"}
https://physics.stackexchange.com/questions/692438/what-is-ohms-law-for-induced-current
# What is Ohms law for induced current? In this book it is written on pg $$313$$ in the last paragraph that Ohms law i.e. $$R (constant)=\frac{\epsilon_{ind}}{I}$$ is valid for induced current in a circuit. They define $$R$$ to be the sum of the resistance of all the resistive elements part of the circuit, $$I$$ to be the current and $$\epsilon_{ind}$$ to be the induced EMF. I have two doubts related to the meaning of the terms $$I$$ and $$\epsilon_{ind}$$. 1. What current does $$I$$ represents? Does it represent induced or net current through the circuit? Suppose in a circuit there is a battery connected as well, the circuit is kept in a region where its magnetic flux changes. The battery and changing magnetic flux will both produce current. Does $$I$$ represent net current through the circuit i.e. the net sum of currently produced by battery and flux or just induced current? 2. Around what loop is $$\epsilon_{ind}$$ calculated? There can be infinitely many closed lines along which we can calculate $$\epsilon_{ind}$$, then for which loop does $$\epsilon_{ind}$$ corresponds? See the diagram. The two black lines represent curves passing on the surface of a wire, the green and blue lines represent a loop inside the wire, and the red lines represent an uniform magnetic field that is increasing. I can calculate $$\epsilon_{ind}$$ along blue, green and also along the two black loops. But whose $$\epsilon_{ind}$$ is to be used in the formula? Given the loop $$\partial S$$ that is the boundary curve of a simple otherwise arbitrary surface $$\mathcal S$$ and calculate the magnetic flux $$\Phi$$ through this surface defined by $$\Phi = \int_{\mathcal {S}} \mathbf B \cdot d\mathbf{S}$$. According to Faraday's induction there is an emf $$\mathcal V$$ induced along the perimeter$$\partial \mathcal {S}$$ : $$\mathcal V = \oint_{\partial S} \mathbf E \cdot d\mathbf{\ell} = -\frac{\partial \Phi}{dt}$$. 
This $$\mathcal V$$ exists as a contour integral irrespective whether there is any current anywhere but if $$\partial \mathcal S$$ is along an honest to goodness conductor, a metal wire, then this emf will make the charges move inside the wire, so if the wire's resistance is $$R$$ then there will be a current $$I$$ flowing such that $$I=\frac{\mathcal V}{R}$$. So the current will depend not only on its resistance and by the time varying magnetic field but also on the shape of the wire and its disposition relative to the field. Important to note that since the $$\mathbf {B}$$ field is solenoidal, $$\nabla \cdot\mathbf{B}=0$$, the flux $$\Phi$$ itself is independent of the details of the spanning surface, instead geometrically the flux $$\Phi$$ depends only on the boundary curve $$\partial \mathcal {S}$$. With different loops you get different flux and emf. According to the book, there is an AC current in the circuit, induced in the coil II by a coil I. The differential equation for the circuit is: $$L\frac{\partial I}{\partial t} + RI = emf$$ If L (the inductance of the coil) is too low compared to R, the circuit can be considered as basically resistive, and $$emf = RI$$. The current is always all current of the circuit.
https://software.acellera.com/docs/latest/htmd/htmd.simlist.html
# htmd.simlist module

HTMD can handle a large number of simulations. Simulation lists allow you to create a simple list containing all relevant information about the simulations, to later perform any type of analysis. class htmd.simlist.Frame(sim, piece, frame) Bases: object Class used for referencing a specific frame of a Sim object. Parameters • sim (Sim object) – The simulation. • piece (int) – Which trajectory piece the frame corresponds to. • frame (int) – The frame of the specific trajectory piece. frame piece sim class htmd.simlist.Sim(simid, parent, input, trajectory, molfile, numframes=None) Bases: object Information class for a single simulation. Do not use directly. Objects of this class are constructed by the simlist() and simfilter() functions. This class is used for storing information on simulations. This information includes the path to the simulation, the path to a structure file (pdb) which corresponds to the simulation, the folder containing the input files used to generate the simulation (useful for adaptive), the parent of the simulation (if it was filtered it will point to the original simulation) and a unique simulation id. simid A unique simulation ID Type int parent The parent of the simulation Type Sim object input The path to the input folder which generated this simulation Type str trajectory A list of trajectory files Type list molfile The path to the structural information about the simulation. Usually a PDB file Type str numframes Number of frames in the trajectories Type list copy() htmd.simlist.simfilter(sims, outfolder, filtersel, njobs=None) Filters a list of simulations generated by simlist(). This function takes as input a list of simulations produced by simlist() and writes new trajectories containing only the desired atoms in a new directory.
Parameters • sims (list) – A simulation list produced by the simList function • outfolder (str) – The folder in which to write the modified trajectories • filtersel (str) – Atom selection string describing the atoms we want to keep. See more here • njobs (int) – Number of parallel jobs to spawn for filtering of trajectories. If None it will use the default from htmd.config. Returns fsims – A list of filtered simulations Return type np.ndarray of Sim objects Example >>> sims = simlist(glob('data/*/'), glob('input/*/structure.pdb')) >>> fsims = simfilter(sims, 'filtered', filtersel='not water') htmd.simlist.simlist(datafolders, topologies, inputfolders=None) Creates a list of simulations Parameters • datafolders (str list) – A list of directories, each containing a single trajectory • topologies (str list) – A list of topology files or folders containing a topology file corresponding to the trajectories in dataFolders. Can also be a single string to a single structure which corresponds to all trajectories. • inputfolders (optional, str list) – A list of directories, each containing the input files used to produce the trajectories in dataFolders Returns sims – A list of simulations Return type np.ndarray of Sim objects Examples >>> simlist(glob('./test/data/*/'), glob('./test/input/*/'), glob('./test/input/*/')) >>> simlist(glob('./test/data/*/'), glob('./test/input/*/*.pdb'), glob('./test/input/*/')) htmd.simlist.simmerge(simlist1, simlist2) Merges two simlists by updating their simid fields Parameters Returns newlist – A new list containing all simulations Return type np.ndarray of Sim objects
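The simmerge() behaviour documented above (merging two simlists by updating their simid fields so that IDs stay unique) can be illustrated with a toy sketch. This is not htmd's implementation; the Sim class and merge function below are invented stand-ins showing one plausible renumbering scheme:

```python
# Toy illustration only -- not htmd's code. Stand-in for htmd.simlist.Sim.
class Sim:
    def __init__(self, simid):
        self.simid = simid

def simmerge_toy(list1, list2):
    """Merge two lists of Sim objects, offsetting the second list's simids
    past the largest simid of the first so the merged IDs are unique."""
    offset = max((s.simid for s in list1), default=-1) + 1
    for s in list2:
        s.simid += offset
    return list1 + list2

merged = simmerge_toy([Sim(0), Sim(1)], [Sim(0), Sim(1), Sim(2)])
ids = [s.simid for s in merged]
```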
https://forum.allaboutcircuits.com/threads/phase-shift-for-single-section-rc-filters.14116/#post-86880
# Phase Shift for single section RC Filters

Can someone prove the rule of thumb for single section RC filters that the phase shift is ≈6° away from its asymptotic value at $$0.1 f_{3\mathrm{dB}}$$ and $$10 f_{3\mathrm{dB}}$$?
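A proof follows from the first-order transfer function, but the numbers are easy to check. Assuming the standard single-pole low-pass response $$H = 1/(1 + j\,f/f_{3\mathrm{dB}})$$, whose phase is $$-\arctan(f/f_{3\mathrm{dB}})$$, a short sketch:

```python
import math

def lowpass_phase_deg(f_over_f3db):
    """Phase (degrees) of H = 1 / (1 + j*f/f3dB), a single-section RC low-pass."""
    return -math.degrees(math.atan(f_over_f3db))

# Deviation from the asymptotes: 0 deg well below f3dB, -90 deg well above it.
dev_low = abs(lowpass_phase_deg(0.1) - 0.0)
dev_high = abs(lowpass_phase_deg(10.0) - (-90.0))
```

Both deviations come out as $$\arctan(0.1) \approx 5.71°$$, because $$\arctan(10) = 90° - \arctan(0.1)$$; hence the ≈6° rule of thumb holds symmetrically at both ends, and the same numbers apply to the single-section high-pass with the signs flipped.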
http://numerics.kaust.edu.sa/RK-opt/started.html
# Installation

This section describes how to obtain RK-opt and test that it is working correctly.

## Dependencies

• MATLAB 7.X or greater
• MATLAB optimization toolbox

## Obtaining RK-opt

• Or clone:

$ git clone https://github.com/ketch/RK-opt.git

You can test your RK-opt installation with xUnit.

## Installing xUnit

The MATLAB xUnit test framework can be downloaded for free at http://www.mathworks.com/matlabcentral/fileexchange/22846 (after following the link, click on the download button in the upper right corner). An easy way to install xUnit without setting any environment variables is to add the following line to your startup.m file:

addpath /path/to/matlab/xunit/matlab_xunit

## Running the tests

To run the tests, do the following in MATLAB:

>> cd /path/to/RK/Opt/general/test
>> runtests

If everything is set up correctly, this will run several tests and inform you that the tests passed. At present the tests are not very extensive.
https://zbmath.org/?q=an%3A0389.47005
## Asymptotic completeness for quantum mechanical potential scattering. I: Short range potentials.(English)Zbl 0389.47005 ### MSC: 47A40 Scattering theory of linear operators 35P25 Scattering theory for PDEs 81U20 $$S$$-matrix theory, etc. in quantum theory Full Text: ### References: [1] Agmon, S.: Ann. Sc. Norm. Sup. Pisa, Serie IV,2, 151–218 (1975) [2] Amrein, W. O., Georgescu, V.: Helv. Phys. Acta46, 635–658 (1973) [3] Kato, T.: Perturbation theory for linear operators. Berlin-Heidelberg-New York: Springer 1966 · Zbl 0148.12601 [4] Kuroda, S. T.: Nuovo Cimento12, 431–454 (1959) · Zbl 0084.44801 [5] Reed, M., Simon, B.: Methods of modern mathematical physics. In: Fourier analysis, self-adjointness, Vol. II. New York: Academic Press 1975 · Zbl 0308.47002 [6] Ruelle, D.: Nuovo Cimento61A, 655–662 (1969) [7] Simon, B.: Commun. math. Phys.55, 259–274 (1977) · Zbl 0413.47008 [8] Deift, P. J., Simon, B.: Commun. Pure Appl. Math.30, 573–583 (1977) · Zbl 0354.47004 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://imagej.github.io/Creating_Imglib2_images_in_MATLAB
# Creating Imglib2 images in MATLAB

This page collects experiments with creating ImgLib2 images from MATLAB, and then calling ImgLib2 algorithms from MATLAB. We aim first at showing how to build ImgLib2 types from MATLAB types, then at doing so efficiently. By this we mean sharing a single, possibly massive, low-level data block between ImgLib2 and MATLAB, which turns out not to be simply doable because of the MATLAB memory model. All snippets listed here are to be run from MATLAB. We rely on Miji to set up the class path, so you have to start every MATLAB session with the command Miji(false)

## Creating a new ImgLib2 image in MATLAB

In MATLAB, we are limited to native type images (float, uint8, uint16, …) represented as native arrays. The matching ImgLib2 container for this is the ArrayImg. Because the ImgLib2 authors wrote nice static utilities, our work is relatively easy. The class ArrayImgs has all the methods you need, one per native type.

>> load clown
>> img = net.imglib2.img.array.ArrayImgs.doubles(X(:), size(X));
>> net.imglib2.img.display.imagej.ImageJFunctions.show(img); % ImageJ display
>> imshow(X,[]) % MATLAB display

We note that the ImageJ display is rotated and flipped with respect to the MATLAB image. This is because MATLAB arrays are expected to be arranged along columns, whereas Java arrays are arranged along rows. We would need to permute dimension 0 and dimension 1 to display the data in ImageJ as expected in MATLAB. Note also that the raw data was cast from 64-bit double data to 32-bit float for display. But the source img has the expected type.

## MATLAB to ImgLib2 bridge functions

The exact method to call in ArrayImgs depends on the native type you want to use, so you would have to deal with all possible cases. But there are already MATLAB functions in Fiji that do that, in the scripts folder of your Fiji installation: copytoImg and copytoImgPlus.
You therefore need to add scripts to your MATLAB path, but this is most likely already done, since that folder also contains Miji, which you need to call anyway. The first function generates a plain Img. The second one generates an ImgPlus, which allows you to specify the spatial calibration, image name and axis types. Check the help of these functions for details.

## Example usage

Let's put ImgLib2 to work to filter a source image using anisotropic diffusion:

load clown
Miji(false);
img = copytoImg(X);
for i = 1 : 10 % do it 10 times, in place
    net.imglib2.algorithm.pde.PeronaMalikAnisotropicDiffusion.inFloatInPlace(img, 0.15, 10);
end
net.imglib2.img.display.imagej.ImageJFunctions.show(img);

## Retrieving the content of an ImgLib2 image in MATLAB

Now we want to get the result back in MATLAB. Since we are using ArrayImg, we can always access the underlying Java primitive array that the Img wraps, but we still have to keep in mind the X and Y dimension permutation. Also: the wrapped array is a 1D, very long array that can be looked upon as the row-by-row concatenation of the image content. We have to reshape it in MATLAB to give the image back its aspect:

% Retrieve a copy (see below) of the java primitive array
>> I = img.update([]).getCurrentStorageArray;
% Reshape it to match the initial aspect. Careful, we need to remember we permuted X & Y.
>> J = reshape(I, size(X')); % X' not X
% Display it with X & Y permuted
>> imshow(J', [])

This is all nice and worked as expected. But it worked because we were using doubles for this image. Let's try with a more memory-saving type. First, let's create a uint8 image from a MATLAB array of this type:

>> clear
>> Y = ind2gray(X, map);
>> Z = uint8(255*Y);
>> imshow(Z)
>> img = copytoImg(Z);
>> net.imglib2.img.display.imagej.ImageJFunctions.show(img);

This just builds an acceptable MATLAB uint8 image and an UnsignedByteType ImgLib2 image.
Let's suppose we modified this image, keeping its type, and want to retrieve the content in MATLAB. We proceed just like before:

>> I = img.update([]).getCurrentStorageArray;
>> J = reshape(I, size(X')); % X' not X
>> imshow(J', [])

What happened here? The gray levels are all messed up. Checking the class of the returned array gives a clue:

>> class(J)
ans =
int8

Aha! So we gave ImgLib2 a uint8 array, but it gives us back an int8 array, with all values wrapped. This is actually something that should have been expected: there is no unsigned byte type in Java, only a signed byte type. This is a language design choice we could discuss for hours, but in Java there just isn't uint8 or uint16[1]. The ImgLib2 developers managed to deal with it elegantly. Since the library can abstract over everything, having an image type which is not directly backed by an existing primitive type is not a problem. The uint8 is represented internally by something Java can handle, and ImgLib2 makes sure the unsigned byte arithmetic is respected whenever the image content is retrieved or displayed. But when we call the getCurrentStorageArray method, we retrieve this internal representation, and it just happens that it is of type int8, that is, signed byte. The values are a bit mixed up, since int8 ranges from -128 to 127, while uint8 ranges from 0 to 255. MATLAB has a built-in function to put it back right:

>> I = img.update([]).getCurrentStorageArray;
>> J = typecast(I, 'uint8');
>> K = reshape(J, size(X'));
>> imshow(K')

But of course, there is a MATLAB function that does all of this for you, and that you can also find in the scripts folder of your Fiji installation: copytoMatlab.

## MATLAB arrays are not shared

We expect the underlying raw data, an array of doubles, to be shared between MATLAB and ImgLib2. Unfortunately, it isn't so.
Let's try to turn the first column entirely white:

>> close all
>> X(:,1) = 255;
>> imshow(X,[])
>> net.imglib2.img.display.imagej.ImageJFunctions.show(img);

It did not work: the ImgLib2 image did not see the change. This means that it does not wrap the MATLAB array, but a copy of it. This is a shame, and it is of crucial importance. Not only might we have some very large data to process that we do not wish to duplicate in memory, but we might also want to take advantage of some ImgLib2 algorithms that run in place and modify the source image. This is by construction, and there is no workaround, at least for Java[2]. MATLAB passes all the data by value, not by reference, and this is what happened here.

## Using Java arrays in MATLAB

A first attempt to solve this would be to try and use Java arrays in MATLAB, which is made possible by the function javaArray. As explained in the MATLAB docs, you can use this function to instantiate proper Java arrays, which we could then use to create an ImgLib2 image, and play with the same data both on the MATLAB side and on the ImgLib2 side. But this would be unsatisfactory as well. The javaArray function allows the creation of Java objects, but not of primitive types. As suggested in the MATLAB docs, and noted here:

MATLAB can pass Java objects when calling a Java function, and modifications to these objects are afterwards available in MATLAB - except when the Java object is an array of a primitive data type. In this case automatic conversion between MATLAB and Java kicks in, making a Java array-of-primitive-double correspond directly to a double matrix in MATLAB - which is by MATLAB conventions a thing "passed as value" so no return values are possible.

So this means that we could create an array of java.lang.Double[] and use its reference, but we cannot have and manipulate a plain native double[] array without MATLAB shadowing any change, because it operates on a copy.
An array of java.lang.Double[] is not acceptable for most of our use cases. We expect to deal sometimes with very large images (the main reason for trying to avoid duplicating data in memory), and a Double object adds overhead on the primitive it wraps that we would like to avoid. As of now (MATLAB 2013a), this answer seems to be definitive: there is no workaround if we are to stay with a native array in MATLAB.

## Using ImgLib2 types as primary data holder

Therefore, a solution implies a change of approach. We will not use MATLAB matrices as the data holder, but use ImgLib2 structures. We can access the raw data through ImgLib2 facilities (cursor, randomAccess, …). The changes made are then done in place, and will be visible from both ImgLib2 and MATLAB, provided the data is accessed from the ImgLib2 container. We also already saw that ArrayImgs wrap a native array that we can copy to MATLAB should we need to quickly get the whole dataset. With this strategy, MATLAB steps aside a bit, since we use ImgLib2 for basically all data manipulation. It takes the role of a scripting language like Jython, from which you make plain calls to Java classes. Duplicating the native array wrapped in an ArrayImg still lets you make the best out of MATLAB easily, but you must design a good tactic in your script to avoid keeping these local copies around for too long.

[1] Check here for the details of the unsigned types story.
[2] For C++, you can write a Mex wrapper that will force MATLAB to operate on the reference of an array.
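The int8/uint8 wrap-around discussed above is not specific to MATLAB and Java; the same byte reinterpretation can be sketched with NumPy (values invented for the demo, numpy assumed available):

```python
import numpy as np

# Raw bytes holding "unsigned" pixel values, as ImgLib2's UnsignedByteType
# does internally on top of Java's signed bytes.
unsigned = np.array([0, 127, 128, 200, 255], dtype=np.uint8)

# Reinterpret the same bytes as signed, like reading getCurrentStorageArray:
# values >= 128 wrap around (e.g. 200 becomes 200 - 256 = -56).
as_signed = unsigned.view(np.int8)

# Reinterpret back: the moral equivalent of MATLAB's typecast(I, 'uint8').
restored = as_signed.view(np.uint8)
```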
https://www.crosswordsclue.com/___-sum/
# ___ sum

This time we are looking at the crossword puzzle clue for: ___ sum. It's a 7-letter crossword definition. Next time when searching the web for a clue, try using the search term "___ sum crossword" or "___ sum crossword clue" when searching for help with your puzzles. Below you will find the possible answers for ___ sum. We hope you found what you needed! If you are still unsure about some definitions, don't hesitate to search for them here with our crossword puzzle solver.

### Random information on the term "___ sum":

In category theory, the coproduct, or categorical sum, is a construction which includes as examples the disjoint union of sets and of topological spaces, the free product of groups, and the direct sum of modules and vector spaces. The coproduct of a family of objects is essentially the "least specific" object to which each object in the family admits a morphism. It is the category-theoretic dual notion to the categorical product, which means the definition is the same as that of the product but with all arrows reversed. Despite this seemingly innocuous change in the name and notation, coproducts can be and typically are dramatically different from products. Let $C$ be a category and let $X_1$ and $X_2$ be objects of $C$. An object is called the coproduct of $X_1$ and $X_2$, written $X_1 \coprod X_2$, or $X_1 \oplus X_2$, or sometimes simply $X_1 + X_2$, if there exist morphisms $i_1 : X_1 \to X_1 \coprod X_2$ and $i_2 : X_2 \to X_1 \coprod X_2$ satisfying the following universal property: for any object $Y$ and any morphisms $f_1 : X_1 \to Y$ and $f_2 : X_2 \to Y$, there exists a unique morphism $f : X_1 \coprod X_2 \to Y$ such that $f_1 = f \circ i_1$ and $f_2 = f \circ i_2$. That is, the following diagram commutes:

### Random information on the term "DIM":

Dim is the fourth studio album by Japanese rock band the Gazette. It was released on July 15, 2009, in Japan. It includes the three lead-up singles: "Guren", "Leech", and "Distress and Coma". The album scored number two on the Oricon Daily Charts and number five on the Oricon Weekly Charts, selling 37,797 copies in its first week. All lyrics are written by Ruki. DVD (limited edition only) The Limited Edition sold at Tower Records also came packaged with a Car Bumper Sticker, 5 Postcards and a Poster.
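In the category of sets, the coproduct described above is the disjoint union. A small Python sketch (the names i1, i2 and universal_map are ours) makes the universal property concrete:

```python
# Injections into the disjoint union: tag each element with its origin.
def i1(x):
    return ("left", x)

def i2(x):
    return ("right", x)

def coproduct(X1, X2):
    """Disjoint union of two sets: shared elements are kept apart by their tags."""
    return {i1(x) for x in X1} | {i2(x) for x in X2}

def universal_map(f1, f2):
    """The unique f with f(i1(x)) == f1(x) and f(i2(x)) == f2(x)."""
    def f(tagged):
        tag, x = tagged
        return f1(x) if tag == "left" else f2(x)
    return f

X1, X2 = {1, 2}, {2, 3}
U = coproduct(X1, X2)  # four elements: the shared 2 appears twice, once per tag
f = universal_map(lambda x: x + 10, lambda x: x * 100)
```

The tags are what make this a coproduct rather than an ordinary union: the element 2 of $X_1$ and the element 2 of $X_2$ remain distinct, and $f$ is forced on each half by $f_1$ and $f_2$.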
https://tex.stackexchange.com/questions/469706/remove-space-between-bracket-and-array-environment
# Remove space between bracket and array environment

How can I remove the space between the bracket and the array environment, and obtain the same result as with the cases environment? Thank you very much! I use the code $\left\{\begin{array}{l} x + y = 1\\ x - y = 2 \end{array}\right.$ $\begin{cases} x+y=1 \\ x-y=2\end{cases}.$

• Following the \left\{, add \mkern-8mu or (approximate) \!\!\!. It is one reason why amsmath is preferred to array. – Steven B. Segletes Jan 11 at 14:52
• or \begin{array}{@{}l}? – Troy Jan 11 at 14:54
• It works well, thank you! – An Chu Văn Jan 11 at 14:55

Two options: one with the dedicated aligned environment, and the other by playing with \mkern<space>. An example is as follows:

\documentclass[10pt,a4paper]{article}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\begin{document}
\left\{ \begin{aligned} x + y = 1\\ x - y = 2 \end{aligned}\right.
$\left\{\mkern-10mu\begin{array}{l} x + y = 1\\ x - y = 2 \end{array}\right.$
\end{document}

which would give you:

• @Circumscribe you are very fast in commenting, faster than my typing speed :D – Raaja Jan 11 at 15:28

Here are two ways:

\documentclass[10pt,a4paper]{article}
\usepackage{mathtools, empheq}
\newtagform{eqt}{(eqt\,}{)}
\begin{document}
$\left\{\begin{array}{@{}l} x + y = 1 \\ x - y = 2 \end{array}\right.$
\begin{empheq}[left=\empheqlbrace]{align*} x + y & = 1\\ x - y & = 2 \end{empheq}
$\begin{cases} x+y=1 \\ x-y=2\end{cases}.$
\end{document}
http://faculty.etsu.edu/gardnerr/planetarium/relat/general.htm
# Relativity and Black Holes

General Relativity

Einstein was bothered by what he saw as a dichotomy in the concept of "mass." On one hand, by Newton's second law (F=ma), "mass" is treated as a measure of an object's resistance to changes in movement. This is called inertial mass. On the other hand, by Newton's Law of Universal Gravitation, an object's mass measures its response to gravitational attraction. This is called gravitational mass. As we will see, Einstein resolves this dichotomy by putting gravity and acceleration on an equal footing. Consider an observer in a windowless elevator which is accelerating upward. Suppose a ray of light enters near the top of the elevator on one side. Then while the light travels from one side of the elevator to the other, the elevator will accelerate upward and therefore the light will hit the opposite wall at a point lower than the point at which it entered. Next, suppose the elevator is stationary, but in a gravitational field. Then if we put gravity and acceleration on an equal footing, we would expect the same outcome of our experiment involving light rays. That is, if a ray of light enters near the top of the stationary elevator on one side, we expect the light to hit the opposite wall at a point lower than the point at which it entered the elevator. In a sense, the light "falls" under the influence of gravity. Notice, then, that an occupant of the elevator could not tell if he were in a rapidly accelerating elevator, or if he were in a stationary elevator in a gravitational field. Einstein summarizes this in his Principle of Equivalence: there is no way to distinguish between the effects of acceleration and the effects of gravity - they are equivalent! In our technical exploration of general relativity, we will adopt Einstein's summation notation, in which an index of summation appears in each term both as a subscript and as a superscript and the sigma summation symbol is omitted. Therefore, the sum $\sum_{i} a_i b^i$ becomes just $a_i b^i$.
The double sum $\sum_{i}\sum_{j} g_{ij}\,x^i x^j$ becomes simply $g_{ij}\,x^i x^j$. The range of the index of summation will be clear from the context (usually 0 to 3). In our standard 3-dimensional Euclidean space, we measure the distance between two points as $(\Delta s)^2 = (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$. If instead of using x, y, z to represent coordinates, we use $x^1, x^2, x^3$, then with Einstein's summation notation, distance is calculated as $(\Delta s)^2 = \delta_{ij}\,\Delta x^i\,\Delta x^j$, where $\delta_{ij}$ is the Kronecker delta, which takes on the value 1 if i and j are equal and the value 0 if i and j are different. Instead of simply measuring the distance between points in spacetime, we will be interested in differentials of the interval and the coordinates of spacetime. In a "curved" spacetime, we will measure the quantity $ds^2 = g_{ij}\,dx^i\,dx^j$. The curvature of the space will be determined by the $g_{ij}$ (which are called metric coefficients), and the $g_{ij}$ will be functions of space, time, and mass distribution. As we have seen, in the absence of matter and with $(t,x,y,z) = (x^0,x^1,x^2,x^3)$, the metric coefficients reduce to the matrix of the Lorentz metric (a diagonal matrix of $\pm 1$ entries in which the time coordinate carries the opposite sign from the three spatial coordinates). Now let's introduce some of the notation required to understand Einstein's field equations. First, we define the entries of the inverse of the matrix of metric coefficients as $g^{ij}$. The $g_{ij}$ and $g^{ij}$ and their partial derivatives are used to define the Christoffel symbols $\Gamma^{\lambda}_{\mu\nu}$. Notice that these are functions of the $g_{ij}$'s and partial derivatives of the $g_{ij}$'s: $\Gamma^{\lambda}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\sigma}\left(\frac{\partial g_{\sigma\mu}}{\partial x^{\nu}} + \frac{\partial g_{\sigma\nu}}{\partial x^{\mu}} - \frac{\partial g_{\mu\nu}}{\partial x^{\sigma}}\right).$ The Ricci tensor $R_{\mu\nu}$ is defined, in turn, in terms of the Christoffel symbols and their partial derivatives. Einstein chose as his field equations of general relativity the following: he set the Ricci tensor equal to zero. Since $\mu$ and $\nu$ range from 0 to 3, there are 16 different equations here. We can simplify this notationally as $R_{\mu\nu} = 0$. So we see that Einstein's field equations are a system of 16 second order partial differential equations in the 16 unknown functions $g_{ij}$. Here we see Einstein posing with his field equation.
The $g_{ij}$ determine the metric form of spacetime and therefore all *intrinsic* properties of the 4-dimensional spacetime in which we live. In particular, curvature is determined by the Riemann-Christoffel curvature tensor:
$$R^{\gamma}_{\ \mu\nu\lambda} = \frac{\partial \Gamma^{\gamma}_{\mu\lambda}}{\partial x^{\nu}} - \frac{\partial \Gamma^{\gamma}_{\mu\nu}}{\partial x^{\lambda}} + \Gamma^{\sigma}_{\mu\lambda}\, \Gamma^{\gamma}_{\sigma\nu} - \Gamma^{\sigma}_{\mu\nu}\, \Gamma^{\gamma}_{\sigma\lambda}.$$
Notice that $\mu$, $\nu$, $\lambda$, and $\gamma$ all range from 0 to 3, and therefore the curvature tensor consists of 256 entries, only 20 of which are independent.
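The definitions above can be turned into a short symbolic computation. The sketch below (not part of the original notes; it simply transcribes the formulas for $\Gamma^k_{ij}$ and $R_{\mu\nu}$ into sympy) verifies that the flat Lorentz metric satisfies the vacuum field equations $R_{\mu\nu} = 0$:

```python
# Compute Christoffel symbols and the Ricci tensor directly from a metric
# with sympy, then check that flat Minkowski spacetime solves R_mu_nu = 0.
import sympy as sp

t, x, y, z = sp.symbols("t x y z")
coords = [t, x, y, z]
n = len(coords)

# Lorentz metric diag(1, -1, -1, -1) and its inverse.
g = sp.diag(1, -1, -1, -1)
g_inv = g.inv()

# Gamma^k_ij = (1/2) g^{kl} (d_i g_jl + d_j g_il - d_l g_ij)
Gamma = [[[sp.Integer(0)] * n for _ in range(n)] for _ in range(n)]
for k in range(n):
    for i in range(n):
        for j in range(n):
            Gamma[k][i][j] = sp.simplify(sum(
                g_inv[k, l] * (sp.diff(g[j, l], coords[i])
                               + sp.diff(g[i, l], coords[j])
                               - sp.diff(g[i, j], coords[l]))
                for l in range(n)) / 2)

# R_mn = d_l Gamma^l_mn - d_n Gamma^l_ml
#        + Gamma^l_ls Gamma^s_mn - Gamma^s_ml Gamma^l_ns
def ricci(m, nu):
    return sp.simplify(
        sum(sp.diff(Gamma[l][m][nu], coords[l]) for l in range(n))
        - sum(sp.diff(Gamma[l][m][l], coords[nu]) for l in range(n))
        + sum(Gamma[l][l][s] * Gamma[s][m][nu] for l in range(n) for s in range(n))
        - sum(Gamma[s][m][l] * Gamma[l][nu][s] for l in range(n) for s in range(n)))

R = sp.Matrix(n, n, lambda m, nu: ricci(m, nu))
print(R)  # the zero matrix: flat spacetime satisfies the vacuum field equations
```

Swapping in a non-flat metric (a function of the coordinates) makes the same loops produce nonzero Christoffel symbols and a nontrivial Ricci tensor.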
https://engineering.stackexchange.com/questions/35741/solving-for-stiffness-matrix-numerically-by-a-set-of-measurements
# Solving for stiffness matrix numerically by a set of measurements

Is it in general possible, when you have a mechanical structure (in this case I am referring to the structure in this post) and you want to determine its global stiffness matrix, to solve for the matrix based on measurements?

More specifically, I have a structure where parts are connected by springs. I apply a known force to one of the parts and measure the displacement. In this way I get $$n$$ force/moment - displacement/rotation pairs. Is it possible to solve for a matrix $$K$$, with $$F = KX$$, where $$F =\begin{bmatrix}F_x & F_y & F_z & M_x &M_y&M_z\end{bmatrix}$$ and $$X= \begin{bmatrix}dx & dy & dz & \theta_x &\theta_y&\theta_z\end{bmatrix}$$?

Is this feasible from a mechanical point of view? For example, might this "stiffness matrix" $$K$$ not be unique, depending on the loads applied? Are there any constraints or requirements regarding the measurements?

P.S. Applied forces are rather small (within the elastic range of the springs) and the displacement is restricted, so no large deflections should occur. This is why I assumed a linear approximation.

## 1 Answer

This is feasible and can be used to modify a theoretical stiffness matrix calculated by the Finite Element method to match experimental results more accurately. The FE model can then be used to calculate things which would be impractical to measure directly.

The simple approach you suggest is possible but not necessarily the best practical method. It may appear paradoxical, but it is often more practical to obtain both the stiffness and mass matrix by measuring the dynamic response of the structure.
The reason is that if you excite the structure by an impact at one point and measure the time history of the response at other points (which is straightforward to do using an accelerometer instead of direct measurements of displacement, and a computer to record the real-time response at a sufficiently high sampling rate), you can obtain information about the motion in many different modes of vibration from "one measurement" of the dynamic response. This can be done effectively on a "lab bench" scale using simple hand-held equipment, but with more sophisticated measuring devices, for example a scanning laser doppler vibrometer, it is possible to measure the response at hundreds of points on a structure "simultaneously" without any physical contact, including measurements under real operating conditions (e.g. at high temperatures, or the behaviour of rotating machinery while it is operating).

Most of this is not covered in a typical first engineering degree. Google for phrases like "experimental modal analysis", "model updating", or "system identification" for more information, both practical and theoretical.

• Thanks for the detailed and fast response! What I actually wanted to do is to simulate the structures in Ansys, apply a force and measure the displacement to obtain a dataset. I have to admit I couldn't analytically solve the structure until now. Would it be enough to use experimental/simulated data to obtain the final stiffness matrix? – rfn123 May 13 '20 at 19:22
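As a complement to the dynamic-response approach in the answer, here is a minimal sketch (not from the answer) of the direct static approach from the question: synthesize force/displacement pairs and recover $$K$$ by least squares. The 6x6 ground-truth matrix is made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 6x6 stiffness matrix (symmetric positive definite) used to
# synthesize "measurements"; in practice F would come from applied loads
# and X from measured displacements/rotations.
A = rng.normal(size=(6, 6))
K_true = A @ A.T + 6 * np.eye(6)

# n >= 6 load cases: columns of X are displacement vectors, columns of F
# the corresponding force/moment vectors, F = K X.
n_cases = 12
X = rng.normal(size=(6, n_cases))
F = K_true @ X

# Solve F = K X for K in the least-squares sense: K = F X^+ (pseudoinverse).
# This needs at least 6 linearly independent displacement vectors; with
# fewer, K is not unique, which is exactly the question's concern.
K_est = F @ np.linalg.pinv(X)
print(np.allclose(K_est, K_true))  # True when X has full row rank
```

With noisy measurements the same formula gives the least-squares fit, and more load cases than unknowns helps average the noise out.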
http://www.physicsforums.com/showthread.php?p=3774905
## CIA Weapon/Science Analyst Position

Does anyone know anything more about this position other than what is posted on the CIA website? https://www.cia.gov/careers/opportun...s-analyst.html

Seriously - if anybody here does know more about it, would they risk their own future career by posting what they knew on the internet?

Just speculating, but since they are an intelligence agency, I have a feeling it has a lot to do with analyzing other countries' weapon technology progression (nuclear, biological, chemical, etc.) to determine the threat level they pose. I mean, who really knows what they do there, but I really doubt they're spending their funding on weapons development when there are other government departments/private contractors for that purpose.

I hope you are aware of the process one must take in order to become a federal employee at an organization like the CIA. What exactly do you want to know? Weapons/technology analysts in the intelligence community pretty much try to figure out the state of other countries' technology, and the implications for the US.
Three things to be aware of are that (1) everyone and their grandmother applies to the CIA online, so it's to your benefit to get human contact, at a job fair or preferably through a more personal connection; (2) the clearance process for CIA (and NSA) takes a ridiculously long time, so it's often not a great idea to look at these places straight out of school (unless you're comfortable with being unemployed for months to years while you wait for a final answer that could be "no"); and (3) intelligence work in general is nowhere near as cool as you probably think it is, though it can certainly be interesting depending on your inclinations.

Quote by NucEngMajor: "Does anyone know anything more about this position other than what is posted on the CIA website?"

That set of jobs (it's not just one; look at the degrees) has TS/compartmentalized written all over it. If you don't already have a clearance at all, you will be waiting for a long, long time before the offer comes through. Even if you have a secret clearance it will take a long time. These are the kinds of jobs you apply for when you already have a job. When you answer the phone at your work now, you use typical business phone courtesy: "Hello! This is John Doe, of the Acme Corporation, Coyote Supply Division." If you get this job, you will answer your phone with "Hello." The mindset is quite different. You can't tell your friends what you do, you can't tell your family, and you definitely can't tell that cutie at the bar. When you leave work, you leave work. This is a big plus, BTW. Your current job of supplying the coyote with diabolical roadrunner capture devices requires you to take your work home with you, to think about it over dinner, to think about it in the shower, even to think about it while you talk up that cutie at the bar. Leaving work behind when you leave work is a blue collar luxury, but with this job you are not saddled with a blue collar salary. You will be writing reports, lots and lots of reports. They make that pretty clear in the job statement. You had better enjoy technical writing. You will be seeing stuff that sometimes is very cool, sometimes very scary. But for the most part it will be rather boring.

Quote by D H: "If you don't already have a clearance at all you will be waiting for a long, long time before the offer comes through."

So I take it the hiring is mostly internal to the CIA? If not, how does one typically get a security clearance? Prior military/defense contractor service/employment or prior national lab employment?

Quote by cdotter: "So I take it the hiring is mostly internal to the CIA?"

No.

Quote by cdotter: "If not, how does one typically get a security clearance? Prior military/defense contractor service/employment or prior national lab employment?"

Yes. There's supposed to be reciprocity between the various government entities regarding clearances; e.g., if you have a Department of Energy Q clearance, you should be able to easily (after some paperwork) be granted a Department of Defense Top Secret clearance, modulo some requirements that can pop up when getting deeper into the intel world (e.g., polygraph requirements, proof of requirement of that level of clearance, etc.). But CIA and NSA are notorious for reinvestigating even people with a recently-completed investigation, and for having their own interpretations of the adjudication guidelines. Their processes are long and opaque, which is why it's generally a good idea to have another job to keep you fed while trying to apply to them. I've known a couple of people who had conditional offers from NSA upon graduating, but withdrew their applications and took other jobs because the process was drawn out so long.

Quote by D H: "The mindset is quite different. You can't tell your friends what you do, you can't tell your family, and you definitely can't tell that cutie at the bar."

"I'm a greeting card salesman".

The lots and lots of reports is very clear. It is also likely you will be reading lots and lots of reports, because an obvious and practically universal question is "how have things changed between then and now".
http://www.physicsforums.com/showthread.php?t=621049
# How to calculate the transition rate

by Nastenka. Tags: kets, quantum field theory, quantum mechanics

**1. The problem statement, all variables and given/known data**

I have the usual equation for the transition amplitude, $A = \langle k | e^{-iHt} | j \rangle$, while my Hamiltonian in Dirac notation looks like $H = \sum_a E_a |a\rangle\langle 0|$. In order to find the transition rate I should take the derivative $dA/dt$, so that I will get something like $dA/dt = \langle k | (-iH)\, e^{-iHt} | j \rangle$.

**2. Relevant questions**

Now, my question is: how do I treat it further?

**3. The attempt at a solution**

I know I should expand the exponential in a series, but then in the middle I obtain something like $-iH\,(1 - iHt + \cdots) = -iH - H^2 t + \cdots$, which confuses me. I feel I am stuck on such an easy task. But the problem is that I cannot expand it in eigenvalues and eigenvectors, which would simplify my task a lot.
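One way to see what the derivative buys you is to work numerically in the eigenbasis. The sketch below is not the poster's problem: it uses a made-up 3-level Hermitian Hamiltonian, diagonalizes it, and checks $dA/dt = \langle k|(-iH)\,e^{-iHt}|j\rangle$ against a finite difference:

```python
# Numerical check of dA/dt = <k|(-iH) e^{-iHt}|j> via eigendecomposition.
# The 3-level Hamiltonian below is invented purely for illustration (hbar = 1).
import numpy as np

H = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 0.3, 3.0]])  # Hermitian toy Hamiltonian

k = np.array([1.0, 0.0, 0.0])   # <k|
j = np.array([0.0, 0.0, 1.0])   # |j>

E, V = np.linalg.eigh(H)        # H = V diag(E) V^dagger

def amplitude(t):
    # exp(-iHt) = V diag(exp(-i E t)) V^dagger
    U = (V * np.exp(-1j * E * t)) @ V.conj().T
    return k.conj() @ U @ j

t = 0.7
U = (V * np.exp(-1j * E * t)) @ V.conj().T
analytic = k.conj() @ (-1j * H) @ U @ j              # <k|(-iH) e^{-iHt}|j>
h = 1e-6
numeric = (amplitude(t + h) - amplitude(t - h)) / (2 * h)  # central difference
print(abs(analytic - numeric) < 1e-6)  # True
```

The diagonalization is what the poster says they cannot do symbolically; numerically it is always available for a finite-dimensional Hermitian H, and it reduces the matrix exponential to scalar phases $e^{-iE_a t}$.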
https://crypto.stackexchange.com/questions/95666/how-to-find-out-what-the-order-of-the-base-point-of-the-elliptic-curve-is
# How to find out what the order of the base point of the elliptic curve is?

I wanted to use the https://github.com/AntonKueltz/fastecdsa library, and the function parameters for creating a curve are:

    p,  # (long): The value of p in the curve equation.
    a,  # (long): The value of a in the curve equation.
    b,  # (long): The value of b in the curve equation.
    q,  # (long): The order of the base point of the curve.
    gx, # (long): The x coordinate of the base point of the curve.
    gy, # (long): The y coordinate of the base point of the curve.

The curve I need is this one: https://docs.starkware.co/starkex-docs-v2-deprecated/crypto/stark-curve

It gives info about $$p, a, b, gx, gy$$, but not the $$q$$. How can I infer the $$q$$ parameter?

Practically, one can use SageMath to find it:

    a = 1
    b = 3141592653589793238462643383279502884197169399375105820974944592307816406665
    p = 2^251 + 17*2^192 + 1
    E = EllipticCurve(GF(p), [0, 0, 0, a, b])
    print(E)
    print(E.abelian_group())
    card = E.cardinality()
    print("cardinality =", card)
    factor(card)
    G = E(874739451078007766457464989774322083649278607533249481151382481072868806602, 152666792071518830868575557812948353041420400780739481342941381225525861407)
    print("Generator order q =", G.order())

This outputs:

    Elliptic Curve defined by y^2 = x^3 + x + 3141592653589793238462643383279502884197169399375105820974944592307816406665 over Finite Field of size 3618502788666131213697322783095070105623107215331596699973092056135872020481
    Additive abelian group isomorphic to Z/3618502788666131213697322783095070105526743751716087489154079457884512865583 embedded in Abelian group of points on Elliptic Curve defined by y^2 = x^3 + x + 3141592653589793238462643383279502884197169399375105820974944592307816406665 over Finite Field of size 3618502788666131213697322783095070105623107215331596699973092056135872020481
    cardinality = 3618502788666131213697322783095070105526743751716087489154079457884512865583
    Generator order q = 3618502788666131213697322783095070105526743751716087489154079457884512865583

Since the order of the curve group is prime, every element other than the identity is a generator; therefore the order of the base point is equal to the order of the curve group. Also, the cofactor $$h$$ is 1, since the curve order is prime. The cofactor is defined as the number of $$k$$-rational points of the curve divided by the order of the base element $$n$$: $$h = \#E(k)/n$$.

I couldn't find any information about the magic number (nothing-up-my-sleeve number); the reason for the choice of $$G$$ is not clear. Although the reassurance is largely psychological, one should provide it.

SageMath uses sea.gp, which is a fast implementation of the SEA algorithm. This library is implemented in PARI/GP. A good slide deck about sea.gp is "The SEA algorithm in PARI/GP".
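If you want to double-check the SageMath result without Sage, the order can be verified from first principles: since the group order is $$q$$, Lagrange's theorem forces $$qG$$ to be the identity, and because $$q$$ is prime and $$G$$ is not the identity, the order of $$G$$ must be exactly $$q$$. The sketch below uses textbook affine point arithmetic (not the fastecdsa API) to confirm this:

```python
# Verify that q*G is the point at infinity on the STARK curve
# y^2 = x^3 + a x + b over GF(p), using plain affine arithmetic.
p = 2**251 + 17 * 2**192 + 1
a = 1
b = 3141592653589793238462643383279502884197169399375105820974944592307816406665
q = 3618502788666131213697322783095070105526743751716087489154079457884512865583
G = (874739451078007766457464989774322083649278607533249481151382481072868806602,
     152666792071518830868575557812948353041420400780739481342941381225525861407)

# Sanity check: G satisfies the curve equation.
assert (G[1] ** 2 - (G[0] ** 3 + a * G[0] + b)) % p == 0

def add(P, Q):
    # Affine addition; None represents the identity (point at infinity).
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

print(mul(q, G) is None)  # True: the base point has order q
```

(`pow(x, -1, p)` for modular inverses requires Python 3.8+.) This is far slower than Sage's SEA-based machinery for *finding* the order, but it is a fully independent check of the value.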
https://resinsight.org/3d-main-window/surfaces/
# Surfaces

ResInsight is capable of mapping properties and simulation results onto 3D surfaces imported from GOCAD and Petrel, as specified below.

## Import and appearance of surfaces

3D surfaces from GOCAD and Petrel are imported into ResInsight by the menu option File->Import->Import Surfaces in the 3D Main Window. Imported surfaces are listed under the Surfaces entry in the Project Tree. You can click on a surface to change e.g. the name or color of the surface, or change the file to read the surface data from. You can also change the depth of the surface by adding a depth offset (positive or negative). This could be used e.g. to import the same surface file multiple times and assign each instance of the surface a separate depth offset. When a depth offset is set, the offset value will be shown as part of the surface name in the project tree.

## Using folders

To make it easier to organize the surfaces you import into your project, ResInsight supports creating surface folders in the project tree. To create a new folder, right-click on the top level Surfaces folder in the project tree and choose Add Folder.

## Surface Results

By default, ResInsight maps the current Cell Result onto a given surface. If another result is to be displayed, specify the result under Surface Results as shown below. Clicking a surface of a View in the Project Tree activates the Property Editor, allowing view settings as shown below. To change the result displayed on the surface, specify the desired result in the Result Reference section of the Property Editor.

If you have modified a surface file using an external program, you can easily load the changes into ResInsight by using the reload surface feature. Bring up the right-click menu for the surface you want to reload and choose Reload. The views will automatically update with the new data (this could take a few seconds).

## Create a copy

You can easily create a copy of an existing surface by choosing Create Copy in the surface right-click menu.
A new, identical surface will show up at the same level in the project tree. You can now give it a new name, change the depth offset, etc.

## Grid Case Surfaces

In addition to importing surfaces from file, ResInsight can also generate grid case surfaces. You do that by bringing up the right-click menu for the Surfaces entry in the project tree and choosing Create Grid Case Surfaces. A grid case surface has the same properties as a surface imported from file, but instead of choosing which file the data should come from, you choose a source case, a slice direction, and a slice index. The slice direction and index will be added to the surface name shown in the project tree.

## Exporting Surfaces

ResInsight can export surfaces to the GOCAD TSurf file format. Grid case surfaces can additionally be exported to the Petrel Surface PTL format. Use the right-click menu for the surface you want to export and choose the export format you want to use. A file selection window will show up, allowing you to choose where you want to save the exported data.

## Supported Surface Formats

### GOCAD TSurf files

GOCAD is a computer application that allows you to import, create, modify, integrate, view, and export geological objects in 3D. The GOCAD export file format supported by ResInsight is TSURF (*.ts). A TSURF data file is a triangle-based surface format containing vertex coordinates and triangle-to-vertex connectivities, as exemplified below. ResInsight imports vertex and triangle identifiers from the first TFACE section in such a file.

    GOCAD TSurf 1
    HEADER {
    name:MF_027_SU
    }
    GOCAD_ORIGINAL_COORDINATE_SYSTEM
    NAME Default
    AXIS_NAME "X" "Y" "Z"
    AXIS_UNIT "m" "m" "m"
    ZPOSITIVE Depth
    END_ORIGINAL_COORDINATE_SYSTEM
    TFACE
    VRTX 1 458177.767090 7322538.712891 1643.655884 CNXYZ
    VRTX 2 458260.834961 7322392.890625 1596.685303 CNXYZ
    VRTX 3 457985.662109 7322783.783203 1542.060059 CNXYZ
    VRTX 4 459601.453125 7322511.427734 3639.000000 CNXYZ
    VRTX 5 459422.015625 7322689.230469 3639.000000 CNXYZ
    VRTX 6 459793.410156 7322338.230469 3639.000000 CNXYZ
    ...
    TRGL 2 61 98
    TRGL 20 153 66
    TRGL 152 19 65
    END

### Petrel Surface PTL files

ResInsight is capable of importing a surface defined by a PTL (*.ptl) file from Schlumberger Petrel. A PTL data file specifies the quads of a surface by x, y, z nodal coordinates and the i and j indices, as exemplified below. As seen, # denotes comment lines.

    #Type: scattered data
    #Version: 6
    #Description: No description
    #Format: free
    #Field: 1 x
    #Field: 2 y
    #Field: 3 z meters
    #Field: 4 column
    #Field: 5 row
    #Projection: Local Rectangular
    #Units: meters
    #End:
    #Information from grid
    #Grid_size: Not_avaiable
    #Grid_space: Not_available
    #Z_field: z
    #Vertical_faults: Not_available
    #History: No history
    #Z_units: meters
    443479.500000 7305390.500000 -1000.000000 1 1
    443479.500000 7305488.500000 -1000.000000 1 2
    443479.500000 7305586.500000 -1000.000000 1 3
    443479.500000 7305684.500000 -1000.000000 1 4
    443479.500000 7305782.500000 -1000.000000 1 5
    443479.500000 7305880.500000 -1000.000000 1 6
    ...

### OpenWorks XYZ Surface files

ResInsight is capable of importing a surface defined by an XYZ (*.dat) file from OpenWorks. An XYZ data file specifies the quads of a surface by x, y, z nodal coordinates organized in a regular grid. As seen, # and @ denote comment lines.

    @File_Version: 4
    @Coordinate_Type_is: 1
    @Export_Type_is: 1
    @Number_of_Projects 1
    @Project_Type_Name: , 3,xxx,
    @Project_Unit_is: meters , xxx
    #File_Version____________-> 4
    #Project_Name____________-> xxx
    #Project_Type____________-> 3
    #Export_XY_Unit__________-> meters
    #OpenWorks_Project_______-> 'xxx'
    #Master_Project_______->
    #Coordinate_type_________-> 1
    #Number_of_points_in_hzd_-> 1
    #Horizon_internal_id_____-> xxx
    #Horizon_extremes_are____-> xxx,xxx
    #Horizon_onset_is_Minimum_____-> 1
    #Horizon_type_is_DEPTH_STRUCTURE______-> 2
    #Horizon_color_is________-> 255 0 0
    #Horizon_name____________-> xxx
    #Horizon_attribute_______-> DEPTH_STRUCTURE
    #Horizon_version_________-> UNKNOWN
    #Horizon_interp_status___-> defaultStat
    #Horizon_class___________-> defaultClass
    #Export_Z_Unit___________-> meters
    #Horizon_onset_type______-> Minimum
    #Horizon_data_domain_____-> DEPTH
    #Horizon_remark_size_____-> 50
    Horizon from Grid on Fri Aug 14 13:42:10 CEST 2020
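As an illustration of how simple the TSurf triangle format is to consume, here is a minimal parser sketch (my own, not part of ResInsight) that collects VRTX and TRGL records from the first surface in a file:

```python
# Parse the VRTX/TRGL records of a GOCAD TSurf file into vertex and
# triangle lists, stopping at the first END (first surface only), roughly
# as a TSurf reader would.
def parse_tsurf(text):
    vertices = {}   # vertex id -> (x, y, z)
    triangles = []  # (id1, id2, id3) connectivity
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] in ("VRTX", "PVRTX"):
            vid = int(parts[1])
            vertices[vid] = tuple(float(v) for v in parts[2:5])
        elif parts[0] == "TRGL":
            triangles.append(tuple(int(i) for i in parts[1:4]))
        elif parts[0] == "END":
            break
    return vertices, triangles

sample = """GOCAD TSurf 1
TFACE
VRTX 1 458177.767090 7322538.712891 1643.655884 CNXYZ
VRTX 2 458260.834961 7322392.890625 1596.685303 CNXYZ
VRTX 3 457985.662109 7322783.783203 1542.060059 CNXYZ
TRGL 1 2 3
END"""

verts, tris = parse_tsurf(sample)
print(len(verts), tris)  # 3 [(1, 2, 3)]
```

A real importer would also honor the HEADER block and ZPOSITIVE convention, but the geometry itself is just these two record types.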
https://projecteuclid.org/euclid.aaa/1220969162
## Abstract and Applied Analysis

### Approximation of Generalized Left Derivations

#### Abstract

We need to take account of the superstability for generalized left derivations (resp., generalized derivations) associated with a Jensen-type functional equation, and we also deal with problems for the Jacobson radical ranges of left derivations (resp., derivations).

#### Article information

Source: Abstr. Appl. Anal., Volume 2008 (2008), Article ID 915292, 8 pages.

Dates: First available in Project Euclid: 9 September 2008

Permanent link: https://projecteuclid.org/euclid.aaa/1220969162

Digital Object Identifier: doi:10.1155/2008/915292

Mathematical Reviews number (MathSciNet): MR2407275

Zentralblatt MATH identifier: 1160.47031

#### Citation

Kang, Sheon-Young; Chang, Ick-Soon. Approximation of Generalized Left Derivations. Abstr. Appl. Anal. 2008 (2008), Article ID 915292, 8 pages. doi:10.1155/2008/915292. https://projecteuclid.org/euclid.aaa/1220969162
https://www.techwhiff.com/learn/a-boy-fires-the-02-lb-marble-at-the-wall-striking/349279
# A boy fires the 0.2-lb marble at the wall, striking it at B. h = 5 ft, θ₀ = 45°, and d = 75 ft

###### Question:

A boy fires the 0.2-lb marble at the wall, striking it at B, where h = 5 ft, θ₀ = 45°, and d = 75 ft.

a) Plot the marble trajectory before and after it strikes the wall for the case where vA = 50 ft/s and the coefficient of restitution is e = 0.5. NOTE: This requires solving two projectile motions, one before impact and one after impact.

b) With vA = 50 ft/s, plot the trajectories after the impact for at least five different coefficients of restitution.

c) How do the changes in the coefficient of restitution, e, affect the plots?
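Part (a) of the marble problem above can be set up numerically. The following is a minimal Python sketch under stated assumptions: standard projectile motion with g = 32.2 ft/s², and an impact model in which the coefficient of restitution e reverses and scales only the velocity component normal to the vertical wall (the horizontal one), leaving the vertical component unchanged. The function name `marble_states` is made up for this illustration, not taken from any textbook solution.

```python
import math

G = 32.2  # ft/s^2, gravitational acceleration (assumed US customary units)

def marble_states(vA=50.0, theta_deg=45.0, d=75.0, h0=5.0, e=0.5):
    """Return (impact height, velocity before impact, velocity after impact)
    for a marble launched from height h0 toward a vertical wall at distance d.
    Impact model (an assumption of this sketch): e reverses and scales only
    the horizontal component, which is normal to the wall."""
    theta = math.radians(theta_deg)
    vx, vy0 = vA * math.cos(theta), vA * math.sin(theta)
    t_hit = d / vx                          # time to reach the wall
    y_hit = h0 + vy0 * t_hit - 0.5 * G * t_hit**2
    vy_hit = vy0 - G * t_hit                # vertical velocity at impact
    v_before = (vx, vy_hit)
    v_after = (-e * vx, vy_hit)             # normal component reversed, scaled by e
    return y_hit, v_before, v_after
```

The post-impact arc is then a second projectile motion starting at (d, y_hit) with velocity `v_after`; sweeping e over several values and plotting both arcs reproduces parts (a) and (b).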
https://mathematica.stackexchange.com/questions/138478/extraction-of-specific-image-pixels-as-bytes
# Extraction of specific image pixels as bytes

I have a large set (>10000) of uncompressed 8-bit images. Their size is 1600*1200 pixels. The first 24 pixels in the uppermost line, starting from the left, contain some important information (time stamp etc.) about the image. I want to read out all byte values of these 24 pixels. Finally, I want to save all byte data as right-aligned numbers into a text file, a single line with 24 numbers for each image. I CANNOT READ ALL IMAGES INTO MEMORY AT ONCE BECAUSE OF MEMORY PROBLEMS. For testing I have uploaded a set of 1000 cropped images, each having a dimension of 50*50 pixels. They can be downloaded from here: https://drive.google.com/open?id=0B9wKP6yNcpyfUUlsQU1iVWJ0b2M How can I improve (speed up) my code?

ChoiceDialog[{FileNameSetter[Dynamic[imageDir], "Directory"], Dynamic[imageDir]}];
SetDirectory[imageDir];
fNames = FileNames["*.png"];
n = Length[fNames];
timeStampValues = Array[0 &, {n, 24}];
SetSharedVariable[timeStampValues];
ParallelDo[
  byteValues = Import[fNames[[i]], "Data"];
  timeStampValues[[i, All]] = Flatten@byteValues[[1, 1 ;; 24]];
  , {i, 1, n}
]; // AbsoluteTiming

{17.7253, Null}

• Start by replacing Import with ImageData@First@Image`ImportExportDump`ImageReadPNG[fNames[[i]]]. This is the low-level function called by Import on PNG files; calling it directly bypasses a ton of behind-the-scenes checking. On a 350-picture subset, the ParallelDo[Import...] alone was taking 70 s on my laptop; the ImageReadPNG version took 4 s.

• The Flatten call within the ParallelDo loop also doesn't seem necessary, since byteValues[[1, 1 ;; 24]] is already a flat vector. That probably won't affect timing quite as much, though.

In short, I propose to use:

ParallelDo[
  byteValues = ImageData@First@Image`ImportExportDump`ImageReadPNG[fNames[[i]]];
  timeStampValues[[i]] = byteValues[[1, 1 ;; 24]], {i, 1, n}]

On your sample data set, I get the following timings:
https://scoop.eduncle.com/pls-tell-mistake-in-explanation-i-attached-for-option-1
IIT JAM · November 25, 2021, 5:30 pm · 30 pts

pls tell mistake in explanation i attached.

Navdeep goyal · Best Answer

You took y = m − x, which is wrong because it does not pass through the origin.
https://socratic.org/questions/how-do-you-solve-the-following-system-2x-5y-20-x-4y-16
# How do you solve the following system: -2x + 5y = 20, x + 4y = 16?

Rachel · Mar 21, 2018

#### Explanation:

Let's use substitution:

$- 2 x + 5 y = 20$
$x + 4 y = 16$

We need to solve for $x$ in the second equation:

$x = 16 - 4 y$

Now we substitute $\left(16 - 4 y\right)$ for $x$ in the first equation:

$- 2 \left(16 - 4 y\right) + 5 y = 20$

Distribute the $- 2$:

$- 32 + 8 y + 5 y = 20$

Solve for $y$. Add $32$ to both sides:

$13 y = 52$

Divide both sides by $13$:

$y = 4$

We have $y$; let's find $x$:

$x = 16 - 4 y = 16 - 4 \left(4\right) = 16 - 16 = 0$

To check our work, plug our values for $x$ and $y$ into the first equation and see if it equals $20$:

$- 2 \left(0\right) + 5 \left(4\right) = 0 + 20 = 20$

It equals $20$! We were right.
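The substitution steps above can be mirrored in a short, illustrative Python snippet; the helper name `solve_by_substitution` is made up for this sketch, and `Fraction` is used only to keep the arithmetic exact.

```python
from fractions import Fraction

def solve_by_substitution():
    """Solve -2x + 5y = 20, x + 4y = 16 exactly, following the steps above."""
    # From the second equation: x = 16 - 4y.
    # Substitute into the first: -2(16 - 4y) + 5y = 20  ->  -32 + 13y = 20.
    y = Fraction(20 + 32, 13)
    x = 16 - 4 * y
    # Check the solution in the first equation, as done above.
    assert -2 * x + 5 * y == 20
    return x, y
```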
https://www.physicsforums.com/threads/gradient-vector-as-normal-vector.236323/
# Gradient vector as Normal vector

1. May 21, 2008

### hotcommodity

I'm trying to understand why the gradient vector is always normal to a surface in space. My textbook describes r(t) as a curve along the surface in space. Subsequently, r'(t) is tangent to this curve and perpendicular to the gradient vector at some point P, which implies the gradient vector to be a normal vector. My question is, how can r(t) be a curve? I thought the position vector was a straight vector that stems from the origin of the coordinate system. My textbook shows r(t) as a curved double arrow that lies on the surface in space. Any help is appreciated.

2. May 21, 2008

### arildno

Suppose that a surface S can be written as a level surface of a scalar function f, that is, the equation $$f(x,y,z)=K$$ describes the surface S. But a surface may also be parametrized, that is, the set of points (x,y,z) in S may exactly be described by a parametrization: $$\vec{x}=(x,y,z)=\vec{S}(u,v)$$ Therefore, we have that for all u,v, the identity holds: $$f(\vec{S}(u,v))=K$$ Differentiating with respect to u yields: $$\nabla{f}\cdot\frac{\partial\vec{S}}{\partial{u}}=0$$ But $$\frac{\partial\vec{S}}{\partial{u}},\frac{\partial\vec{S}}{\partial{v}}$$ are TANGENT vectors to surface S; hence, the gradient of f must be the normal vector to S.

3. May 21, 2008

### hotcommodity

I understand what you're saying about the gradient being normal to the surface in space. My textbook follows the same reasoning that you've delineated above. I just don't understand why my book shows the position vector r(t) as a curved double-arrow. Maybe it's just a bad representation of the position vector... I don't know. Also, if you (or someone) could help me understand a related topic, I would greatly appreciate it. I've been studying Lagrange multipliers, where it's important to understand the gradient as a normal vector.
In the context of functions of two variables, where the maxima and minima of a function f(x, y) are subject to the constraint g(x, y) = 0, the extrema occur where grad(f) is parallel to grad(g) in the xy-plane, that is, where the level curves of f(x, y) = (some constant) and g(x, y) = 0 are tangent to one another. I understand how the geometry works out, but I don't understand why the two gradients being parallel implies extrema...

4. May 21, 2008

### HallsofIvy

The picture your book is showing is the range of the r(t) function. For each t, r(t) is a vector from the origin to some point (x(t),y(t)). The curve your book is showing is the set {(x(t),y(t))} for all t in the domain.

5. May 21, 2008

### hotcommodity

I see, that makes much more sense, thank you :)

6. May 23, 2008

### arildno

Sure. In the region where g(x,y)=0, the function F(x,y,k)=f(x,y)+kg(x,y) coincides with f(x,y). Extrema for F require $\nabla{F}=0$; note in particular that we get: $$\frac{\partial{F}}{\partial{k}}=g(x,y)=0$$ That is, ALL extrema of F will lie in that region in which F coincides with f, and therefore solve the problem of extremizing f under the constraint g=0.

7. May 23, 2008

### hotcommodity

Thanks for the reply. I have a few questions... so F(x,y,k) is essentially a function that evaluates all of the points that satisfy g(x,y) = 0, correct? Is F(x,y,k) called the Lagrangian? And finally, how can one show that F(x,y,k) coincides with f(x,y) for points on and near the level curve? Thanks again. Last edited: May 23, 2008

8. Aug 26, 2011

### msslowlearner

i'm using a textbook that says nothing at all about a unit normal vector, except that the gradient is normal to the surface at the point given. as arildno said, can i say $\partial S/\partial x$ and $\partial S/\partial y$ can be considered as the components of the unit normal vector? but wouldn't that be just those of the gradient itself?
i'm trying to get back to my books after a 3 year gap without an instructor, so please bear with my doubts, as stupid as they may be. Pls help too.

9. Aug 26, 2011

### msslowlearner

sorry, my understanding of what arildno said was wrong. he actually meant to show how the gradient is normal to the surface, not anything about the unit normal vector. Can anybody tell me what the unit normal vector to a surface is?

10. Aug 26, 2011

### LCKurtz

Typically when you take the gradient at a point you don't get a unit normal, so you need to divide it by its length to get one. One example where it is used is in calculating flux integrals. There, you need the component of the flux vector which is normal to the surface in the integral, so you dot it with the unit normal, as in $$\iint_S \vec F \cdot \hat n\, dS$$

11. Jan 18, 2013

### medwatt

Though this is now more than a year old, I would like to add an intuitive approach which I hope should make sense. The reason why the gradient vector seems hard to visualize as a normal vector is our taking-it-for-granted attitude in introductory calculus that gradient and tangent are synonymous, whereas gradient and normal are not. Ok... Imagine you have a bowl, which is a surface as you might expect... take any arbitrary point and try to tell me the tangent... Well, you can't, because you don't know the direction... Imagine a pencil as the tangent line... Because you can rotate the pencil in any direction while the pencil is still a tangent to the bowl at that point, the tangent can have a lot of values... As a reminder, the gradient of z=f(x,y) tells us how much z is changing with respect to both x and y. The directional derivative is given by the dot product ∇f⋅u, where u is the unit vector in any direction. From the idea of a dot product, ∇f⋅u will be maximum when both vectors are parallel.
Since $-1 \le \cos(\theta) \le 1$, there can only be two directions in which z has maximum ascent or maximum descent. From geometry we know that the shortest distance between two lines is along a perpendicular. What this means is that at any point we have only one direction of maximum ascent or descent, and it occurs along a perpendicular from the level curve where you are to the nearest level curve. Hence moving in the direction of the gradient is the shortest path from one level curve to the next, which must be a perpendicular (normal); otherwise it is not the shortest distance. I hope I was able to convey the way I understand it...

12. Jan 19, 2013

### lavinia

The points in space where the function takes on a particular constant value form a surface. If you agree with this (proof?), then the function must have zero derivative along any curve on the surface (because the derivative of a constant is zero). This means that the inner product of the gradient with any tangent vector to the surface is zero - since any tangent vector is tangent to some curve on the surface. This follows from the Chain Rule.
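The identity $\nabla f \cdot \partial\vec{S}/\partial u = 0$ discussed in this thread can also be checked numerically. Here is an illustrative Python sketch (the function names are made up) using the unit sphere $f(x,y,z) = x^2 + y^2 + z^2 = 1$ with the usual spherical parametrization:

```python
import math

# Level surface f(x,y,z) = x^2 + y^2 + z^2 = 1 (the unit sphere),
# parametrized by spherical angles u, v.
def S(u, v):
    return (math.sin(u) * math.cos(v), math.sin(u) * math.sin(v), math.cos(u))

def grad_f(p):
    """Gradient of f(x,y,z) = x^2 + y^2 + z^2 at point p."""
    x, y, z = p
    return (2 * x, 2 * y, 2 * z)

def tangent(which, u, v, eps=1e-6):
    """Central-difference approximation of dS/du (which=0) or dS/dv (which=1)."""
    du, dv = (eps, 0.0) if which == 0 else (0.0, eps)
    a, b = S(u + du, v + dv), S(u - du, v - dv)
    return tuple((ai - bi) / (2 * eps) for ai, bi in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, v = 0.7, 1.3
g = grad_f(S(u, v))
# The gradient is orthogonal to both tangent vectors, hence normal to the sphere.
assert abs(dot(g, tangent(0, u, v))) < 1e-6
assert abs(dot(g, tangent(1, u, v))) < 1e-6
```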
http://www.packtpub.com/building-machine-learning-systems-with-python/book?tag=dg/scikitlearningpython-abr1/1113
# Building Machine Learning Systems with Python

Overview

• Master Machine Learning using a broad set of Python libraries and start building your own Python-based ML systems
• Covers classification, regression, feature engineering, and much more, guided by practical examples
• A scenario-based tutorial to get into the right mind-set of a machine learner (data exploration) and successfully implement this in your new or existing projects

Book Details

Language: English
Paperback: 290 pages [235mm x 191mm]
Release Date: July 2013
ISBN: 1782161406
ISBN 13: 9781782161400
Author(s): Willi Richert, Luis Pedro Coelho
Topics and Technologies: All Books, Big Data and Business Intelligence, Open Source, Python

Contents: Preface • Chapter 1: Getting Started with Python Machine Learning • Chapter 2: Learning How to Classify with Real-world Examples • Chapter 3: Clustering – Finding Related Posts • Chapter 4: Topic Modeling • Chapter 5: Classification – Detecting Poor Answers • Chapter 6: Classification II – Sentiment Analysis • Chapter 7: Regression – Recommendations • Chapter 8: Regression – Recommendations Improved • Chapter 9: Classification III – Music Genre Classification • Chapter 10: Computer Vision – Pattern Recognition • Chapter 11: Dimensionality Reduction • Chapter 12: Big(ger) Data • Index

• Chapter 1: Getting Started with Python Machine Learning • Machine learning and Python – the dream team • What the book will teach you (and what it will not) • What to do when you are stuck • Getting started • Introduction to NumPy, SciPy, and Matplotlib • Installing Python • Chewing data efficiently with NumPy and intelligently with SciPy • Learning NumPy • Indexing • Handling
non-existing values • Comparing runtime behaviors • Learning SciPy • Our first (tiny) machine learning application • Preprocessing and cleaning the data • Choosing the right model and learning algorithm • Before building our first model • Starting with a simple straight line • Stepping back to go forward – another look at our data • Training and testing • Summary • Chapter 2: Learning How to Classify with Real-world Examples • The Iris dataset • The first step is visualization • Building our first classification model • Evaluation – holding out data and cross-validation • Building more complex classifiers • A more complex dataset and a more complex classifier • Learning about the Seeds dataset • Features and feature engineering • Nearest neighbor classification • Binary and multiclass classification • Summary • Chapter 3: Clustering – Finding Related Posts • Measuring the relatedness of posts • How not to do it • How to do it • Preprocessing – similarity measured as similar number of common words • Converting raw text into a bag-of-words • Counting words • Normalizing the word count vectors • Removing less important words • Stemming • Installing and using NLTK • Extending the vectorizer with NLTK's stemmer • Stop words on steroids • Our achievements and goals • Clustering • KMeans • Getting test data to evaluate our ideas on • Clustering posts • Solving our initial challenge • Another look at noise • Tweaking the parameters • Summary • Chapter 4: Topic Modeling • Latent Dirichlet allocation (LDA) • Building a topic model • Comparing similarity in topic space • Modeling the whole of Wikipedia • Choosing the number of topics • Summary • Chapter 5: Classification – Detecting Poor Answers • Learning to classify classy answers • Tuning the instance • Tuning the classifier • Fetching the data • Slimming the data down to chewable chunks • Preselection and processing of attributes • Defining what is a good answer • Creating our first classifier • Starting with the 
k-nearest neighbor (kNN) algorithm • Engineering the features • Training the classifier • Measuring the classifier's performance • Designing more features • Deciding how to improve • Fixing high bias • Fixing high variance • High bias or low bias • Using logistic regression • A bit of math with a small example • Applying logistic regression to our postclassification problem • Looking behind accuracy – precision and recall • Slimming the classifier • Ship it! • Summary • Chapter 6: Classification II – Sentiment Analysis • Introducing the Naive Bayes classifier • Getting to know the Bayes theorem • Being naive • Using Naive Bayes to classify • Accounting for unseen words and other oddities • Accounting for arithmetic underflows • Creating our first classifier and tuning it • Solving an easy problem first • Using all the classes • Tuning the classifier's parameters • Cleaning tweets • Taking the word types into account • Determining the word types • Successfully cheating using SentiWordNet • Our first estimator • Putting everything together • Summary • Chapter 7: Regression – Recommendations • Predicting house prices with regression • Multidimensional regression • Cross-validation for regression • Penalized regression • L1 and L2 penalties • Using Lasso or Elastic nets in scikit-learn • P greater than N scenarios • An example based on text • Setting hyperparameters in a smart way • Rating prediction and recommendations • Summary • Chapter 8: Regression – Recommendations Improved • Improved recommendations • Using the binary matrix of recommendations • Looking at the movie neighbors • Combining multiple methods • Obtaining useful predictions • Association rule mining • Summary • Chapter 9: Classification III – Music Genre Classification • Fetching the music data • Converting into a wave format • Looking at music • Decomposing music into sine wave components • Using FFT to build our first classifier • Increasing experimentation agility • Training the classifier • Using 
the confusion matrix to measure accuracy in multiclass problems • An alternate way to measure classifier performance using receiver operator characteristic (ROC) • Improving classification performance with Mel Frequency Cepstral Coefficients • Summary • Chapter 10: Computer Vision – Pattern Recognition • Introducing image processing • Basic image processing • Thresholding • Gaussian blurring • Filtering for different effects • Adding salt and pepper noise • Putting the center in focus • Pattern recognition • Computing features from images • Classifying a harder dataset • Local feature representations • Summary • Chapter 11: Dimensionality Reduction • Selecting features • Detecting redundant features using filters • Correlation • Mutual information • Other feature selection methods • Feature extraction • About principal component analysis (PCA) • Sketching PCA • Applying PCA • Limitations of PCA and how LDA can help • Multidimensional scaling (MDS) • Summary • Chapter 12: Big(ger) Data • Reusing partial results • Looking under the hood • Using jug for data analysis • Using Amazon Web Services (AWS) • Installing Python packages on Amazon Linux • Running jug on our cloud machine • Automating the generation of clusters with starcluster • Summary Willi Richert Willi Richert has a PhD in Machine Learning and Robotics, and he currently works for Microsoft in the Core Relevance Team of Bing, where he is involved in a variety of machine learning areas such as active learning and statistical machine translation. Luis Pedro Coelho Luis Pedro Coelho is a Computational Biologist: someone who uses computers as a tool to understand biological systems. Within this large field, Luis works in Bioimage Informatics, which is the application of machine learning techniques to the analysis of images of biological specimens. His main focus is on the processing of large scale image data. 
With robotic microscopes, it is possible to acquire hundreds of thousands of images in a day, and visual inspection of all the images becomes impossible. Luis has a PhD from Carnegie Mellon University, which is one of the leading universities in the world in the area of machine learning. He is also the author of several scientific publications. Luis started developing open source software in 1998 as a way to apply to real code what he was learning in his computer science courses at the Technical University of Lisbon. In 2004, he started developing in Python and has contributed to several open source libraries in this language. He is the lead developer on mahotas, the popular computer vision package for Python, and has contributed several machine learning codes.

Errata - 45 submitted: last submission 04 Jul 2014

Errata type: Code | Page number: 30
The text says "Let fbt2 be the winning polynomial of degree 2" but it is not shown how it is declared. With fbt2 the author was referring to the polynomial of degree 2 and did not show the code explicitly, as it is obvious after having shown how to fit the polynomial for 1d. In the code file analyze_webstats.py, you can see how it is calculated:
train = sorted(shuffled[split_idx:])
fbt1 = sp.poly1d(sp.polyfit(xb[train], yb[train], 1))
fbt2 = sp.poly1d(sp.polyfit(xb[train], yb[train], 2))
fbt3 = sp.poly1d(sp.polyfit(xb[train], yb[train], 3))
fbt10 = sp.poly1d(sp.polyfit(xb[train], yb[train], 10))
fbt100 = sp.poly1d(sp.polyfit(xb[train], yb[train], 100))

Errata type: Code | Page number: 30
Line 8 in the first code snippet: clf.fit(X, Y) should be: clf.fit(X_train, Y_train)

Errata type: Code | Page number: 35
# use numpy operations to get setosa features
is_setosa = (labels == 'setosa')
The variable labels is not defined earlier.
One has to replace that code with:
is_setosa = (target == 0)
Another solution the author suggests is to add the following lines before is_setosa = (labels == 'setosa'):
target_names = data['target_names']
labels = target_names[target]

Errata type: Code | Page number: 54
Next-to-last code block; the method body should be indented:
>>> import scipy as sp
>>> def dist_raw(v1, v2):
...     delta = v1-v2
...     return sp.linalg.norm(delta.toarray())

Errata type: Code | Page number: 55
Missing line where "dist" is set to "dist_raw". The method "dist_raw" is defined at the bottom of page 54, but then "dist" is used on page 55. Confusing. In the downloaded code, you can see the omitted line.

Errata type: Code | Page number: 58
This should have a line break before u'imagin':
s.stem("imagination")
u'imagin'

Errata type: Code | Page number: 67
'comp.sys.ma c.hardware' should not have a space. This was in the results generated by the author:
>>> print(len(train_data.filenames))
3414
The actual result is "4119" if the group name is fixed.

Errata type: Code | Page number: 68
"We now have a pool of 3,414..." See the previous error on page 67. The errant space in the code causes the entire "comp.sys.mac.hardware" group to be skipped.

Note from the authors: Code for Chapter 6
As you might know, Twitter changes its API from time to time. When the authors edited the book, they asked Twitter whether they would release the data to ease this foreseeable pain point of our users. Unfortunately, this was not possible. Therefore, they have put all the book's code examples on GitHub, where they actively maintain it. In the case of chapter 6, Twitter changed its API in version 1.1 to now require user authentication.
Willi has just pushed an updated version of the code that correctly handles the new API 1.1: https://github.com/luispedro/BuildingMachineLearningSystemsWithPython/tree/master/ch06

Errata type: Typo | Page number: 261
In the Books section, the line: If you are interested in that aspect of machine learning, then we recommend Pattern Recognition and Machine Learning, C. Bishop , Springer (the book title should be set in italics). Should be: If you are interested in that aspect of machine learning, then we recommend *Pattern Recognition and Machine Learning*, C. Bishop, Springer.

Errata type: Typo | Page number: 66
Line 18, For convenience, the dataset module also contains... Should be: For convenience, the datasets module also contains...

Errata type: Typo | Page number: 13
The text on the bottom of the page says "In this case, it's a one-dimensional array of five elements." Should be: In this case, it's a one-dimensional array of six elements.

Errata type: Typo | Page number: 245
As we saw in Chapter 10, Computer Vision–Pattern Recognition Finding Related Posts, this can easily be done by changing the computation code feature. Should be: As we saw in Chapter 10, Computer Vision–Pattern Recognition, this can easily be done by changing the computation code feature.

Errata type: Typo | Page number: 247
We will now look back at Chapter 10, Computer Vision–Pattern Recognition Finding Related Posts. Should be: We will now look back at Chapter 10, Computer Vision–Pattern Recognition.

Errata type: Code | Page number: 194
X, Y = [],[] in the read_ceps() function Should be: X, y = [],[]

Errata type: Graphics | Page number: 127
The third formula should have a log in the last term, like this:
C_{best} = arg max_{c \in C} log(P(C=c)) + \sum_k log(P(F_k|C=c))

Errata type: Graphics | Page number: 127
The second formula is as follows:
C_{best} = arg max_{c \in C} log(P(C=c)) + log(P(F_1|C=c)) + log(P(F_2|C=c))

Suggestion | Page number: 126
One of our readers has suggested giving the second formula as follows:
log(P(C) \cdot P(F_1|C) \cdot P(F_2|C)) = log(P(C)) + log(P(F_1|C)) + log(P(F_2|C))
The original formula given in the book is, however, correct.

Errata type: Code | Page number: 76
Last line in the code:
corpus = corpora.BleiCorpus('./data/ap/ap.dat', '/data/ap/vocab.txt')
Should be:
corpus = corpora.BleiCorpus('./data/ap/ap.dat', './data/ap/vocab.txt')

Errata type: Code | Page number: 14
In the code snippet at the bottom, the array has five instead of six elements, so the result of the operations is different.

Errata type: Code | Page number: 71
z = (len(post[0]), post[0], dataset.target_names[post[1]]) for post in post_group
has a syntax error and can be fixed like this:
z = [(len(post[0]), post[0], dataset.target_names[post[1]]) for post in post_group]

Errata type: Code | Page number: 111
Last code line:
precision_recall_curve(y_test, clf.predict(X_test)
Should be:
precision_recall_curve(y_test, clf.predict(X_test))

Errata type: Code | Page number: 110
false negative (FN) -> true negative (TN)
true negative (TN) -> false negative (FN)
that instance is said to be a false negative
Should be:
that instance is said to be a true negative

Errata type: Code | Page number: 122
In the middle of the page:
P(F2=1|C="pos") = 2/4 = 0.25
Should be:
P(F2=1|C="pos") = 2/4 = 0.5

Errata type: Code | Page number: 23
At the top of the page:
print(res)
Should be:
print(residuals)

Errata type: Code | Page number: 26
In the code snippet, the next-to-last line is:
print("Error inflection=%f" % (fa + fb_error))
It should be the sum of the two errors, fa_error + fb_error, not a function value plus an error. The correct code is:
print("Error inflection=%f" % (fa_error + fb_error))

Errata type: Code
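The corrected page-127 formula, C_best = argmax_c log(P(C=c)) + sum_k log(P(F_k|C=c)), is easy to evaluate with numpy. This sketch uses made-up priors and likelihoods, not the book's tweet data:

```python
import numpy as np

# Hypothetical class priors P(C=c) and per-class feature
# likelihoods P(F_k | C=c), one row per class.
log_prior = np.log(np.array([0.6, 0.4]))
log_lik = np.log(np.array([[0.5, 0.2],
                           [0.1, 0.7]]))

# log P(C=c) + sum_k log P(F_k | C=c), as in the corrected formula.
scores = log_prior + log_lik.sum(axis=1)
c_best = int(np.argmax(scores))
print(c_best)
```

Working in log space avoids the numerical underflow that multiplying many small probabilities would cause.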
| Page number: 36
Accuracy is simply the fraction of examples that the model classifies correctly:
best_acc = -1.0
for fi in xrange(features.shape[1]):
    # We are going to generate all possible thresholds for this feature
    thresh = features[:,fi].copy()
    thresh.sort()
    # Now test all thresholds:
    for t in thresh:
        pred = (features[:,fi] > t)
        acc = (pred == virginica).mean()
        if acc > best_acc:
            best_acc = acc
            best_fi = fi
            best_t = t
The correct code should be:
best_acc = -1.0
best_fi = -1.0
best_t = -1.0
for fi in xrange(features.shape[1]):
    thresh = features[:, fi].copy()
    thresh.sort()
    for t in thresh:
        pred = (features[:,fi] > t)
        acc = (labels[pred] == 'virginica').mean()
        if acc > best_acc:
            best_acc = acc
            best_fi = fi
            best_t = t

Errata type: Typo | Page number: 37
If we run it on the whole data, the best model that we get is split on the petal length.
Should be:
If we run it on the whole data, the best model that we get is split on the petal width.

Errata type: Code | Page number: 38 | Location 818
"Training error" should be "Training accuracy". The author has also updated the code available on GitHub.

Errata type: Code | Page number: 22-23
Change:
>>> print(res)
to:
>>> print(residuals)

Errata type: Technical | Page number: 123
The last element in the first column should be "text" instead of "awesome text". However, the value of F1 will then change from "0" to "1", because the tweet contains "awesome" and F1 denotes the number of times "awesome" appears in the tweet text.
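The page-36 threshold search above can be exercised on synthetic data. This is a Python 3 sketch with made-up feature values (not the Iris dataset), and it scores plain accuracy against the boolean target rather than the book's labels[pred] variant:

```python
import numpy as np

# Synthetic 1-feature, 2-class data.
features = np.array([[1.0], [2.0], [3.0], [4.0]])
labels = np.array(['other', 'other', 'virginica', 'virginica'])
target = (labels == 'virginica')

best_acc = -1.0
best_fi = -1
best_t = None
for fi in range(features.shape[1]):
    # Every observed value is a candidate threshold.
    thresh = features[:, fi].copy()
    thresh.sort()
    for t in thresh:
        pred = (features[:, fi] > t)
        acc = (pred == target).mean()
        if acc > best_acc:
            best_acc, best_fi, best_t = acc, fi, t

print(best_acc, best_fi, best_t)
```

On this toy data the threshold 2.0 separates the classes perfectly, so the search reaches accuracy 1.0.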
Errata type: Code | Page number: 35
The first line in the last code snippet should change from:
if features[:,2] < 2: print 'Iris Setosa'
to:
if fs[2] < 2: print 'Iris Setosa'

Errata type: Code | Page number: 174
In the first code block, line 5, use:
dataset = [[int(tok) for tok in line.strip().split()] for line in GzipFile('retail.dat.gz')]
instead of:
dataset = [[int(tok) for tok in ,line.strip().split()] for line in GzipFile('retail.dat.gz')]

Errata type: Code | Page number: 177
Use:
for item in itemset:
    consequent = frozenset([item])
    antecendent = itemset - consequent
    base = 0.0
    # acount: antecedent count
    acount = 0.0
    ccount = 0.0
instead of:
for item in itemset:
    consequent = frozenset([item])
    antecendent = itemset=consequent
    base = 0.0
    acount = 0.0
    ccount = 0.0

Errata type: Technical | Page number: 205
Use the 'Ridler Calvard' method instead of the 'Ridley Calvard' method.

Errata type: Code | Page number: 175
new_itemsets = []
for iset in itemsets:
    for v in valid:
        if v not in iset:
            newset = (ell | set([v_]))
            c_newset = 0
            for d in dataset:
                if d.issuperset(c):
                    c_newset += 1
            if c_newset > minsupport:
                newsets.append(newset)
should be:
new_itemsets = []
for iset in itemsets:
    for v in valid:
        if v not in iset:
            newset = (iset | set([v]))
            c_newset = 0
            for d in dataset:
                if d.issuperset(newset):
                    c_newset += 1
            if c_newset > minsupport:
                new_itemsets.append(newset)

Type: Technical | Page: 188
Replace "only 4 out of 24 jazz songs have been correctly classified; that is only 16 percent" with "only 7 out of 24 jazz songs have been correctly classified; that is only 29 percent".

Errata type: Code | Page number: 67
Correction:
vectorized = vectorizer.fit_transform(dataset.data)
should be:
vectorized = vectorizer.fit_transform(train_data.data)

Errata type: Technical | Page number: 103, 104, 105
The graphs on these pages should label the dashed line as "test error", not "train error".
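The corrected page-175 candidate-growing loop can be run end to end on a toy basket dataset. This sketch uses hypothetical transactions and a dedup step in place of the book's surrounding variables:

```python
# Toy transactions; minsupport and the starting itemsets are invented
# for illustration, not taken from the book's retail dataset.
dataset = [frozenset(t) for t in ([1, 2, 3], [1, 2], [2, 3], [1, 3])]
minsupport = 1
itemsets = [frozenset([1]), frozenset([2]), frozenset([3])]
valid = {1, 2, 3}

new_itemsets = []
for iset in itemsets:
    for v in valid:
        if v not in iset:
            # Grow the itemset by one item, then count its support.
            newset = iset | frozenset([v])
            c_newset = sum(1 for d in dataset if d.issuperset(newset))
            if c_newset > minsupport and newset not in new_itemsets:
                new_itemsets.append(newset)

print(sorted(tuple(sorted(s)) for s in new_itemsets))
```

Each surviving candidate appears in more than minsupport transactions, which is exactly the pruning the corrected code performs.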
Errata type: Code | Page number: 223
Solution: use
>> from scipy.stats import pearsonr
instead of
>> from import scipy.stats import pearsonr

Errata type: Code | Page number: 209
Use:
W = np.exp(-2.*(X**2 + Y**2))
# Normalize again to 0..1
W = W - W.min()
W = W / W.ptp()
W = W[:,:,None]  # This adds a dummy third dimension to W
instead of:
W = np.exp(-2.*(X**2 + Y**2))
# Normalize again to 0..1
W = W - C.min()
W = W / C.ptp()
W = C[:,:,None]  # This adds a dummy third dimension to W

Errata type: Code | Page number: 194
The indentation should be as follows:
def read_ceps(genre_list, base_dir=GENRE_DIR):
    X, y = [], []
    for label, genre in enumerate(genre_list):
        for fn in glob.glob(os.path.join(base_dir, genre, "*.ceps.npy")):
            ceps = np.load(fn)
            num_ceps = len(ceps)
            X.append(
                np.mean(ceps[int(num_ceps / 10):int(num_ceps * 9 / 10)], axis=0))
            y.append(label)
    return np.array(X), np.array(y)

Errata type: Code | Page number: 168
Use:
movie_likeness = np.zeros((nmovies,nmovies))
allms = np.ones(nmovies, bool)
cs = np.zeros(nmovies)
instead of:
movie_likeness = np.zeros((nmovies,nmovies))

Errata type: Technical | Page number: 177
In the table, replace "13791" with "1379".

Errata type: Code | Page number: 162
Replace "return x, xl" with "return xc, xl".

Errata type: Technical | Page number: 215
The text should be "In this dataset, however, the texture is not a clear marker of the class." instead of "In this dataset, however, the texture is a clear marker of the class."
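The corrected page-209 snippet builds a Gaussian weight mask and rescales it to the 0..1 range. A self-contained sketch on an assumed 32x32 grid (the grid size is invented for illustration):

```python
import numpy as np

# A 32x32 coordinate grid over [-1, 1] x [-1, 1].
Y, X = np.mgrid[-1:1:32j, -1:1:32j]

# Gaussian weight mask, highest near the center.
W = np.exp(-2. * (X**2 + Y**2))

# Normalize again to 0..1 using the peak-to-peak range.
W = W - W.min()
W = W / np.ptp(W)

# Dummy third dimension, so W broadcasts against an RGB image.
W = W[:, :, None]

print(W.shape, float(W.min()), float(W.max()))
```

After the shift-and-scale, the minimum is exactly 0 and the maximum exactly 1, which is what the "normalize to 0..1" comment in the book promises.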
Errata type: Code | Page number: 169
Use:
def nn_movie(movie_likeness, reviews, uid, mid):
    likes = movie_likeness[mid].argsort()
    # reverse the sorting so that most alike are in
    # the beginning
    likes = likes[::-1]
    # returns the rating for the most similar movie available
    for ell in likes:
        if reviews[uid,ell] > 0:
            return reviews[uid,ell]
instead of:
def nn_movie(movie_likeness, reviews, uid, mid):
    likes = movie_likeness[mid].argsort()
    # reverse the sorting so that most alike are in
    # the beginning
    likes = likes[::-1]
    # returns the rating for the most similar movie available
    for ell in likes:
        if reviews[u,ell] > 0:
            return reviews[u,ell]

Errata type: Code | Page number: 254
Under the Running jug on our cloud machine section, the sentence "We can now download the data and code from Chapter 12, Computer Vision–Pattern Recognition Finding Related Posts, as follows" and the code snippet after it should be removed.

Errata type: Code | Page number: 247
def label_for(f):
    return f[:-(3+1+2)]  # 3 for "jpg", 1 for the dot, and 2 for the number
and then:
labels = map(label_for, filenames)

Errata type: Technical | Page number: 31
Use "scikit-learn".

Errata type: Code | Page number: 35
Use:
def apply_model(example):
    if example[2] < 2: print 'Iris Setosa'
    else: print 'Iris Virginica or Iris Versicolour'
instead of:
if features[:,2] < 2: print 'Iris Setosa'
else: print 'Iris Virginica or Iris Versicolour'

Errata type: Typo | Page number: 92
Use "PostTypeId" instead of "PostType" in the table.

Errata type: Typo | Page number: 167
Use:
NumPy ships with np.corrcoef, which computes correlations.
instead of:
NumPy ships with np.corrcoeff, which computes correlations.
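The corrected nn_movie above falls back to the user's rating of the most similar movie that user has actually rated. A runnable sketch on a tiny hypothetical similarity matrix and ratings table:

```python
import numpy as np

# Hypothetical 3x3 movie-similarity matrix and a 2-user ratings
# table (0 means "not rated"); values are invented for illustration.
movie_likeness = np.array([[1.0, 0.9, 0.2],
                           [0.9, 1.0, 0.4],
                           [0.2, 0.4, 1.0]])
reviews = np.array([[0, 5, 3],
                    [4, 0, 0]])

def nn_movie(movie_likeness, reviews, uid, mid):
    likes = movie_likeness[mid].argsort()
    # reverse the sorting so that most alike are in the beginning
    likes = likes[::-1]
    # return the rating for the most similar movie available
    for ell in likes:
        if reviews[uid, ell] > 0:
            return reviews[uid, ell]

print(nn_movie(movie_likeness, reviews, 0, 0))
```

For user 0 and movie 0, the loop skips the unrated movie itself and returns the rating of the next most similar movie.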
Errata type: Code | Page number: 207
Use the following:

Errata type: Typo | Page number: 107
Use:
by replacing y with log(odds)
instead of:
by replacing y with p

Errata type: Code | Page number: 112
Use:
>>> medium = np.argsort(scores)[len(scores) / 2]
>>> thresholds = np.hstack(([0],thresholds[medium]))
>>> idx80 = precisions>=0.8
>>> print("P=%.2f R=%.2f thresh=%.2f" % \
    (precision[idx80][0], recall[idx80][0], threshold[idx80][0]))
P=0.81 R=0.37 thresh=0.63
instead of:
>>> thresholds = np.hstack(([0],thresholds[medium]))
>>> idx80 = precisions>=0.8
>>> print("P=%.2f R=%.2f thresh=%.2f" % \
    (precision[idx80][0], recall[idx80][0], threshold[idx80][0]))
P=0.81 R=0.37 thresh=0.63

Errata type: Typo | Page number: 93
Use:
The PostTypeId attribute, for example, is necessary to distinguish between questions and answers. It will not be picked to serve as a feature, but we will need it to filter the data.
instead of:
The PostType attribute, for example, is only necessary to distinguish between questions and answers. Furthermore, we can distinguish between them later by checking for the ParentId attribute. So, we keep it for questions too, and set it to 1.

Type: Code | Page: 99
# which we don't want to count
html_free_s = re.sub(" +", " ", tag_match.sub('', code_free_s)).replace("\n", "")

Type: Code | Page: 99
The code snippet should be:
...
html_free_s = re.sub(" +", " ", tag_match.sub('', code_free_s)).replace("\n", "")
# remove links from text before counting words
...

Errata type: Graphics | Page number: 131
The graph on this page should indicate the P/R AUC value as 67 percent.

Errata type: Typo | Page number: 207
Use:
Only about 5 percent of these values will be true.
instead of:
Only 1 percent of these values will be true.
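The page-112 snippet picks the first threshold whose precision reaches 0.8. A self-contained sketch with made-up precision/recall/threshold arrays (chosen only for illustration):

```python
import numpy as np

# Hypothetical precision/recall values at increasing thresholds.
precisions = np.array([0.55, 0.63, 0.81, 0.90])
recalls = np.array([0.95, 0.70, 0.37, 0.20])
thresholds = np.array([0.20, 0.45, 0.63, 0.80])

# Select the first operating point with precision >= 0.8.
idx80 = precisions >= 0.8
print("P=%.2f R=%.2f thresh=%.2f" % (
    precisions[idx80][0], recalls[idx80][0], thresholds[idx80][0]))
```

The boolean mask keeps all points at or above the target precision; indexing with [0] takes the first of them, i.e. the lowest qualifying threshold.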
What you will learn from this book
• Build a classification system that can be applied to text, images, or sounds
• Use scikit-learn, a Python open-source library for machine learning
• Explore the mahotas library for image processing and computer vision
• Build a topic model of the whole of Wikipedia
• Get to grips with recommendations using basket analysis
• Use the Jug package for data analysis
• Employ Amazon Web Services to run analyses on the cloud
• Recommend products to users based on past purchases

In Detail
Machine learning, the field of building systems that learn from data, is exploding on the Web and elsewhere. Python is a wonderful language in which to develop machine learning applications. As a dynamic language, it allows for fast exploration and experimentation, and an increasing number of machine learning libraries are being developed for Python.
Building Machine Learning Systems with Python shows you exactly how to find patterns in raw data. The book starts by brushing up your Python ML knowledge and introducing libraries, then moves on to more serious projects on datasets, modeling, and recommendations, improving recommendations through examples, and working through sound and image processing in detail.
Using open-source tools and libraries, readers will learn how to apply methods to text, images, and sounds. You will also learn how to evaluate, compare, and choose machine learning techniques.
Written for Python programmers, Building Machine Learning Systems with Python teaches you how to use open-source libraries to solve real problems with machine learning. The book is based on real-world examples that the reader can build on. Readers will learn how to write programs that classify the quality of StackOverflow answers or whether a music file is Jazz or Metal. They will learn regression, demonstrated by recommending movies to users.
Advanced topics such as topic modeling (finding a text's most important topics), basket analysis, and cloud computing are covered, as well as many other interesting aspects.
Building Machine Learning Systems with Python will give you the tools and understanding required to build your own systems, tailored to solve your problems.

Approach
A practical, scenario-based tutorial, this book will help you get to grips with machine learning with Python and start building your own machine learning projects. By the end of the book you will have learnt critical aspects of machine learning Python projects and experienced the power of ML-based systems by actually working on them.

Who this book is for
This book is for Python programmers who are beginners in machine learning but want to learn it. Readers are expected to know Python and be able to install and use open-source libraries. They are not expected to know machine learning, although the book can also serve as an introduction to some Python libraries for readers who do know machine learning. This book does not go into the detail of the mathematics behind the algorithms.
It primarily targets Python developers who want to learn and build machine learning into their projects, or who want to provide machine learning support to their existing projects, and see them implemented effectively.
# Numpy Einsum Outer Product The 1st command np. In implicit mode einsum computes these values. In this example, we multiply a one-dimensional vector (V) of size (3,1) and the transposed version of it, which is of size (1,3), and get back a (3,3) matrix, which is the outer product of V. In NumPy 1. We can go beyond just understanding numpy operations and make short work of mathematical theorems using concise visual proofs. iadd_shift (lshift) ¶ Add positive shift to elements. kron (a, b) [source] ¶ Kronecker product of two arrays. cov() considers its input data matrix to have observations in each column, and variables in each row, so to get numpy. einsum (subscripts, *operands, out=None, dtype=None, order='K', casting='safe', optimize=False) [source] ¶ Evaluates the Einstein summation convention on the operands. As an example, consider multiplying two matrices A and B to form a matrix C. 在tensorflow中,有很好的入门和矩阵乘法函数,但是在查看文档之后,我找不到任何内部函数来获取两个张量的外积,即通过所有可能的较小张量元素的乘积产生更大的张量(比如numpy. the x is a TensorVariable instance. The first command that we ran, just now, was this. einsumを使うといい。 numpy. (Dot) X(cross) 대상 벡터 n 차원 3. Problem is, the documentation really doesn't do the function justice. matlib import numpy as np a = np. The vector s contains singular values, each of which acts as a weight on a combination singular vectors. histogram only generates the data. Refer to numpy. The cumulative product is taken over the flattened array by default, otherwise over the specified axis. The 2nd command np. The following formula should make it clear where and are vectors. NumPy Mathematics Exercises, Practice and Solution: Write a NumPy program to generate inner, outer, and cross products of matrices and vectors. ip() (the inner product) no longer performs the outer product. outer(a,b)官方文档讲np. Trace of an array, np. Making statements based on opinion; back them up with references or personal experience. 
cov() considers its input data matrix to have observations in each column, and variables in each row, so to get numpy. NumPy - Linear Algebra: Performing Linear Algebra on several matrices which are stacked as an array. """ # Python module imports. outer only takes two vector arguments. Arrays with different sizes cannot be added, subtracted, or generally be used in arithmetic. I have the working code on samples but when I want to do them with (+2000, +2000). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an. The Einstein summation convention can be used to compute many multi-dimensional, linear algebraic array operations. For example, on a Mac platform, the pip3 command generated by the tool is:. sum (a, axis=-1. cross(a, b, axisa=-1, axisb=-1, axisc=-1, axis=None) [source] Return the cross product of two (arrays of) vectors. max() max of all values in two vectors [v,i] = max(a) v,i = a. Again, a pure numpy implementation means a fast execution. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an. Therefore, given two vectors and , their "outer product" is defined as. The SciPy library is one of the core packages for scientific computing that provides mathematical algorithms and convenience functions built on the NumPy extension of Python. Here's an example we went over in class: C = np. The cumulative product is taken over the flattened array by default, otherwise over the specified axis. Timings are performed on a intel CPU using numpy 1. In 2D, a latin hypercube is a latin square--a checkerboard--such that there is exactly one sample in each row and each column. This tutorial uses deep learning to compose one image in the style of another image (ever wish you could paint like Picasso or Van Gogh?). 
17) The dot product of n-vectors: u =(a1,…,an)and v =(b1,…,bn)is u 6 v =a1b1 +' +anbn (regardless of whether the vectors are written as rows or columns). op description status note; numpy. That is the sum of the element-by-element products. b a·b=ab 内积 inner product) Innerla (a,b)(a,b) b) a A∞B (1020) 1*101*20 2*102. If you still find this confusing, the next illustration breaks down the process into 2 steps, making it clearer:. This page introduces some basic ways to use the object for computations on arrays in Python, then concludes with how one can accelerate the inner loop in Cython. pyimport num 博文 来自: Chenglin(Ben) Yu's Miracle. Computes the Kronecker product, a composite array made of blocks of the second array scaled by the first. dot(b, out=None) Dot product of two arrays. Numpy einsum outer product. This tutorial helps NumPy or TensorFlow users to pick up PyTorch quickly. This Wikipedia article has more details on dot products. Here's an example we went over in class: C = np. outer (a, b[, out]) Compute the outer product of two vectors. The outer product, np. A non-exhaustive list of these operations, which can be computed by einsum, is shown below along with examples:. Active 12 months ago. When the grid becomes large, the look-up table might be too large to store in memory. import numpy. That leaves NumPy with OpenBLAS. When talking to colleagues I realized that not everyone knows about einsum, my favorite function for developing deep learning models. 1172 ix_ : Construct index arrays. 1173: 1174 Examples: 1175. temp = numpy. fromiter(map(partial(users_formula, S), A, B), dtype=np. If I could use Numpy functions, I would write this as: A = np. python – Understanding numpy 2D histogram – Stack Overflow February 20, 2020 Python Leave a comment Questions: I have the following 2D distribution of points. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an. swapaxes(temp, 1, 2) NumPy ufuncs, chẳng hạn như multiply, có một phương thức outer gần như làm những gì bạn muốn. Compute their outer product: W = yxT 3. Returns a dot product of two arrays. The ebook and printed book are available for purchase at Packt Publishing. outer() Compute the outer product of two vectors. Python for Data-Science Cheat Sheet: SciPy - Linear Algebra SciPy. vdot (a, b) Return the dot product of two vectors. This is a very important concept in numpy while we can used it for a variety of usage. For example, look at what happens if we never "start a new row" (i. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory and computationally efficient as possible. Outer join pandas: Returns all rows from both tables, join records from the left which have matching keys in the right table. This method computes the matrix product between the DataFrame and the values of an other Series, DataFrame or a numpy array. The call np. outer (a, b[, out]) Compute the outer product of two vectors. einsum instances in your code and tries to optimize the slowest ones by changing the order of arguments (since tf. 2つのスパースscipy行列 A, B が与えられた場合 行ごとの外積を計算したい。 numpyを使用してこれを行う方法はいくつかあります。おそらく最も簡単なもの. This tutorial uses deep learning to compose one image in the style of another image (ever wish you could paint like Picasso or Van Gogh?). ravel()) es el equivalente. But how to implement a 3-array outer product, which means : given third vector c = [c0, c1, , cP], how to get the outer product between the 3 numpy arrays. A note on python/numpy vectors. After completing this tutorial, you will know: That tensors are a generalization of matrices and are represented using n-dimensional arrays. 
This is a repository for the Google Summer of Code project on Differentiable Tensor Networks. When the elements of the vectors are complex, then the dot product of two vectors is de ned by the following relation. The list comprehension import numpy as np B = np. Outer product:: np. einsum( equation, *inputs, **kwargs ) This function returns a tensor whose elements are defined by equation , which is written in a shorthand form inspired by the Einstein summation convention. Moon Yong Joon 1 Python numpy, pandas 기초-1편 scalar product Outer product, vector product, cross product 표기. Edit: Opt_Einsum is amazing and was a big part of the optimization efforts - it has been merged fully into numpy since v1. reshape(4, 4, 2, 2) This simply unfolds the array and then recollects in order. I make row-wise outer products between them. To get more of feel for einsum, it can be useful to implement familiar NumPy array operations using the subscript notation. This chapter explains the various ways of creating tensor variables, the attributes and methods of TensorVariable and TensorType, and various basic symbolic math and arithmetic that Theano supports for tensor variables. Numpy einsum outer product. com/deep-learning. However, other implementations than Dense are free to implement things differently. Python numpy 模块, outer() 实例源码. Also, when specifying subscripts via lists, the subscripts can be arbitrary keys (unlike numpy. To get started using this object, see_来自Numpy 1. Q&A for Work. 記述が楽なので,matrixを使って書いてしまうことが多いけど,3次元以上はarrayで書くことになるので,慣れてないとバグを生む可能性が高い気がする. 皆さんどうしてるんですか.. I have the working code on samples but when I want to do them with (+2000, +2000). Done: numpy. inner (a, b) Inner product of two arrays. outer(A, B) C = numpy. Again, a pure numpy implementation means a fast execution. def inner(a, b): """ Returns the inner product of a and b for arrays of floating point types. NumPy and related information | Frankensaurus. 
Make a (very coarse) grid for computing a Mandelbrot set:>>> rl = np. einsum(), you have to pass the so-called subscripts string as an argument, followed by your input arrays. import numpy A = numpy. Args: a (cupy. The dot command performs the inner product, and the outer command performs Equation (2-3. dot: This will return the dot product of two arrays. So, let's look in greater detail at what we just saw here. It read W = xyT 1. Recommended for you. Я прочитал руководство по einsum и базовое введение ajcr. linspace (-2, 2, 5)) >>> rl array([[-2. NumPy is the fundamental package for scientific computing with Python. A matrix is organized with rows and columns. 对于两个向量而言给定向量 \vec a 和向量 \vec b (在Python中也可以说是一维数组),若 \vec a=\lef…. matrix of shape 2x3, with p updated and v unchanged. rand ( 5 ) f = np. b (N,) array_like. The first is that a 1d array is neither a row, nor a column vector. What I would do if I am to solve this problem. A matrix product between a 2D array and a suitably sized 1D array results in a 1D array: In [199]: np. ones((1,R)) is a 2-D array that happens to have only one row. That means you can take the dot product of $$a$$ with itself, without transposing the second argument. Stack Exchange network consists of 175 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. outer(A, B) C = numpy. How to perform the tensor product. A (dims,dims)-shaped matrix, whose (i,j)-th entry represents the inner product of basis vector i and basis vector j. For N-dimensional arrays, it is a sum product over the last axis of a and the second-last axis of b. fmatrix object itself is an instance of TensorType. This was accomplished by NumPy’s broadcasting rules, which implicitly expand dimensions of size 1 in one array to match the corresponding dimension of the other array. 
6, provides many flexible ways to visit all the elements of one or more arrays in a systematic fashion. GitHub Gist: instantly share code, notes, and snippets. Compute tensor dot product along specified axes for arrays >= 1-D. product: product of elements: np. The outer product, np. outer: Returns the outer product of two vectors. This is one of the 100+ free recipes of the IPython Cookbook, Second Edition, by Cyrille Rossant, a guide to numerical computing and data science in the Jupyter Notebook. Making statements based on opinion; back them up with references or personal experience. Models (Beta) Discover, publish, and reuse pre-trained models. Pandas: The Pandas library is build on NumPy and provides methods to manipulate and analyze dataframes. (2b) Element-wise multiplication and dot product ¶ NumPy arrays support both element-wise multiplication and dot product. *inputs : the inputs to contract (each one a Tensor ), whose shapes should be consistent with equation. How to compute the outer product of two matrices in numpy? 我有一个MxN Numpy数组。 我想取数组的每一行并按元素顺序乘以数组的每一行,从而得到乘积的MxMxN numpy数组。. The following are code examples for showing how to use. Where einsum makes broadcasting easy, dot for high-dimensional input is kind of like an outer product. Data descriptors defined here: T get transpose __dict__ dictionary for instance variables (if defined) __weakref__ list of weak references to the object (if defined). >>> import numpy as np >>> import matplotlib. Array objects NumPy Reference, Release 1. Arguments: other0, other1 Two-index objects that go into the outer product. The “outer” function in R creates a matrix of outer products of 2 vectors. This look-up table is an outer product of three vectors running along candidate locations, time, and frequency. shape was this funny thing (5,) so this is called a rank 1 array. The function takes as arguments the two tensors to be multiplied and the axis on which to sum the products over, called the sum reduction. 
You are given two arrays: and. 1167: 1168 See Also: 1169 -----1170 inner : Inner product: 1171 outer : Outer product. Shortcut commands to convert Matlab code to Python code. Trace of an array, np. Is there an API (or accepted standard way) for an outer product in MXNet? So something similar to NumPy's numpy. For n = 3 one gets the usual cross product. 这个outer product和我们一贯认为的外积是不一样的。看代码test. > > kron(X,Y) is the Kronecker tensor product of X and Y. This page introduces some basic ways to use the object for computations on arrays in Python, then concludes with how one can accelerate the inner loop in Cython. Sau đây là. A AMD cpu with numpy 1. To use numpy. What I would do if I am to solve this problem. The list comprehension import numpy as np B = np. I will cover the basics of how to use Numpy, Pandas, Scikit-Learn, and Keras. dot() or np. The examples here can be easily accessed from Python using the Numpy_Example_Fetcher. A matrix product between a 2D array and a suitably sized 1D array results in a 1D array: In [199]: np. Notes ----- The first argument is not conjugated. def inner(a, b): """ Returns the inner product of a and b for arrays of floating point types. Anything that involves combinations of multiplying and summing axes can be written using einsum. Series or DataFrame. they are n-dimensional. op description status note; numpy. Timings are performed on a intel CPU using numpy 1. Deprecated: Function create_function() is deprecated in /www/wwwroot/dm. Introduction. List vs Array Numpy array is like mathematical vector, we can add, sub, multiply, divide and arithmetic operation on each element in array as we do in vectors. The Numeric Python extensions (NumPy henceforth) is a set of extensions to the Python programming language which allows Python programmers to efficiently manipulate large sets of objects organized in grid-like fashion. Element-wise Product Outer Product Trace. 
derivative (3p) The function should have 3 parameters, two real numbers and one integer:. temp = numpy. Requirements: (a) Your function should perform all operations on individual array elements (i. Questions: In numpy, some of the operations return in shape (R, 1) but some return (R,). randint(1, 10, size=5) compute_reciprocals(values) 改用UFuncs方式求倒数 print(1. and other topics like machine learning, data science, etc. *inputs : the inputs to contract (each one a Tensor ), whose shapes should be consistent with equation. I have the working code on samples but when I want to do them with (+2000, +2000). However, you can not call this method from user-defined functions. outer; matmul; einsum; おわりに. max() max of all values in two vectors [v,i] = max(a) v,i = a. This list may not reflect recent changes (). Node for computing general products and sums of Gaussian nodes. outer(arr_1D,arr_1D)==np. (You might cry foul, but, hey, grab NumPy & MKL and check for yourself). Indeed, the columns of the outer product are all proportional to the first column. op description status note; numpy. In this exercise you will learn several key numpy functions such as np. seed(0) def compute_reciprocals(values): output = np. # outer join in python pandas print pd. You can see for yourself that outer[i, j] = x[i] * y[j] for all (i, j). With einsum(), it is trivial, even intuitive: Merely specify the batch index (here, 'B') in all terms. einsum("ij,jk->ki", A, B) for two arraysA and B. Solve Ax = z for x. A numpy array with sign changes indicated by +1 and -1. In physics, sometimes the notation a ∧ b is used, though this is avoided in mathematics to avoid confusion with the exterior product. einsum (for Einstein summation convention). 10 the read-only restriction will be removed. Dot Product and Matrix Multiplication DEF(→p. 
Numpy arrays caution: the cumulative class of operations that either sum or multiply elements of an array (sum, prod, cumsum, cumprod) performs the accumulation in the input array's type, so small integer dtypes can overflow. A second caution: NumPy arrays make no distinction between row and column vectors, and if you pass a 1-D vector into the dot function or method, NumPy assumes you mean a row vector on the left and a column vector on the right. Finally, a recurring practical task is forming row-wise outer products between two matrices; a Python loop or list comprehension over the rows is the natural first attempt, but it is slow, and the vectorized alternatives are the point of this survey.
In subscript notation, transposition is free: np.einsum('ij,jk->ki', A, B) returns the transpose of the matrix product of two arrays A and B. Keeping the summation index instead of contracting it is also possible: np.einsum('ij,jk->ijk', p, q) produces a rank-3 tensor whose sum over j is the ordinary matrix product, and np.einsum('ij,ik->ijk', p, p) creates a tensor in which every slice along the first axis is the outer-product matrix of the corresponding row of p with itself. np.kron(a, b) computes the Kronecker product, a composite array made of blocks of the second array scaled by the first. Outer products have visible structure: the outer product of a row label vector and a column label vector reproduces a checkerboard pattern, and the columns of an outer product are all proportional to the first column, which is why the result has rank one.
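For vectors, the Kronecker product is just the flattened outer product; for matrices it builds scaled blocks. A small sketch:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, 4])

k = np.kron(a, b)            # [1*b, 2*b] concatenated
o = np.outer(a, b).ravel()   # flattening the outer product gives the same

assert np.array_equal(k, np.array([3, 4, 6, 8]))
assert np.array_equal(k, o)

# For matrices, kron builds blocks of the second array scaled by the first:
I = np.eye(2)
B = np.array([[0, 1], [1, 0]])
K = np.kron(I, B)            # block-diagonal with two copies of B
assert K.shape == (4, 4)
assert K[2, 3] == 1.0        # lower-right block is 1 * B
```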
Einstein summation itself is simple enough that it can be implemented in no more than 20 lines of Julia code, with straightforward automatic differentiation. In NumPy, the standard advice for newcomers is to avoid writing explicit loops: vectorized operations delegate the looping internally to highly optimized C and Fortran functions, making for cleaner and faster Python code. The same ideas extend to sparse matrices: given two sparse scipy matrices A and B, the row-wise outer products can be computed in several ways with NumPy, the simplest of which appear below. For a broader map of the linear-algebra routines involved (matrix creation, matrix functions, basic routines), the SciPy linear algebra cheat sheet is a useful companion to NumPy.
And so, the outer product of a vector gives you a matrix: the product of a column vector a with its transpose is the rank-one outer-product matrix. Concretely, for outer = np.outer(x, y) you can check that outer[i, j] = x[i] * y[j] for all (i, j). Broadcasting gives the same result without a function call: reshaping v (shape (3,)) into a column of shape (3, 1) and multiplying by w (shape (2,)) broadcasts to an output of shape (3, 2), which is the outer product of v and w. Two side notes: indexing with a slice returns a view, meaning the initial and sliced arrays share the same memory buffer; and on the performance side, NumPy built against OpenBLAS is typically within a few percent of builds using Intel's MKL.
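The broadcasting route to an outer product, sketched explicitly:

```python
import numpy as np

v = np.array([1, 2, 3])   # shape (3,)
w = np.array([4, 5])      # shape (2,)

# Reshape v into a column of shape (3, 1); broadcasting against w's
# shape (2,) yields the (3, 2) outer product without calling np.outer.
outer = np.reshape(v, (3, 1)) * w
assert np.array_equal(outer, np.outer(v, w))

# The same trick adds a vector to each row of a matrix:
x = np.zeros((3, 2))
assert np.array_equal(x + w, np.tile(w, (3, 1)))
```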
Using the Einstein summation convention, many common multi-dimensional, linear algebraic array operations can be represented in a simple fashion; in implicit mode (no '->' in the subscripts string) einsum infers the output indices itself. For reference alongside it: the cross product of a and b in R^3 is a vector perpendicular to both and normal to the plane containing them, and when the elements of the vectors are complex, the dot product is defined with conjugation of one argument (note that einsum, like np.dot, does not conjugate its first argument). Batched operations are where einsum earns its keep. If you had matrices P and Q each holding NB row vectors and wanted to compute the outer product of the B-th row of P and the B-th row of Q for all B, you would be at pains to do this with other functions, and could easily get it wrong; with einsum it is trivial, even intuitive: merely specify the batch index (here, 'B') in all terms. Also, when specifying subscripts via lists, some einsum implementations accept arbitrary keys as subscripts, unlike NumPy's letter-only strings.
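The batch case just described, sketched with einsum (the index letters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(8, 3))   # 8 row-vectors of length 3
Q = rng.normal(size=(8, 4))   # 8 row-vectors of length 4

# For every b, the outer product of P[b] with Q[b]: result shape (8, 3, 4).
batch = np.einsum('bi,bj->bij', P, Q)

# Check against the loop formulation.
for b in range(8):
    assert np.allclose(batch[b], np.outer(P[b], Q[b]))
```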
To get more of a feel for einsum, it is useful to implement familiar NumPy array operations in subscript notation. The outer product np.outer(A, B) can be written np.einsum('i,j->ij', A, B); when the inputs are not 1-D, np.outer first flattens them, so it matches np.einsum('i,j->ij', A.ravel(), B.ravel()). Broadcasting is governed by a precise rule: when operating on two arrays, NumPy compares their shapes element-wise, and two dimensions are compatible when they are equal or one of them is 1. This is called array broadcasting, and it can greatly reduce and simplify your code. A matrix product between a 2-D array and a suitably sized 1-D array results in a 1-D array. One practical limit to keep in mind: a 32-bit process is restricted to a fraction of 2^32 = 4 GB of address space, so very large arrays need a 64-bit build.
A representative exercise: given two arrays a and b with the same number of rows n, produce for each row i the outer product of a[i] with b[i] plus the outer product of b[i] with a[i], and then sum all the resulting outer-product matrices into one final matrix. The nested-loop solution is straightforward but slow; the contraction over the row index is exactly what einsum expresses. Related primitives: np.inner(a, b) is the ordinary inner product of vectors for 1-D arrays (without complex conjugation) and, in higher dimensions, a sum product over the last axes; np.vdot(a, b) returns the dot product of two vectors after flattening; np.cross(a, b) is the cross product, and it raises a ValueError when the dimension of the vectors is neither 2 nor 3.
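A sketch of the symmetrized row-wise sum just described; the loop version and the einsum one-liner agree:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
a = rng.normal(size=(n, 3))
b = rng.normal(size=(n, 3))

# Loop version: sum over rows of outer(a[i], b[i]) + outer(b[i], a[i]).
loop = sum(np.outer(a[i], b[i]) + np.outer(b[i], a[i]) for i in range(n))

# Vectorized: contract the row index n directly.
fast = np.einsum('ni,nj->ij', a, b) + np.einsum('ni,nj->ij', b, a)

assert np.allclose(loop, fast)
```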
Arrays with incompatible sizes cannot be added, subtracted, or generally used together in arithmetic unless broadcasting applies. Matrix multiplication in subscript form is np.einsum('ij,jk->ik', A, B), and a matrix applied to a vector shrinks a dimension: np.dot(x, np.ones(3)) for x = [[1, 2, 3], [4, 5, 6]] returns array([6., 15.]). An outer-product function, whatever the library, should return the matrix v with v[i, j] = x[i] * h[j]; the same operation appears under other names in CVXPY and in applications such as body-fitted computational grids, where the outer product completes the grid efficiently.
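Putting the basic products side by side, a minimal sketch of inner, outer, and cross products of two vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

inner = np.inner(u, v)   # scalar: 1*4 + 2*5 + 3*6 = 32
outer = np.outer(u, v)   # (3, 3) rank-one matrix
cross = np.cross(u, v)   # vector perpendicular to both u and v

assert inner == 32.0
assert outer.shape == (3, 3)

# The cross product is orthogonal to each of its factors:
assert np.isclose(np.dot(cross, u), 0.0)
assert np.isclose(np.dot(cross, v), 0.0)
```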
np.einsum is one of NumPy's jewels, even if the documentation really doesn't do the function justice. Its relatives: np.tensordot(a, b, axes=...) returns the tensor dot product of two arrays along specified axes, and with axes=0 it reduces to an outer product; R's outer function creates the matrix of outer products of two vectors; and in the singular value decomposition, the outer products of the singular vectors are the rank-1 building blocks from which the matrix X gets reconstructed. As a concrete picture, the outer product of [1, 2, 3] and [1, 2] is the 3x2 matrix [[1, 2], [2, 4], [3, 6]]. One poster, after discovering the use of ellipsis indexing in numpy/scipy arrays, even ended up implementing a recursive array_outer_product(A, B) that computes the outer product in the final two dimensions of the given arrays.
The outer product is the classic BLAS level-2 operation, still simple enough that implementations can be compared apples to apples. In broadcasting form, np.reshape(v, (3, 1)) * w yields the outer product, and the same reshaping trick adds a vector to each row of a matrix. The tensor-algebra vocabulary is worth knowing: a dyad is a tensor of order two and rank one, the result of the dyadic product of two (in general complex) vectors, whereas a dyadic is a general order-two tensor, which may be full rank or not. Further utilities: np.linalg.matrix_power(M, n) raises a square matrix to the integer power n, np.kron computes the Kronecker product of two given multidimensional arrays, and reshape idioms such as arr.reshape(n, -1) or arr.reshape(4, 4, 2, 2) simply unfold the array and then recollect it in order. NumPy's matrix objects are a subclass of the ndarray and inherit all its attributes and methods, but plain ndarrays are preferred; matrices were deprecated in NumPy 1.15.
Two frequently asked batched variants: given an M x N array, take each row and multiply it element-wise against each row of the array, yielding an M x M x N array of products; and given a stack a = np.random.rand(25, 3, 3), perform dot(a_k, a_k.T) for each of the 25 small matrices without a Python loop. To use np.einsum for such tasks you pass the so-called subscripts string as an argument, followed by your input arrays; a non-exhaustive list of operations computable this way covers inner, outer, batched, and tensor products.
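The stacked-matrix task above, per-slice multiplication by the own transpose without a Python loop, as an einsum sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(25, 3, 3))

# out[n] = a[n] @ a[n].T, for all 25 slices at once:
# out[n, i, k] = sum_j a[n, i, j] * a[n, k, j]
out = np.einsum('nij,nkj->nik', a, a)

# Spot-check a few slices against the loop formulation.
for n in (0, 12, 24):
    assert np.allclose(out[n], a[n] @ a[n].T)
```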
Summing axis j in the expanded form np.einsum('ij,jk->ijk', ...) gives the expected dot product; einsum provides a succinct way of representing every reduction in this family. Recall that the shape of a matrix is its number of rows and columns, and that naive implementations multiply matrices by iterating each row and each column with nested for loops. A practical large-scale case: the normalized sum of the outer products of the rows of a 60000 x 100 matrix. This is exactly an empirical second-moment computation, and it should be done as a single contraction rather than by materializing 60000 intermediate 100 x 100 matrices.
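A sketch of the normalized sum of row outer products, with a small matrix standing in for the 60000 x 100 one:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 10))   # stand-in for the 60000 x 100 case
n = X.shape[0]

# Normalized sum of outer products of the rows: (1/n) * sum_k outer(X[k], X[k]).
M = np.einsum('ni,nj->ij', X, X) / n

# This equals X.T @ X / n, the empirical second-moment matrix,
# computed without any intermediate (n, 10, 10) tensor.
assert np.allclose(M, X.T @ X / n)
assert M.shape == (10, 10)
```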
In linear algebra, it is customary to deal with column vectors by default, using a transpose for a row vector; the inner product of y and z is then written y^T z. In NumPy, the way broadcasting works is specified exactly, and the same rules apply to TensorFlow operations. A broadcast view can appear to contain a million different elements while its data buffer actually holds only the thousand elements of the base array, because no copy is made. The algebraic background, for the curious: the tensor product of two modules A and B over a commutative ring R is defined exactly as for vector spaces over a field, as the free R-module generated by the cartesian product A x B, quotiented by the same bilinearity relations. (Outer joins in pandas, spelled pd.merge(df1, df2, on='Customer_id', how='outer'), share only the name with outer products.)
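The million-element-view claim can be checked directly; np.broadcast_to creates exactly such a view (a sketch):

```python
import numpy as np

a = np.arange(1000)
big = np.broadcast_to(a, (1000, 1000))  # looks like a million elements

assert big.size == 1_000_000
assert np.shares_memory(a, big)         # ...but no data was copied
assert big.strides[0] == 0              # the broadcast axis has stride 0
```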
For a 2x2 matrix, the determinant, a very useful value in linear algebra, is simply the product of the top-left and bottom-right elements minus the product of the other two. Where einsum makes broadcasting easy, np.dot for high-dimensional input is kind of like an outer product over the non-contracted axes, which is rarely what you want; explicit subscripts are clearer. Likewise, a covariance_over_time computation written as a Python list comprehension over rows is constrained by the interpreter loop and is better expressed as a single contraction. Outside Python, the xtensor library (with xtensor-blas) plays the role of NumPy for C++, offering fast BLAS-like operations.
The xtensor stack is reportedly in use for a hand-rolled C++ deep learning and optimization library built for a Data Science and Analytics course at Maynooth University, Ireland. Rounding out the toolbox on the NumPy side: arr.diagonal(offset=0, axis1=0, axis2=1) returns specified diagonals, and np.vdot(a, b) returns the dot product of two vectors. Together with the inner, outer, Kronecker, and einsum contractions above, these cover the product-and-reduction operations this survey set out to collect.
https://math.stackexchange.com/questions/2105827/question-related-to-the-convergence-of-the-series
# Question related to the convergence of the series

Can anyone tell to what the following series converges? $$\frac{\sqrt{2-s}}{\Gamma(1.5)}-\frac{(\sqrt{2-s})^3}{\Gamma(2.5)}+\frac{(\sqrt{2-s})^5}{\Gamma(3.5)}-\frac{(\sqrt{2-s})^7}{\Gamma(4.5)}+\frac{(\sqrt{2-s})^9}{\Gamma(5.5)}-\cdots$$ where $s\in [0, 2]$.

**Answer.** Writing $x=\sqrt{2-s}$, the series is $$\sum_{n=0}^\infty\frac{(-1)^n }{\Gamma \left(n+\frac{3}{2}\right)}x^{2 n+1}=e^{-x^2}\, \operatorname{erfi}(x),$$ where $\operatorname{erfi}$ is the imaginary error function.
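The closed form can be checked numerically without special-function libraries, using $e^{-x^2}\operatorname{erfi}(x) = \frac{2}{\sqrt{\pi}}\, e^{-x^2} \int_0^x e^{t^2}\,dt$ (Dawson's function up to a constant); a sketch in Python with the integral done by a simple midpoint rule:

```python
import math

def series(x, terms=40):
    # Partial sum of sum_{n>=0} (-1)^n x^(2n+1) / Gamma(n + 3/2).
    return sum((-1) ** n * x ** (2 * n + 1) / math.gamma(n + 1.5)
               for n in range(terms))

def closed_form(x, steps=20000):
    # e^{-x^2} * erfi(x) = (2/sqrt(pi)) * e^{-x^2} * integral_0^x e^{t^2} dt,
    # with the integral approximated by the midpoint rule.
    h = x / steps
    integral = h * sum(math.exp(((k + 0.5) * h) ** 2) for k in range(steps))
    return (2 / math.sqrt(math.pi)) * math.exp(-x * x) * integral

# Compare at several values of s in [0, 2].
for s in (0.0, 0.5, 1.5):
    x = math.sqrt(2 - s)
    assert abs(series(x) - closed_form(x)) < 1e-6
```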
http://scientifictales.com/page/4/
## Of quarks and gluons: quantum chromodynamics

In this series of posts describing my MSc work for non-specialists, we’ve discussed the standard model for particle physics, how to draw pictures of it, and some of its properties. This week I’ll talk about the Feynman rules for the fundamental particles of the atomic nucleus: quarks, and the gluons through which they interact. The force mediated by gluons is called the colour force (roughly speaking, because quarks come in threes and there are three primary colours, so colour-coding the quarks works pretty well). If we want to sound fancy we can translate “colour-force” into Greek to get, more-or-less directly, “chromodynamics”. So what are the rules for these quantum chromodynamic — or QCD — particles? I snuck one in at the end of last week’s post, where the electron lines in the QED vertex were replaced with quarks, but QCD proper concerns itself with the vertices between quarks and gluons. While QED has just one pretty vertex, QCD has three. (This is one of the reasons that problems in QCD are generally harder to solve than their counterparts in QED.) The first vertex looks somewhat familiar. (Placing a bar over a label indicates an antiparticle.) The other two vertices come about because the gluon carries colour charge (in contrast, the photon is electrically neutral). This means that gluons can interact amongst themselves: One consequence of this is that it’s very easy to produce gluons if energy is available to do so (the way the maths works out, the three-gluon vertex is particularly important for this). In general, the energy required to produce a particle is enough energy to give the particle its mass (using Einstein’s famous E = mc² equation) plus a little extra to provide the new particle’s energy of motion. But the mass of the gluon happens to be zero, so all that’s needed is that little extra. At sufficiently high energies, this means that one should expect gluons everywhere.
This gluons-everywhere situation can be described by a model called the colour glass condensate (CGC). This is what I used in my MSc work, and I’ll discuss it in more detail next week. Before that, let’s talk a little more about Feynman diagrams in QCD.

Some features of QCD don’t show up in the pictures until we start doing calculations. For instance, last week we saw that by adding extra vertices (and virtual particles) we can get from A to B in more ways than one. How important is each of these diagrams?

It turns out that the number of vertices in a diagram has a lot to say about that diagram’s importance. Broadly speaking, for every vertex in a diagram, its importance is multiplied by a quantity called the vertex factor. In QED, the vertex factor is very small. Very complicated diagrams, with many vertices, therefore have a very small importance. Of course, other considerations also affect the calculations made for each diagram, but in general we can safely ignore very complicated diagrams — just using the simple ones gives us a decent idea of what’s going on.

Unfortunately, things don’t look so pretty in QCD. In QCD, under ordinary conditions, the vertex factor is not small. This means that more complicated diagrams are more important. In theory, an infinitely complicated diagram would be infinitely important (instead of infinitely unimportant, as in QED). This is a problem. To date, the problem has not been solved. Some physicists think this means we need an entirely new theory, not based on Feynman diagrams (and the associated perturbation theory), to describe what goes on inside the atomic nucleus. In this work, I simply avoided the problem.

The QCD vertex factor depends on a value called the QCD coupling constant, which (roughly speaking) describes the strength of interactions between QCD particles. This turns out to be closely related to the energy involved: the parameter αs determines the coupling constant — and the vertex factor.
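The energy dependence of αs can be captured, at leading order, by the standard one-loop running-coupling formula. Here is a rough numerical sketch; the value of ΛQCD (about 0.2 GeV) and the choice of five quark flavours are illustrative assumptions, not values taken from this post:

```python
import math

def alpha_s(Q, n_f=5, lam=0.2):
    """One-loop running QCD coupling.

    Q:    energy scale in GeV (must be well above lam)
    n_f:  number of active quark flavours (5 assumed here)
    lam:  Lambda_QCD in GeV (illustrative value, roughly 0.2 GeV)
    """
    beta0 = 11 - 2 * n_f / 3            # leading beta-function coefficient
    return 4 * math.pi / (beta0 * math.log(Q**2 / lam**2))

# The coupling shrinks as the energy scale grows ("asymptotic freedom"):
print(alpha_s(10.0) > alpha_s(100.0))   # True
```

This is exactly the trend described below: at high enough energy the coupling, and with it the vertex factor, becomes small.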
We see here (by taking lots of measurements and producing the graph) that αs decreases as the energy goes up. That means that, if the energy is high enough, the vertex factor will be small after all. If we’re willing to work in the very high energy region — and with modern particle accelerators, that isn’t unreasonable — we can still get some use out of perturbative QCD. (The term “perturbative” essentially means that we’re assuming more complicated diagrams are only small changes, or perturbations, to their simpler counterparts.) This is why the virtual photon in the DIS diagram always has to have very high energy.

Of course, now that we’ve restricted ourselves to working at very high energies, we can expect the case of gluons everywhere to become rather relevant. Next week, I’ll talk about the gluon-saturated state called the colour glass condensate.

## More on Feynman diagrams

Last week in the step-by-step MSc series, I wrote about the basics of Feynman diagrams. For instance, I said that we could draw an interaction between two electrons like this:

Time flows from right to left. The axes are often drawn with time flowing left to right, which matches the direction we read, but it’s easier to match right-to-left diagrams to mathematical notation. (If I have a variable x to which I apply a function f, and then apply another function g to the overall result, I write that as g(f(x)) — the rightmost action happens first.) The axes are intentionally vague: they don’t have units, since at this point we’re more interested in describing the general kind of interaction that might happen than in exact numbers. If we start doing calculations, we’ll label each particle line with important properties, like its momentum.

So much for reading Feynman diagrams. Let’s talk about how to construct them. A good starting point is the Feynman rules for photons and electrons.
The model of photons and electrons in quantum field theory (the most accurate model we have to date) is called quantum electrodynamics, or QED for short. In QED, there’s only one way of connecting particle lines. The connection between lines is called a vertex, and in QED it always looks like this:

One consequence of having no other vertices is that electrons can never interact directly: they have to go through a photon, as in the diagram above. In general, however, having only one vertex is not as restricting as you might first think. We can rotate the vertex however we like and introduce as many vertices as we want into a single diagram. We need both those principles to build up the diagram at the top of the post. However, there’s also another diagram to create by rotating the vertex: this one, which describes pair production.

Last week, I briefly mentioned that fermion lines could point “backwards” with respect to time. The lower electron line in this diagram does just that. Our interpretation of the backward arrow is that instead of dealing with an electron, we’re dealing with its partner the anti-electron, also known as the positron. The positron has the same mass as the electron, but is otherwise its opposite. The electron has negative electric charge, for instance; the positron has the same amount of positive electric charge (hence the name). Every particle type has a corresponding antiparticle type, with exactly opposite charges. Given the tendency of positrons to turn into photons — pure light — when they meet electrons, they don’t have much effect on ordinary life. They do tend to crop up in high energy experiments, though.
For instance, we said that we represent a photon like this:

However, if all we know is that a photon went in and a photon came out, what might have happened is this:

We might not even detect the intermediary electron and positron with our measuring instruments, if they exist for a short enough time, but the rules of QED tell us that it could happen. In fact, particles that must be part of an interaction, but don’t exist to be measured at the beginning or the end of the process, turn out to be very useful for hiding some of the uglier parts of the mathematics. (Others may disagree about the ugliness of the mathematics, or about whether it’s fair to describe virtual particles as hiding these aspects of the maths, but the broad strokes of the picture are at least agreed upon.)

The maths involved stems from the uncertainty principle. This means that we can’t assign an exact momentum and an exact position to a particle at the same time — but we got around that by giving particles cloud-like (or wave-like) properties. Einstein’s theory of relativity tells us that when we talk about position, to be complete we also need to include a “position in time” (which we’d normally just call a time), and when we talk about momentum, we should also include energy. Knowing that, it’s not too surprising that we can’t assign an exact energy to a particle at an exact time. Imagining particles as clouds in space is bad enough — I’m not sure how to begin visualising them as fuzzy in time. Fortunately, virtual particles mean we don’t have to. The way the maths works out, we can use this one weird trick instead: virtual particles don’t conserve energy.

Yup, I just said we were going to violate one of the most fundamental laws of physics: the law of conservation of energy. Remember that I started out by explaining why it’s just a trick, though. We can very carefully consider particles as being fuzzy in time as well as in space, and then we keep conservation of energy.
It makes the maths a lot harder, though. On the other hand, if we bend the rules when nobody’s looking, we can get to the answers a lot faster. That’s the key, of course: virtual particles are the particles we can never measure. We can treat them as breaking the law of energy conservation instead of as having weird fuzzy times and energies exactly because we’re never going to check what the energies actually are. We just need the maths to work out.

Last week I showed you this diagram, which includes a virtual photon:

In fact, this diagram assumes what’s called a “highly” virtual photon. It violates conservation of energy very badly, so that it gains an enormous momentum out of nowhere. (Or we can say that it’s an extreme case of the time-energy fuzziness, but that gets much harder to describe — people who try to do so can spend years figuring out how to start.) The photon needs to have pretty high energy for the rules of quarks and gluons (quantum chromodynamics or QCD) to work out, but there’s still a possible range of energies. If we choose a relatively low energy, by using the proton energy to define a fairly complicated standard¹, the most likely interaction between the photon and the proton is quite different. This is the case I studied in my MSc project. The diagram looks like this (A represents one or more protons):

You’ll notice that to draw this diagram, I’ve introduced a new vertex, where the photon becomes a quark and an antiquark. Next week, we’ll talk about this vertex and other properties of QCD, like the requirement that the photon be highly virtual, and why Feynman diagrams don’t work as well as we might hope.

¹ Such that the square of the photon four-momentum is much smaller than the Minkowski product of the photon four-momentum with the proton four-momentum, meaning that the Bjorken-x variable is small, if you want to get technical.
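The kinematic condition in the footnote can be made concrete numerically. Below is a small sketch computing the photon virtuality Q² = −q·q and Bjorken x = Q²/(2 P·q) with the (+, −, −, −) metric; the four-momenta used are made-up illustrative numbers, not data from this work:

```python
def minkowski(a, b):
    """Minkowski product of two four-vectors, signature (+, -, -, -)."""
    return a[0] * b[0] - sum(ai * bi for ai, bi in zip(a[1:], b[1:]))

# Illustrative four-momenta in GeV: q is a spacelike (virtual) photon,
# P is a fast proton moving along +z. These numbers are made up.
q = (5.0, 0.0, 3.0, -6.0)
P = (100.0, 0.0, 0.0, 99.9956)   # approximately on-shell proton

Q2 = -minkowski(q, q)            # photon virtuality; positive for spacelike q
x = Q2 / (2 * minkowski(P, q))   # Bjorken-x

print(Q2, x)                     # x comes out small, matching the footnote
```

For these example numbers Q² is much smaller than 2 P·q, so x is small, which is the "relatively low energy" regime the post describes.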
https://physics-network.org/how-do-you-solve-for-momentum-and-energy/
# How do you solve for momentum and energy?

The equation developed by Albert Einstein, which is usually given as E = mc², shows that, when the energy of a body changes by an amount E (no matter what form the energy takes), the mass (m) of the body will change by an amount equal to E/c².

## How does energy affect momentum?

No, an object with energy does NOT always have momentum. An object can be at rest and have potential energy (a book resting on a desk, for instance). Since this object’s velocity = 0, its momentum is zero.

## What is the energy equation?

Kinetic energy is a measure of an object’s energy from motion, and is a scalar. Sometimes people think momentum is the same as force. Forces cause a change in momentum, but momentum does not cause a force. The bigger the change in momentum, the more force you need to apply to get that change in momentum.

## Does an object with energy always have momentum?

Units of energy, mass and momentum: in particle physics, energy is typically given in units of electron volts (eV), momentum in units of eV·c⁻¹, and mass in units of eV·c⁻².

## Is momentum a force or energy?

Summary: photons have momentum, given by p = h/λ, where λ is the photon wavelength. Photon energy and momentum are related by p = E/c, where E = hf = hc/λ for a photon.

## Is momentum a unit of energy?

STATEMENT 1: A body cannot have mechanical energy without having momentum. STATEMENT 2: Kinetic energy E and momentum P are related as P = √(2mE).

## What is the relation between energy and momentum of photon?

P.E. = m × g × h

## What is kinetic energy momentum theorem?

Z = potential energy. For isentropic flow, the energy equation can be written compactly by noting that the sum of internal and flow energies is the enthalpy (h) of the fluid.

## How do you solve for energy?

1. Find the square of the velocity of the object.
2. Multiply this square by the mass of the object.
3. Halve the result: this is the kinetic energy of the object.
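The kinetic-energy recipe and the relation between kinetic energy and momentum quoted above are easy to check numerically. A minimal sketch; the mass and speed are arbitrary example values:

```python
import math

def kinetic_energy(m, v):
    """Kinetic energy in joules: E = (1/2) m v^2, with mass in kg and speed in m/s."""
    return 0.5 * m * v**2

def momentum(m, v):
    """Momentum magnitude in kg*m/s: p = m v."""
    return m * v

m, v = 2.0, 3.0                  # arbitrary example values
E = kinetic_energy(m, v)         # 9.0 J
p = momentum(m, v)               # 6.0 kg*m/s

# The relation P = sqrt(2 m E) ties the two quantities together:
print(math.isclose(p, math.sqrt(2 * m * E)))   # True
```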
## Does energy have a formula?

E = energy, m = mass, c = the speed of light (from the Latin term celeritas, which means “speed”), and the 2 means squared.

## What unit is energy?

Basic units: the joule (J). This is the basic energy unit of the metric system or, in a later more comprehensive formulation, of the International System of Units (SI). It is ultimately defined in terms of the metre, kilogram, and second.

## What is Z in energy equation?

Thus, momentum is dependent on the mass and velocity of the body.

• P = E/t
• P = W/t
• P = V²/R

## What is c in energy equation?

Forces are the way that energy is transferred from one object to another when they interact, but forces are not the energy itself. Gravity is a force, so it just provides one way for objects to exchange and transform energy between different states.

## What two factors affect the momentum?

The words energy and force are not interchangeable – they are not the same as each other. A force is a push or a pull, which is easily demonstrated and felt, but energy is a slightly more abstract concept. They are measured in different units: force in newtons and energy in joules.

## Can force be converted into energy?

Momentum is the amount of motion in a moving body, whereas force is an action of push or pull. Force changes the momentum of a body. Force does not change under constant acceleration, whereas momentum changes: momentum increases with time under an applied force.

## How is energy related to force?

Momentum can be negative. Momentum is a vector quantity, meaning it has both magnitude and direction. In physics, direction is indicated by the sign, positive or negative.

## Why is momentum not a force?

Momentum is not a form of energy, though momentum and kinetic energy are both concepts related to the motion of an object. Whenever there is a change in momentum there will be a change in kinetic energy. Momentum is a vector quantity whereas energy is a scalar quantity.

## Is momentum a vector or scalar?
The momentum of a body is a vector quantity, for it is the product of mass, a scalar, by velocity, a vector.

## Can the momentum be negative?

Why does p stand for momentum? It really stands for impetus, which is from the Latin impellere, from im- + pellere. Pellere meant “to push forcefully.” As im- was a prefix meaning “inner,” impellere meant pushing with an inner source of energy.

## Which type of energy is momentum?

Even though these physical quantities look alike, there is a difference between force and momentum. Force is generally an external action upon a body, whether a pulling or a pushing action. Momentum, on the other hand, is a representation of the amount of motion within a moving body.

## Why is p used for momentum?

In addition to being a particle, light is also a wave. This allows it to carry momentum, and therefore energy, without having mass.

## Is momentum a force?

Photons are massless, but they have energy E = hf = hc/λ. Here h = 6.626 × 10⁻³⁴ J·s is called Planck’s constant. The photon energy is inversely proportional to the wavelength of the electromagnetic wave.
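The photon relations quoted in the last answer (E = hc/λ and p = h/λ) are straightforward to evaluate. A small sketch for a 500 nm photon; the wavelength is an arbitrary example value:

```python
H = 6.626e-34        # Planck's constant in J*s
C = 2.998e8          # speed of light in m/s

def photon_energy(wavelength):
    """Photon energy E = h c / lambda; wavelength in metres, result in joules."""
    return H * C / wavelength

def photon_momentum(wavelength):
    """Photon momentum p = h / lambda, in kg*m/s."""
    return H / wavelength

lam = 500e-9                      # 500 nm, green light (example value)
E = photon_energy(lam)            # about 4e-19 J
p = photon_momentum(lam)          # about 1.3e-27 kg*m/s

# Consistency check: p should equal E / c.
print(abs(p - E / C) < 1e-40)     # True
```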
http://physics.stackexchange.com/questions/16434/compound-lenses-and-virtual-images
# Compound lenses and virtual images

Ok so I have this problem where I have a system of two lenses. All I know is that the distance between the object and the first lens is 30cm, the distance between the object and the final image is 70cm, the focal distance of the first lens is 20cm, and the total magnification is -0.666. I need to find the focal distance of the second lens and the distance between lenses 1 and 2.

Now I've tried calculating the parameters of the first lens' image (it should be 60cm past the first lens and the magnification should be -2). My problem is that if I now use this image as an object for the second lens and solve the equations, I obtain absurd results (the values should put the lens between the first lens and the final image, but they don't). How should I go about solving such a problem?

## 1 Answer

Most likely, your problem is the sign of the distance from the image formed by the first lens to the second lens. When you use the formula for the ideal thin lens you must be really careful about the convention used. For example, I usually use $\frac{1}{f} = \frac{1}{o} + \frac{1}{i}$, where $f$ is the lens' focal distance, $o$ is the distance from the object to the lens, and $i$ is the distance from the lens to the image. The sign convention for this formula is:

| letter | positive | negative |
|--------|----------|----------|
| $o$ | left of the lens | right of the lens |
| $i$ | right of the lens | left of the lens |
| $f$ | converging | diverging |

This means that the reference system is different for $o$ and $i$, which can be confusing. Another common error for this kind of problem is forgetting that $o$ and $i$ are distances to the lens, so you can't use the number you get for $i$ from the first lens as the $o$ for the second lens. You need to take into account the distance between the lenses.

- A little late considering this was homework, but I figured it out in time and you are spot on. Thank you very much kind sir! – pwny Nov 20 '11 at 3:08
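One numerically safe way to work through such chains is to code the thin-lens formula with one fixed sign convention and let negative intermediate distances (virtual objects) fall out on their own. A sketch, using the convention from the answer above; only the first lens of the original problem is evaluated here, since the lens separation is the unknown:

```python
def image_distance(f, o):
    """Thin-lens formula 1/f = 1/o + 1/i, solved for i.

    Sign convention (as in the answer above): o > 0 for an object left of
    the lens, i > 0 for an image right of the lens, f > 0 for converging.
    """
    return 1.0 / (1.0 / f - 1.0 / o)

def magnification(o, i):
    """Transverse magnification m = -i / o in this convention."""
    return -i / o

# First lens of the original problem: object 30 cm away, f = 20 cm.
i1 = image_distance(20.0, 30.0)      # 60.0 cm past the first lens
m1 = magnification(30.0, i1)         # -2.0, as computed in the question

# For a second lens a distance d further along, the new object distance is
# o2 = d - i1, which goes negative (a virtual object) when d < i1.
print(i1, m1)
```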
https://gmatclub.com/forum/find-the-lowest-number-which-has-both-x-and-y-as-factors-where-x-and-278652.html
# Find the lowest number which has both X and Y as factors, where X and Y are positive integers

Posted: 10 Oct 2018, 01:36

Find the lowest number which has both X and Y as factors, where X and Y are positive integers.
(1) X and Y have no common prime factors and $$X^2 Y^2$$ = 169*4

(2) 2Y = 13X and the lowest prime factor of X is X

Reply (GMATinsight, 10 Oct 2018, 01:43):

rencsee wrote: Find the lowest number which has both X and Y as factors, where X and Y are positive integers.

Statement 1: X and Y have no common prime factors and $$X^2 Y^2$$ = 169*4. Since $$X^2 Y^2 = (XY)^2$$ and 169*4 = (13*2)², we have XY = 26. With no common prime factors, this is possible only if X and Y are 13 and 2 (in any order) or 1 and 26 (in any order). In each case the lowest number having both X and Y as factors = 13*2 = 26. SUFFICIENT

Statement 2: 2Y = 13X and the lowest prime factor of X is X, i.e. X is prime. Since 2Y = 13X forces X to be even, X = 2 and Y = 13, so the lowest number having both X and Y as factors = 13*2 = 26. SUFFICIENT
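Both statements pin down pairs whose least common multiple is 26; a quick sketch confirming the arithmetic for the candidate pairs enumerated in the answer:

```python
from math import gcd

def lcm(a, b):
    """Least common multiple: the lowest number with both a and b as factors."""
    return a * b // gcd(a, b)

# Statement 1 allows (13, 2) or (1, 26); statement 2 forces (2, 13).
for x, y in [(13, 2), (1, 26), (2, 13)]:
    print(lcm(x, y))    # 26 each time
```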
https://physics.meta.stackexchange.com/questions/460/polymath-project-equivalent-for-physics
# Polymath project equivalent for Physics?

See here and here if you wish to know about Polymath. I for one would love to be a part of some collaborative problem-solving endeavor. It can be said that quite a lot of physics research is already massively collaborative (the LHC, for example), but I am talking about some research-level theoretical question. There are some issues that I thought about, and I would like to hear your own:

1. Lots of people are already working on lots of problems individually, for their theses etc. An ideal question should be somewhere between an examination question and a thesis topic for such a project to be feasible (i.e. too tough for an exam, but not big/exhaustive enough for an entire thesis).

2. Some senior practicing physicist(s) must mentor/administer such a project. This is the only way to ensure that nonsensical things do not get accepted by some sort of crowd consensus.

3. How is it different from physics.SE? This is where I have my greatest apprehension. Here people post their own solutions, which are intended to be complete. These are then judged and voted on according to their accuracy and clarity, etc. I don't see how the Polymath style can be applied to physics problems, although it might be possible. For example, with a maths problem the collaborative effort usually goes like this: people start by testing trivial cases, trying induction, trying to reduce the problem to an easier or known one, etc. In physics, if we are to study some physical problem, we might have to make simplifications to put it into an analyzable framework. It then reduces to using known physical laws and applying them to the specifics of the problem.

I don't know how clear I have been; I would like to know what you all think about this. I am in no position qualified to start such a project, but I would love to participate if such a thing existed.

1.
Need a blog or an equivalent platform with a commenting-type feature as the place to release problems to the general public. Will anyone volunteer to host such a problem on his/her blog/website?

2. physics.SE might be the best place to propose problems. I believe a problem has to be good enough to allow different kinds of approaches so that we may learn from different insights. However, I guess some problem proposals here, even sketchy ones, might be a good idea to start.

• It sounds interesting but I am not sure about the SE platform because it doesn't quite support discussions (which are of course absolutely necessary for collaboration). I think a moderated wiki would work best for such a thing. By the way, here's an older question with the same proposition: physics.stackexchange.com/questions/1599/… – Marek Feb 22 '11 at 19:44
• Should this be on meta? I support the idea fully, but it's hard to imagine what an "answer" would look like to this. I'm not sure where it should go, really... – spencer nelson Feb 22 '11 at 20:25
• @Spencer: no, this is not a question about the site itself (I believe). And IIRC it has already been discussed on meta but I can't find the thread. – Marek Feb 22 '11 at 20:32
• @Marek good point. Still, I'm not sure what an "accepted answer" would be. I want to reiterate, though, that this is a very good idea. – spencer nelson Feb 22 '11 at 20:35
• @Spencer: true, there's no right answer here. That means that this should in fact be made community wiki. – Marek Feb 22 '11 at 20:46
• +1 A polymath (or polyphysics?) project needs a blog and/or wiki. The SE platform just won't work for it. But asking what polymath projects would be good for physics is indeed a great SE question. – Peter Shor Feb 22 '11 at 20:59
• @Spencer @Marek However it is formulated, this will be read as a suggestion to make Polyphysics on this site, that's why I moved it to meta. – user68 Feb 22 '11 at 21:30
• @mbq: fine.
Actually, while writing my answer I also realized this is better suited for meta than the main site. – Marek Feb 22 '11 at 21:52
• Ok. So I guess to begin we might simply imitate the PM model. There, the commenting feature of blogs is used to discuss the problems and solve them. Once done, or in the intermediary stage, the wiki is updated to present the solution reached in a readable format. I guess physics.SE is the best place to propose questions then... – Approximist Feb 23 '11 at 0:54

Well, first I have to say that I love the idea. I like problem solving and it seems that it has basically died out around here recently. But a few months ago there were some neat problems going around that I liked thinking about, and it would be great if we could revive that. So, we just have a few problems to solve:

How to organize stuff? How will we decide what to work on? I suggest everybody just propose some problem in this thread and let the voting sort it out. Then we will start working on it in a separate question.

How hard should it be? For starters I would suggest something relatively simple, or at least something we know should be solvable. Some nice problem from mechanics, thermodynamics, etc. If it works, we can move on to genuine research.

How to manage discussions? If you followed Gowers's blog then you know that discussions there are miles long. There's no way we can manage that here. Perhaps we'll be able to exploit the chat room. But I guess we can at least try to do this and see what happens.

I'll add some problem I'd like to see solved later. Can't think of anything that isn't either too easy or too hard right now :)

• That is true. I am trying to think of a doable but tough physics problem as well. But perhaps just to get our hands wet, we might start with any tough exam-level question, just to see how things go. Something that involves a fair amount of concepts and computations?
– Approximist Feb 23 '11 at 0:57
• I searched the web for some potentially exciting problems that we could try solving as a community experiment. Source: iypt.org/new/data/download/problems/Problems_2011.pdf. These problems are intended for high school students, but perhaps they are open-ended enough to allow for some decent amount of rigour in investigating them (?) – Approximist Feb 24 '11 at 11:05
• Btw, would you be willing to set up a blog to kick this off (once we decide on a problem)? – Approximist Feb 24 '11 at 11:06
• @Approximist: Yeah, I guess high school olympiad problems and such would make good problems for starters; I'll try to find something too. By the way, I was actually thinking about starting my own blog recently, so this seems like a perfect opportunity to kill two birds with one stone. Or did you have a single-purpose site in mind? In that case maybe a wiki would be better than a blog. – Marek Feb 24 '11 at 13:03
• @Marek: Please, go ahead and start the page but be sure to inform us about it :) Greets – Robert Filter Feb 24 '11 at 22:29
• @Marek your blog would be good. I also found some university-level tough questions here: ortvay.elte.hu/main.html. This is for university students so it has some tougher problems that look good. – Approximist Feb 25 '11 at 9:54
• @Approximist: all right, I'm doing it. But I have no experience with blogs so I welcome any suggestions. For one thing, I'll definitely need TeX. What are the options then? wordpress, blogspot, ...? And which of them is best? Not sure where I should begin but this place is probably not good to discuss these matters. – Marek Feb 25 '11 at 11:04
• @Marek Awesome! In my opinion wordpress is better. You can use the default latex by using $latex [your latex code]$. If you prefer something more sophisticated, luca trevisan has written a python script lucatrevisan.wordpress.com/latex-to-wordpress where you can convert your tex files to wordpress format.
T. Tao has some more on this on the about page of his blog. – Approximist Feb 26 '11 at 3:02

• @Approximist: thanks. The default LaTeX is too painful for articles full of math (as I've just found out). I'll see whether your other suggestions help. – Marek Feb 26 '11 at 14:43

• @Marek: As far as I know, LaTeX is the best option available. Using a WordPress plugin would be acceptable; however, another option is MathJax, which is great. With it, we could add a preview implementation for comments or posts, so math tags could be processed in real time (similar to this site or math.SE). – Robert Smith Mar 1 '11 at 18:14

• @Robert: is MathJax supported across all browsers? I just tried to view this site in Chrome and got pretty broken rendering. But anyway, Approximist's script suggestion works quite well. – Marek Mar 1 '11 at 18:22

• @Marek: Sure, it is supported across all browsers. With respect to the broken rendering, take a look at github.com/mathjax/MathJax/issues#issue/65 (I posted the images in the last comment, February 11). It is probably a bug waiting to be fixed; in the meantime, you can work around it by clearing your cache. Are you using Chrome for Windows? Still, as you say, the WordPress plugin for LaTeX should work well for these purposes. However, in my opinion, MathJax is much more powerful. – Robert Smith Mar 1 '11 at 19:00

• @Robert: nah, this is Linux, but I use mainly Firefox and my Chrome installation is probably a bit outdated, so perhaps that's the reason. In any case, I like MathJax on this site and it's definitely as comfortable as it gets when it comes to TeX in browsers. So I'll consider the option of using it from WordPress too. – Marek Mar 1 '11 at 20:36

• @Marek: Sure, that sounds right. – Robert Smith Mar 1 '11 at 22:41

The idea is great, but a good idea is not enough. You need to find appropriate questions, an efficient method of collaboration, and a critical mass of brains (no idea how many is enough).
To start with, it may be good to ask questions on Physics.SE, perhaps as community wiki for open questions. If you see that:

• there is some interest, and
• too much discussion/reformulation is involved for the frame of SE,

then it may be worth starting a blog/wiki/... Otherwise, the idea might not be as good as it seems.

Kinds of questions which may work well (note that I am biased):

• Short, well-defined ones (e.g. quantum state engineering, wave optics). Stimulus: one of my questions (though it did not get much attention).
• Open questions that are easy to solve or simulate, but more difficult to define well (e.g. complex systems - modeling human behaviour, simple models of observed phenomena). Stimuli: Is That Seat Taken? (or more), the International Young Physicists' Tournament (click -> problems; these are mostly experimental and for high school, but they are interesting and open, so they show how to ask potentially good questions).
Distance Between Two Points: Worksheets and Practice

These free printable worksheets (PDF) help 7th grade students practice finding the distance between two points on the coordinate plane using the Pythagorean theorem. Each worksheet can be printed out for use in school or at home, and each contains an answer key for easy reference at the end. Also remember to play some 7th grade math games online for more mental math practice. Topics covered:

• Graph straight lines on the Cartesian plane.
• Find the distance between two points, and the midpoint of the segment joining them.
• Use the slope formula; find the gradient of an interval.
• Find the shortest distance between a point and a line: the segment connecting the point to the line that is perpendicular to the line.

The distance formula. For any two points (x1, y1) and (x2, y2), the distance between them is

    d = √((x2 − x1)² + (y2 − y1)²)

and the worksheets provide adequate practice in substituting the x-y coordinates into it. The easiest case is two points on an axis, or more generally two points that are vertically or horizontally aligned: the distance is just the difference along that axis. For example, given the points (16, 20) and (16, 10), with one length unit = 1 cm, the distance is 10 cm. The midpoint formula answers the converse question, finding the point exactly halfway between two given points; it can also be inverted to recover the other endpoint of a line segment given one endpoint and the midpoint.

Sample problems:

1. Find the distance between the points (−3, 4) and (2, 5).
2. Graph the points C(2, 2) and D(6, 2), and the points E(−10, −9) and F(−10, −3), and find the distance between each pair.
3. Find the distance between points P(8, 2) and Q(3, 8) to the nearest tenth. (A) 11 (B) 7.8 (C) 61 (D) 14.9
4. A high school soccer team is going to Columbus, Ohio, to see a professional soccer game. A coordinate grid is superimposed on a highway map of Ohio; the high school is at point (3, 4) and the stadium in Columbus is at point (7, 1). How far apart are they?
5. Find the slope of the line between (14, 1) and (18, 17), and determine the equation of the line passing through A(6, 5) and perpendicular to the line y = 2x + 3.
6. (Harder, in three dimensions.) Find the distance between the spheres x² + y² + z² = 4 and x² + y² + z² + 2x + 4y + 6z − 86 = 0. Hint: find the center and radius of each sphere, and then use a simple geometrical argument.
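As an illustration (not part of the original worksheets), the distance and midpoint formulas translate directly into a few lines of Python; the function names here are my own:

```python
import math

def distance(p, q):
    """Distance between p = (x1, y1) and q = (x2, y2),
    via the distance formula d = sqrt((x2-x1)^2 + (y2-y1)^2)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    """Point exactly halfway between p and q."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def other_endpoint(endpoint, mid):
    """Recover the second endpoint of a segment from one endpoint
    and the midpoint (invert the midpoint formula)."""
    return (2 * mid[0] - endpoint[0], 2 * mid[1] - endpoint[1])

# Sample problems from the worksheets:
print(round(distance((-3, 4), (2, 5)), 1))   # sqrt(26), about 5.1
print(round(distance((8, 2), (3, 8)), 1))    # sqrt(61), about 7.8
print(midpoint((2, 2), (6, 2)))              # (4.0, 2.0)
print(other_endpoint((-7, -6), (-10, 8)))    # (-13, 22)
```

Note that `math.hypot(dx, dy)` is exactly the Pythagorean hypotenuse length, so it computes the distance formula without squaring and rooting by hand.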
Distance along an axis and on the number line. When two points share a y-coordinate, the line through them is horizontal, and the distance is the horizontal distance between the x-coordinates: to find the distance between A(25, 4) and B(21, 4), plot the ordered pairs, draw the segment between them, and note that its length is 25 − 21 = 4. The same idea gives the distance between two numbers on a number line: it is the absolute value of their difference, since absolute value measures distance from zero (the absolute value of 29 is 29, because it is twenty-nine spaces from zero). In general, for points (x1, y1) and (x2, y2), the changes in x and y are

    ∆x = x2 − x1,  ∆y = y2 − y1,

and either or both of these may be negative; the distance formula squares them, so distances are always positive, or zero if the points coincide, and the distance from A to B is the same as the distance from B to A.

Distance in the complex plane. Let (a, b) and (s, t) be points in the complex plane. The distance between them is the modulus of the difference of the two complex numbers, |(s + ti) − (a + bi)|; in particular, the distance between the origin (0, 0) and the point (a, b) is |a + bi|.

For points that are not axis-aligned, the general procedure is:

1. Plot the two points with the given coordinates.
2. Draw a line to join them and form a right triangle with this segment as the hypotenuse.
3. Use the coordinates to find the lengths of the short sides of the triangle, then calculate the distance d (the length of the hypotenuse) using the Pythagorean theorem, a² + b² = c², where c is the longest side of the right triangle.

One strategy students might use at first is simply to count the units between the points. This is okay while they master the concept of distance; the distance formula then handles the cases counting cannot.

More practice problems:

1. What is the distance between (−4, 0) and (5, 0)? What do the ordered pairs have in common, and what does that mean about their location in the coordinate plane?
2. What is the distance between points C(−2, 3) and D(0, 5)?
3. Find the other endpoint of a line segment given one endpoint and the midpoint: (a) endpoint (−7, −6), midpoint (−10, 8); (b) endpoint (5, −9), midpoint (−2, −4).
4. Name a point that is between 50 and 60 units away from (7, −2), and state the distance between the two points. (Many answers are possible, e.g. (60, −2), which is 53 units away.)

Free worksheets on these topics, with answers, are available, e.g. the Kuta Software "Distance Formula" worksheets for Infinite Pre-Algebra, Algebra 1, and Geometry; round answers to the nearest tenth where necessary.
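These one-dimensional and complex-plane views of distance can be checked directly in Python, since the built-in `abs` of a complex number computes its modulus; a small sketch (again my own illustration, not from the worksheets):

```python
# Horizontally aligned points A(25, 4) and B(21, 4): the distance is the
# absolute value of the difference of the x-coordinates.
print(abs(25 - 21))                   # 4

# Number line: absolute value measures distance from zero.
print(abs(-29))                       # 29

# Complex plane: distance is the modulus of the difference of the
# two complex numbers, matching the distance formula exactly.
p, q = complex(-3, 4), complex(2, 5)  # the points (-3, 4) and (2, 5)
print(abs(q - p))                     # sqrt(26), same answer as before

# Distance from the origin (0, 0) to the point (a, b) is |a + bi|.
print(abs(complex(3, 4)))             # 5.0
```

This is why the complex modulus and the two-point distance formula agree: |(s + ti) − (a + bi)| = √((s − a)² + (t − b)²) term for term.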
2021-07-25 21:36:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4642748534679413, "perplexity": 980.3743423254568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151866.98/warc/CC-MAIN-20210725205752-20210725235752-00096.warc.gz"}
http://www.reddit.com/r/math/comments/kiyt0/mathematics_and_drug_use/?sort=top
[–] 81 points (30 children)
sorry, this has been archived and can no longer be voted on

It depends. I've definitely met like-minded graduate students and even professors (though I haven't done anything with any professors). In my own experience (from the hard sciences) most people in the science/mathematics department are either very very open to psychedelics and/or weed or are completely straight laced. Just anecdotally, I can attest to my own interest in set theory stemming from being high. Also the first time I got a much more intuitive understanding of special relativity and Minkowski space. Learn sober, think high.

[–] 66 points (2 children)

> Learn sober, think high.

Good stuff

[–] 7 points (0 children)

perceptiveness vs depth of introspection?

[–] 2 points (0 children)

Someone hurry up and sell this

[–] 41 points (3 children)

"Learn sober, think high." I really like that quote.

[–] 17 points (1 child)

Someone in r/quotesporn needs to turn this into a wallpaper.

[–] 2 points (0 children)

r/quotesporn? hrm.. Am I going to go there and be bombarded by 1st semester graphic design students telling me visually why america is evil and I'm scum for not burning down my local government establishment? Maybe that's just my once-a-month visit to r/politics whispering in my ear.

[–] 5 points (0 children)

Somewhat unrelated, but I remember seeing something while reading on lucid dreams, where they would get a group of people and train them to realize they are dreaming by giving them a slight visual stimulus (since your eyes still perceive light while you're sleeping). So, they had this group of people who could have lucid dreams at will, and with practice, they would last longer and control their dreams better. And then, they could pass their time solving mathematical problems, training some skills or doing various mental exercises. The study was pretty much trying to prove that these trainings would persist once they woke up, and their results seemed pretty conclusive. So yeah, imagine what you could get done in your dreams!

[–] 14 points (0 children)

Like I always say (as a physics major): keep on learning, but don't stop burning.

[–] 4 points (8 children)

> "Learn sober, think high."

Einstein did the same thing. He was a notorious pipe-smoker. You ever tried smoking a pipe? The buzz is an entirely different experience from anything you get smoking cigarettes.

[–][deleted] (1 child)

[deleted]

[–] -1 points (0 children)

Yes.

[–] -2 points (5 children)

> The buzz is an entirely different experience from anything you get smoking cigarettes.

citation?

[–] 0 points (4 children)

[–] -1 points (3 children)

Alright, look.
No fucking way the buzz could be entirely different. It's the same chemicals. The method of intake isn't even different (i.e. snorting vs. smoking etc.); this is simply a different smoking apparatus than a paper. I don't tell you how to do math. Don't tell me how drugs work.

[–] -1 points (2 children)

> It's the same chemicals.

Nope.

> The method of intake isn't even different

That's wrong, too.

> Don't tell me how drugs work.

Okay. I like watching idiots make claims based on thin air. Let me grab some popcorn.

[–] -1 points (1 child)

Alright dick face. You don't know what you're talking about. "Nope" and "That's wrong" are not rebuttals at all. Go stick a protractor in your urethra.

[–] 1 point (0 children)

> Go stick a protractor in your urethra.

Been there, done that. Come on, guy. It only takes 5 minutes to search anything. Tobacco isn't the same everywhere and it's not consumed the same way either. Next time, I'm charging for doing your research for you.

[–] 2 points (12 children)

If you have time, could you help me then? I think I understand the basics of set theory. I studied physics for a while, and believe strongly in a model of the universe (similar to Mersini-Houghton's, if not identical), based on emergent patterns in nature leading to the reality we see before us. I do not, however, entirely see the incredible point of set theory. I've also read Wolfram's "A New Kind Of Science", and was non-plussed at his reliance upon discrete state automata.

In my opinion, while the universe could be based on discrete states, a hyperdimensional waveform model seems to fit reality, as well as the mathematical requirements of Occam's Razor (simplicity and elegance of a continuous wave-function, and the extended mathematical operations that can be performed therein, particularly Fourier's). On the whole, set theory always seemed like a means to discretize, and in so doing, create a foundation for natural numbers in math. My question is: why is set theory so critical to the situation? Many math questions I have understood in a different sense, using intuition, before I finally understood the formal logic, and was able to reconcile the two. Is this one of those cases?

tl;dr - Set Theory. Why?

[–] 6 points (3 children)

Huh? Set theory doesn't "discretize" anything. Discrete is foundational, and continuous comes from it, in a constructivist approach. I don't know that physics has much if anything to do with set theory.

[–] 0 points (2 children)

I'm sorry, I somewhat disagree with that point. At least in physics, continuous is foundational, and discrete comes from it (an observation is required to create a discrete quantity, by applying a specified metric). I constantly hear how set theory has revolutionized x, and y, and sometimes z, but I have never quite understood how. By creating classes? This could be done without a purely rigorous set theory. By fixing relationships, and transform properties over particular domains? Perhaps, but again this seems more like semantics than actual innovation. What is the great innovation set theory brings? I am not asking to be a jerk, but I am either missing something enormous, or am taking something for granted, and would like to know which.

And in physics it is actually used to some extent, particularly in describing the properties of various transforms, specifically a transform imposed by a (gaussian) field (the transforms being constrained as abelian or non-abelian over the field).

[–] 3 points (1 child)

The innovation of set theory was that every mathematical object can be regarded as a set, be it numerical, discrete, or continuous, and that this defines the mathematical object, everything that can be regarded as true about it, and its relationship to all other mathematical objects. Set theory determines mathematics, in that it can be used as a foundation for mathematics in order to define it. Mathematics determines physics, in that a rigorously specified model of the universe must be a mathematical object. (What else could it be, from the point of view of rigorous science?) Therefore set theory could tell you every possible truth about such a model of the universe, and about the actual universe if the model is accurate.

[–] 0 points (0 children)

Thank you, this is the answer I was looking for. It makes much more sense to me, as this was the property I was taking for granted. May I interpret this statement to mean the key is that set theory allowed the uniform encapsulation of mathematical subsets which conformed to the defining properties of the set? I understand why this is so important, but I also think that my dominantly physics-oriented viewpoint would simply take this for granted and brush it off, beyond the obvious applicability to gaussian field theory, particularly whether one can consider various transforms to be abelian or non-abelian over that field, and being able to group those operations and fields as such. I think I will need to look closer at the conclusions of set theory.
[–] 4 points (3 children)

Wolfram is batshit insane, and that book was shit. Set theory is a formal way of exploring notions of truth, properties of numbers, and properties of number-like constructs.

[–] 0 points (2 children)

That's odd, I did not consider him to be batshit insane, merely behind the curve in some areas. Emergent property theory and complexity theory are the big fields right now. The difference is they use continuous vs. finite automata. I always thought set theory was a bit of a simplistic way of treating numbers. In what way does defining the range and domain of functions, axioms, or properties as separate objects give you any infinitely greater insight than could be had by doing so implicitly?

[–] 1 point (1 child)

Batshit insane was maybe wrong. I should rather say that he is arrogant bordering on messianic, and presented ideas that had been thoroughly explored well before his publication in other fields, and then adamantly defended them by downplaying the contributions that had been made to the field, as he spent years in isolation and then self-published a work without peer review, all while insisting that said (rehashed) work was a revolution in all fields of science. And I can't claim set theory gives infinitely greater insight. I'm a chemist, not a mathematician.

But I do know that there are pitfalls to implicitly defining relationships, and there are certain systems, like cardinal numbers, which are very hard to define implicitly. Exploration of these concepts in a formalized and axiomatic way may ultimately give insight into solutions to problems that, if solved, would legitimately (and not in a Wolfram sense) solve a lot of problems, and give good insight into which sorts of problems can be solved in a discrete (and therefore computable) fashion and which sorts of problems require other sorts of math.

[–] 1 point (0 children)

On those points I can heartily agree. The thing is: a lot of these ideas can be viewed from "different derivatives" as I think of it, ie different levels of complexity, so it is easy to mistake the complexity of an idea. I have had occasions when I had massive epiphanies, then talked about them with others, and they had known this all along. OTOH, I've had far more occasions where things that were so trivial to me I considered them more obvious than breathing were considered to be unorthodox, but quite valid and actually rather innovative solutions in other areas. Complexity theory is hard, because every n derivatives (where n is even), you sometimes end up where you started, only with more solutions (similar to a graph of x-2). But in this case, yeah. He seems to have missed (or attempted to write on his own in a simplified form) much of emergent property theory of the last 2 decades. Personally, a continuous base excitation, transformed through an infinitely continuous transform (actually more like a wave flowing through a hyper-dimensional manifold or Riemann surface), seems to be the best solution to both his problem and the desired TOE, as it satisfies all the requirements of physics and math with regards to input and output constraints (as well as Occam's razor). And I am, in fact, speaking of Mersini-Houghton's theory.
There are just sometimes that math is so perfect it is its own evidence (yes, I am aware of the obvious danger there).

[–] 1 point (1 child)

I hope you're on drugs.

[–] 1 point (0 children)

For not seeing the point of set theory? For attempting to represent the entire universe, in its entire existence, as a single, hyper-dimensional wave, whose behavior is purely governed by its initial state and the geometry of the medium it passes through (which is also defined/changed by its passage)? Or are you simply from /r/trees?

[–] 1 point (1 child)

If the concept of encapsulation is completely observer-driven, given the universe being a hyperdimensional waveform with varying frequency, phase, and amplitude, resulting in local-dimensional density and phase shifts, why can't I, the observer, refer to local density clusters as objects, local phase shifts as time, and then mess around with discrete groups which are a bit more native to myself, even though they're completely reliant on other objects existing?

[–] 1 point (0 children)

Thank you, this is a discussion I feel I need coffee for. Until then, let me try to answer what I can with my pre-caffeine, cro-magnon brain. You can deal with local groups as discrete objects/groups; the issue is, there is an error term when dealing with them as such (as you are using a finite integral to deal with an infinite response). In fact, the current SM and all of classical physics can be considered to be the case you describe, with the exception being that the error term is not considered (outside of entanglement and true quantum effects), and the waveform is quantized at a given resolution.

Also, it is possible (and believed) that there are "things" (particles, forces, features, properties) that operate with minimal real-space component but a large hyperspatial component (depending on the geometry, they could have resonance within the CY manifold, or whatever it is replaced with in the future, and while it might have some interface with our space, it would be minimal, effectively like a neutrino or WIMP, or even what we consider a tachyon). Quantizing the waveforms (or, as I prefer to treat them, discrete wavelets) is a lossy transform, and unfortunately we have not reached consensus on how lossy, and in which way, so we cannot actually specify the complete form of the error term from the finite integration. Now, to completely contradict all my earlier points: all evidence thus far says that this error term converges relatively quickly in most cases (given low energies), and is closely bounded under most interactions, which is kind of why any physics works at all. So you are free to use discrete groups to work with objects, so long as you understand there may be interactions you are not considering (though likely of a much lower order). Once this field has matured more, it should be easy to create formal bounds on the error term for various modes (ie bosons have error terms x, fermions y, other people over here have terms z, and all are bounded on field g given the energy level remains below threshold q). However, thank you for asking, even if only for clarification; I feel this is one of the most important questions to be asked right now, "how does a waveform model change anything?", and I have not heard too many answers yet. These are really just my opinions/speculation, but they should be backed by some (flimsy) math/physics (specifically Hilbert). OTOH, the thing we get out of waves, imho, is the ability to change spatial geometry "on the fly", as it were, much more easily.

We have math for this stuff, lots of it, and can use Fourier and other transforms that we cannot use in particle land, or even in string-space. Which to me is awesome!

edit: Came back to read this again after some caffeine. While I think most of what I said was right, even I'm having trouble understanding it. Anyway, let me try to sum it up like this: you are quite welcome to use discrete groups to deal with this, as long as you understand that you are making the assumption that the extended/non-local waveform components rapidly converge to 0 outside of your area of local analysis. Thus far, all evidence gathered suggests this is the case.

[–] 77 points (7 children)

well, you know what they say: sex, drugs, and mathematics.

[–] 49 points (5 children)

You're probably the guy who laughs at teachers' jokes too.... I approve.

[–] 17 points (4 children)

Hey some of those are funny!

[–] 3 points (3 children)

What's non-orientable and lives in the sea?

[–] 1 point (2 children)

What?

[–] 9 points (1 child)

[–] 1 point (0 children)

Hahaha!

[–] 5 points (0 children)

Subtract the clothes, divide the legs, and then multiply!

[–] 55 points (14 children)

I think using LSD helped my brain see things a bit differently, but after the fact more so than during. Having said this, I don't think I would be doing shit for math while actually tripping.
[–] 47 points (13 children)

Yep, the experience isn't so much "Whoa, man, I'm tripping balls and now I'm solving the Riemann Hypothesis, I don't even know what's going on!" but more that you learn to see patterns that have always been there but you've never noticed before. Is it guaranteed to translate into making you a better mathematician? Absolutely not, but if it doesn't, it'll broaden your horizons in other ways.

[–] 41 points (12 children)

Check first if you have a family history of schizophrenia. If not for yourself, for those around you who may have to handle a schizophrenic breakdown. Not fun.

[–] 12 points (0 children)

The connection between psychedelics and schizophrenia seems obvious, but research into this subject is highly controversial and unsettled, and not all in this field agree that there is a strong relation between the two. At the same time, one should always be aware of family health history, especially for syndromes like schizophrenia, because if you run into problems, not using psychedelics will not protect you.

[–][deleted] (6 children)

[deleted]

[–] 21 points (4 children)

Nah man, I am the walrus. You got the wrong guy.

[–] 15 points (0 children)

*wrong guys.

[–] 3 points (0 children)

I am pretty sure that you are a dolphin, nothing more.

[–] 2 points (0 children)

I don't like what you're implying here.

[–] 2 points (0 children)

Shut the fuck up Donny.

[–] 1 point (0 children)

Had I a sardine to offer, I would.

[–] 0 points (2 children)

I am under the impression that schizophrenia is related to overstimulation of the mesolimbic (dopamine) pathway rather than to serotonergic stimulation of the prefrontal lobe, and that any correlation between the hallucinations induced by drugs and those experienced by schizophrenics is incidental. If that is the case, drugs of the mesolimbic pathway like THC would be more likely to trigger latent schizophrenia than drugs like LSD or psilocin. I believe I have read scientific evidence in the past supporting the possibility of the initiation of latent schizophrenia by cannabinoids.

[–] 3 points (0 children)

My understanding is that it usually takes a stressful event or illness in the late teens and early twenties to bring it out. LSD can be very stressful at times, and has been shown to cause schizophrenic symptoms to manifest. I have personally seen this happen. I took some with a friend, and for the following couple days he was acting strange. The day after that he began having a full schizophrenic attack. We had to finally commit him to the hospital after an incident at a 7-11.

[–] 1 point (0 children)

Visual hallucinations are only the most obvious effects of the drugs, but one's cognition is altered in other ways as well. That said, you're right that the psychedelics are no longer looked to as a drug model for schizophrenia. I'm quite sure that, as you suggest with cannabis, latent mental health issues can be brought to the foreground with other more potent psychedelics.
[–] 25 points (7 children)

The most useful aspect of these things to an intellectual is not necessarily directly related to what they are working on; it is the insight gained into one's self & mind. However, in these altered states of consciousness, a person will be thinking about things in a novel way, which can yield deep insight. A friend once related to me that under the influence of DOB (a strong psychedelic) his mathematical understanding of Maxwell's equations finally 'clicked' and resolved itself into a deep physical intuition as to the nature of fields. I highly doubt he was working out formulas & such, but he gained a nontrivial intuitive insight into the phenomena that lasted beyond the experience.

> I don't see how being high on anything could make it any easier to understand complex concepts, but perhaps I'm wrong.

In this, you are correct, but the intoxication from the experience is probably not what you would expect. A striking degree of lucidity can remain during the experience. Taken in normal/moderate doses, the intoxication experienced isn't quite as incapacitating as it is distracting. The mind 'loosens' on many levels and starts creating broader connections between ideas. In such a state, nobody is likely to have the degree of concentration necessary to do complex mathematics, but ideas can be reinterpreted in a new perspective.

[–] 10 points (6 children)

It remains to be seen whether this feeling of understanding stems from true understanding or simply the release of neurotransmitters in the brain.

[–][deleted] (5 children)

[deleted]

[–] 9 points (0 children)

They are the same thing as he worded it. What he was probably trying to say was that under the influence of drugs, one may simply have a higher opinion of their thoughts. Drugs amplify pre-existing chemically-rewarded thoughts.

[–] 6 points (3 children)

Yes. It's possible to feel you understand something but to not actually understand. It's like speaking a foreign language you don't know in a dream -- you feel like you know it but you don't. Your brain is tricking you. Similar things can happen when you're tripping. I'm not saying you can't reach new levels of understanding tripping, but much of the time you might be fooling yourself.

[–][deleted] (2 children)

[deleted]

[–] 0 points (0 children)

... sometimes. I didn't say in every case, I said "much of the time".

[–] 0 points (0 children)

You can carry it further. Maybe after coming down from the trip you still feel like you have the insight. Maybe you also feel like you have to teach everybody else your insight. I've seen a lot of messianic complexes developed during/after acid trips.

[–] 24 points (7 children)

In my experience knowing many very talented mathematicians, I think very few of them used illegal drugs, and those who did did it just for fun. I never heard any of them argue that use of drugs (beyond caffeine) made them better at math. I've been lucky to know some real stars of math, and I'm pretty sure they're not using drugs.

[–] 14 points (3 children)

Various sources say that Erdős was a regular user of amphetamines and Ritalin, and complained of a decline in ideas when he didn't have them.
[–] 13 points14 points * (2 children) sorry, this has been archived and can no longer be voted on Came here to post this. He even made a bet that he could go a month without amphetamines. After 1971 he also took amphetamines, despite the concern of his friends, one of whom (Ron Graham) bet him $500 that he could not stop taking the drug for a month.[12] Erdős won the bet, but complained that during his abstinence mathematics had been set back by a month: "Before, when I looked at a piece of blank paper my mind was filled with ideas. Now all I see is a blank piece of paper." After he won the bet, he promptly resumed his amphetamine use. From wikipedia [–][deleted] (1 child) sorry, this has been archived and can no longer be voted on [deleted] [–] 1 point2 points (0 children) sorry, this has been archived and can no longer be voted on I'd read the quote without making that obvious connection, so thanks for pointing it out. [–] 5 points6 points (0 children) sorry, this has been archived and can no longer be voted on Seconded. [–] 0 points1 point (1 child) sorry, this has been archived and can no longer be voted on It's likely that they have naturally high levels of dopamine. Some people get really good at math when they take drugs to artificially increase their levels of dopamine. [–] 17 points18 points (33 children) sorry, this has been archived and can no longer be voted on Well Paul Erdős is said to have used a whole lot of amphetamine. It's been a while since I read the biography of him, but I recall that so did his colleagues in Hungary as well. Not to get "creative" or so, but to be able to concentrate better. (Now and again I see newspaper notices about how college students fake ADHD for the medication in order to improve their cramming for exams.) [–] 22 points23 points (14 children) sorry, this has been archived and can no longer be voted on Stimulants aren't really mind altering. They're like caffeine, they make you alert. 
Psychedelics, on the other hand, radically alter your perception, and in some cases allow people to think of things in ways they wouldn't normally. Supposedly Francis Crick developed the double helix model of DNA under the influence of LSD.

[–] 6 points (12 children)

I differ with that. Stimulants lead to obsessive and shallow reasoning. Papers and arguments by people on stimulants tend to be repetitive and tedious, and don't necessarily go anywhere. Furthermore, I have a friend in bioscience who did significant work on important vaccines, got hooked on stimulants, and is now a completely different person with a huge amount of trouble staying on track and finishing things. Stimulants, in my opinion, are strongly and dangerously mind altering even in very small amounts. This is just my experience, and I know users who have not experienced these problems, but losing a friend to stimulants was a big warning to me.

[–] 4 points (7 children)

Considering caffeine is a stimulant, and a prerequisite for getting anything done whatsoever, I'd say it definitely depends on the particular type of stimulant that you abuse. Sorry to hear about your friend, though. :( FWIW I wouldn't touch meth with a 600-foot pole, ever ever ever.

[–] 11 points (6 children)

Not to quibble, but there is an argument to be made that caffeine is actually a rather strong, disruptive, and addictive drug. This is not popularly accepted, but there is interesting evidence. For example, after being given drugs, most spiders can still make webs, though the shape and form may be altered. The exception is caffeine: spiders on caffeine are incapable of making any kind of web, alternative or otherwise.
If you think that something is a prerequisite for getting anything done whatsoever, then you may be addicted. Caffeine played almost no role in my life until I recently started drinking tea more than occasionally, and watching society as a non-caffeine user shows, if nothing else, that people tend to harbor unexamined assumptions about themselves and their behaviors.

[–] 4 points (1 child)

I am addicted to caffeine and find it hard to get anything done.

[–] 0 points (0 children)

Same here, but I also need to stay awake in class and stay awake at night to see if I can possibly comprehend what to do with my homework. I try to cut back on days I don't need it, but when I start to yawn my first thought is, "I need some coffee".

[–] 2 points (1 child)

> Not to quibble, but there is an argument to be made that caffeine is actually a rather strong, disruptive, and addictive drug. This is not popularly accepted, but there is interesting evidence. For example, after being given drugs, most spiders can still make webs, though the shape and form may be altered. The exception is caffeine: spiders on caffeine are incapable of making any kind of web, alternative or otherwise.

Spiders have very different brains than humans; the research doesn't carry over. There are plenty of caffeine studies done on humans that show its safety, its efficacy, and the ease of reducing dependency (it takes about a week for everything to go back to normal).

[–] 2 points (0 children)

This is what came to my mind.

[–] 1 point (0 children)

A silly spider experiment is not "interesting evidence".
That's like hotboxing a fruit fly to understand the impact of smoking pot on humans. That said, it is interesting to see how some substances are seen as "drugs" (bad) and others as a completely neutral, natural way to live your life. Finally, coffee (one of the main ways Americans get caffeine) is a pretty excellent substance for your body. Many studies have found a fairly wide array of physiological benefits from its consumption.

[–] 0 points (0 children)

Don't worry, I know I'm addicted to caffeine, but I use it carefully. I have off days, set limits, and regularly go on extended breaks. Its positive effects outweigh its negative effects for me. I know some people get confused between cannabis, alcohol, caffeine, paracetamol, et al., saying "some of those are not drugs". I am not one of them :-p

[–] 2 points (0 children)

I guess... except in the case of Paul Erdős?

[–] 2 points (0 children)

If you actually have ADD or ADHD, the loss-of-creativity thing doesn't apply. Stimulants make my creativity go through the roof.

[–] 0 points (1 child)

I was on Ritalin for 6 years, from age 15 to 21. It got me through school, and I was on it up until this semester (the second semester of the fourth year of my Bachelor's degree), when I finally gave it the flick. At first it worked wonders for my ADHD brain and certainly got me through high school. It allowed me to learn how to learn, which changed my life. But lately (for a couple of years now, really) it has made me compulsive and stressed and mentally lazy. Now that I'm off it I feel healthy and curious again. I'll never go back on it. I took pseudoephedrine a couple of weeks ago and even that made me feel awful.
So yes, stimulants can be great, but you may not be able to use them forever.

[–] 0 points (0 children)

I think you're only supposed to stay on it until your brain wires itself strongly enough in the right way that you don't need it anymore. Uppers give me hella stomach aches. :(

[–] 0 points (0 children)

There's a bit of a difference between amphetamine and Ritalin. Although Ritalin is definitely a stimulant, it really does... 'help' you stay focused quite happily on a single thing for many hours at a time. In my experience, anyway.

[–] 8 points (12 children)

Erdős likely had ADHD himself. A symptom is reduced mass in the frontal cortex, affecting mainly dopamine production or reuptake. Taking just the right dosage of amphetamines can offset this imbalance.

[–] 6 points (11 children)

Bingo. What do you think Adderall is? Yup: a combination of levo- and dextro-amphetamine. From the Wikipedia article on Erdős:

> After 1971 he also took amphetamines, despite the concern of his friends, one of whom (Ron Graham) bet him $500 that he could not stop taking the drug for a month. Erdős won the bet, but complained that during his abstinence mathematics had been set back by a month: "Before, when I looked at a piece of blank paper my mind was filled with ideas. Now all I see is a blank piece of paper." After he won the bet, he promptly resumed his amphetamine use.

[–] 8 points (4 children)

I love that he says all of mathematics was set back by a month, not merely his own work.
[–] 5 points (3 children)

His work was famously collaborative to an unequaled extent; it was an exaggeration, not a matter of pure vanity.

[–] 0 points (2 children)

I am familiar with the quote and its relevance. I just appreciate the phrasing.

[–] 2 points (1 child)

Ah, OK, I can see it that way now that you explain.

[–] 0 points (0 children)

Thanks!

[–] -1 points (5 children)

ADD is like being slightly pissed off, slightly tired, and absolutely bored out of your skull with everything, constantly. ADHD is roughly the same, but without the slightly tired.

EDIT: Or, you know, downvote someone who knows from both personal experience and extensive research. That's cool too.

[–] [deleted] (1 child)

[deleted]

[–] 0 points (0 children)

At least you're doing it right. That's the correct sort of thing to downvote for, not for contributing information or correcting errors.

[–] 0 points (2 children)

I, for one, didn't downvote you. "Slightly pissed off, slightly tired, and absolutely bored out of your skull with everything, constantly" would very easily manifest itself as "all I see is a blank piece of paper" to someone whose primary interest in life was mathematics.

[–] 0 points (1 child)

"All I see is a blank piece of paper" is true of most people anyway. It doesn't mean that most people have ADD; it just means most people aren't especially creative. Which is nothing new.
[–] 0 points (0 children)

Yes, but most people haven't studied math their whole lives, and most people don't get much benefit to their mathematical creativity from taking uppers.

[–] 1 point (3 children)

He took amphetamines so he could stay awake more so he could do more math.

[–] 1 point (1 child)

He took amphetamines so he could ~~stay awake more~~ focus clearly enough to do more math.

[–] 0 points (0 children)

Oh, good to know. I've always heard the "wanted to stay awake" reason.

[–] 8 points (1 child)

Basically all the math majors at my school are stoners.

[–] 2 points (0 children)

I found this to be about half true. Also CS majors...

[–] [deleted] (4 children)

[deleted]

[–] 1 point (1 child)

It can be more difficult to go through the motions while under the influence, but there hasn't been any evidence of negative long-term effects. Personally, I lose track of where I am in a proof if I'm high, generally because I start going off on trains of thought only tangentially related to what I'm doing.

[–] 1 point (0 children)

It always impresses me when someone says they did maths high. I would honestly just get too confused and forgetful. That said, I wouldn't have any motivation to do maths anyway, and would most likely put on the TV and order a pizza.
[–] 8 points (0 children)

During my undergrad years (pure math), coffee was the "drug" of choice for all of us. I didn't know of anyone who did anything else. During my postgraduate years (again pure math), people were even more uptight, and some didn't even drink coffee :D.

[–] 6 points (11 children)

[–] 15 points (7 children)

Because we're bored with everyday life and it makes the mundane just a little better. At least that's how I saw it. I haven't done anything "worse" than pot, but I was high all through grad school and graduated with honors and recognition.

[–] 21 points (2 children)

Critical circlejerk levels exceeded. Abandon ship.

[–] 4 points (0 children)

[–] 0 points (0 children)

Comment of the week.

[–] 2 points (1 child)

Upvoted for using "we."

[–]Algebraic Geometry 2 points (0 children)

Yeah, I was thinking the same thing, haha.

[–] 1 point (1 child)

[–] 1 point (0 children)

Yes. We have a program where each graduate program gets to pick their outstanding graduate student of the year. :-)

[–] 10 points (1 child)

Because "very bright" individuals tend to have more money to afford such drugs... Just a hypothesis.
[–] 3 points (0 children)

Psychedelics are dirt cheap; I doubt that that is the sole reason.

[–] 1 point (0 children)

An IQ below 75 means being unable to function independently, borderline mental retardation. Folks in that state couldn't consume psychoactive drugs unless the drugs were selected and administered by a caregiver.

[–] 8 points (0 children)

Cannabis has been a big part of my college education and career. There was a particularly revealing moment for me when, after fiddling around trying to prove something basic for a homework assignment, a clean and complete solution came to me moments after inhaling cannabis. From then on I used cannabis liberally at school and work. This doesn't work the same for everyone, since cannabis for me is not really about getting high and never has been. It calms my anxiety and enables me to reconsider things I might otherwise not. It would be awkward to disgorge my academic and professional record at length, but as far as I can tell I have a fairly average resume as a mathematician who has mostly worked on software. It is hard to tell how common my experience is, but at least it is an example of cannabis as a cerebral lubricant rather than a hindrance.

This hints at a situation that is more complex than popular notions of complexity indicate. Cannabis worked for me because it does not in general make me "high", and its primary value seems to be in questioning assumptions and tearing concepts apart and then building them up again, rather than competing for the most complexity understood. People throw around concepts like "complexity" and "high", but what is really going on with math and minds is much more nuanced than a handful of independent linear metrics.
In comparison, alcohol makes me stupid; caffeine, asthma medicine, and speed variants make me rudderless and chaotic; and LSD can be interesting and helpful to reflect on, but much too intense to easily apply to math or work. Not sure about DMT or shrooms or E.

[–] 6 points (1 child)

Also known as methamatics.

[–] 0 points (0 children)

Yeah, I'm actually a math dealer...

[–] 3 points (0 children)

Not a mathematician; currently a third-year mechanical engineering student. I've done LSD a couple of times, X once, and 2C-E several times. Sometimes I think I fried my brain a bit; I tend to zone out and daydream a lot. But maybe that's unrelated to the drug use. Anyway, I'd say it made me think "differently", but that's just what I think. It's hard to say whether or not it had any real direct effect on me. My opinion on drugs is this: if you are curious about it and have the opportunity, then you shouldn't be scared to give it a try. Once won't kill you, and a bit of variety can be good. Just don't make it a habit.

Edit: I'd also like to say that after experimenting enough times, I outgrew it. I really don't have any desire to do drugs anymore... with perhaps the exception of mushrooms, since I never had the chance to try those. Anything new I might consider if it crosses my path, but otherwise I don't actively search or yearn for drugs. It's mostly pointless. I think most intelligent people realize this and quit after their experimentation phase.

[–] 3 points (0 children)

You might enjoy this book (http://en.wikipedia.org/wiki/What_the_Dormouse_Said).
Getting outside your normal frame of reference can be a lot to take in, because you (probably) have no idea just how different things can be (and I mean: all of a sudden). If you do go forward, above all be conscientious and be around loving people. Maybe you should see this, too: (http://blog.ted.com/2008/03/12/jill_bolte_tayl/)

[–] 1 point (0 children)

Being 'high' on amphetamines increases learning capacity for the duration, though the physical toll they take on the body is not trivial.

Hallucinogens are... a sort of means of restructuring all your sensory input, such that almost all aspects of your perception, from the emotional to the physical, become quite engaging, sort of like you were seeing them all for the first time. Sometimes in this state, aspects of certain thoughts, emotions, or systems which you simply pay no mind to while sober become their most obvious aspects. The distinction between the uses of a table and money feels less important than the fact that they are both square. All jokes begin to sound like archetypes, and the actual content becomes less funny than your ability to identify the archetypes. It's not that you aren't conscious of the differences, but rather that the emotional and cognitive impact of these things is shifted around. This sort of state can be really conducive to strange sorts of abstract problem solving. I have seen some really amazing obfuscated code come out of this sort of state.

Alternatively, synesthesia can be quite illuminating vis-à-vis problem solving. Some people have quite intense pattern-matching abilities in areas of their expertise, and these can translate into an almost uncountable number of ways of interpreting material, some of which are beneficial.
In much the way that working in different number bases makes certain sorts of math problems seem easier, interpreting line drawings of organic molecules as sound can have profound implications for how you go about manipulating them. I'd say you are right: being high generally makes it much harder to understand complex concepts, but sometimes it makes manipulating complex concepts you already understand fairly well more intuitive and engaging in a new way, and though more often than not it leads to short, simple connections or amusing problem-solving techniques, every once in a while it leads to a eureka moment.

[–] 2 points (0 children)

I agree that it is healthy to fundamentally question the structure of things. This allows for very useful mathematical flexibility of thought. Drugs certainly encourage this, but they certainly are not necessary.

[–] 2 points (6 children)

This one time I took some drugs (pot) and I was deathly afraid of getting on buses for a few months. Did not help with math either.

[–] 2 points (1 child)

I'm curious to know how you got from {inhales smoke} to {fairly peculiar anxiety}. I'm sure I can't be the only one who doesn't see the connection.

[–] 1 point (0 children)

Yes, there's clearly a backstory there.

[–] 1 point (3 children)

I've seen people get crazy paranoid from pot, but usually they are heavy long-term users. It's interesting that you developed a phobia for a while after one use.

[–]Algebraic Geometry 0 points (2 children)

Yeah, my pothead friends tend to get not so much paranoid as delusional.
They start to imagine crazy scenarios and believe they're possible without any concern for the reality of the situation. They're not running around beating bats away from their heads or anything, but ridiculous shit seems just plausible enough in their heads to be real. I think this is what causes paranoia in a lot of people.

[–] 1 point (0 children)

It helps your mind make connections between ideas, which may or may not be true connections in reality.

[–] 1 point (0 children)

What seems more likely is that they are examining a different subject than the one you focus on when imagining the scenario they're describing.

[–] 1 point (2 children)

I am not sure if it helps or not, but my friend who is now doing a math PhD once smoked a bowl of strong salvia; about two minutes after looking confused, he suddenly turned to us and said, "It's all numbers."

[–] 4 points (0 children)

He saw the Matrix.

[–] 0 points (0 children)

A lot of work with salvia helped me understand and appreciate math the way I do. Brilliant stuff.

[–] 3 points (1 child)

Yes. Not sure of the best way to describe why, but psychedelics have definitely helped me, and many people I know. They allow you to think about problems in a way you would never otherwise consider. You have no idea how many mathematicians, physicists, chemists, bio-engineers, structural engineers, architects, programmers, artists, and everything in between I have met while at Burning Man. And I have no doubt many of them were on something. In my experience, geeks throw the best parties and are the most fun to trip with.
Watson and Crick (at least Crick directly) have commented on their LSD use and its helpfulness in determining the double helix structure of DNA. Also, I remember reading that the computer engineer who invented the mouse was on LSD at the time. Other famous techies like Bill Gates and Steve Jobs have also admitted to LSD use. So... take that for what it's worth. ;)

[–] 1 point (0 children)

Whoo, UCLA math dept Burning Man camp!

[–] 2 points (4 children)

My field is computer science, not mathematics. However, my experience is that sometimes being high can make my code more creative, and it has helped me find creative ways to "beautify" existing code and make it more efficient. I specifically remember a project from the second semester of freshman year where I had implemented some code to solve a maze, but it was running extremely slowly and I would've been embarrassed to turn it in. I went out with my buddy to smoke a blunt, came back, sat down, and reimplemented my maze-solving logic by looking at a printed maze and thinking of how I would solve it in my head. But as the poster above said: learn sober, think high.

[–] 4 points (2 children)

Don't forget the Ballmer curve. I've got some code that I wrote while tipsy (beer) and sleepy, and I have no idea how it works. It baffles the mind.

[–] 2 points (1 child)

I find alcohol just kills programming ability. And when I mentioned this to my colleagues, they agreed.

[–] 0 points (0 children)

For me it just destroys focus and attention span.
And the ability to focus on a problem with 100% of your attention, for long enough to scare yourself when you look at a clock after 5 hours, is the heart of productive programming.

[–] 1 point (0 children)

Exactly. I used to go over all my assignments when a little bit stoned and see if I could do anything better. I usually found a more interesting way to implement the solution. Learn sober, think high.

[–] 2 points (0 children)

[–] 2 points (0 children)

I know a stoner who once claimed that he could easily visualize 4-manifolds. Got me beat.

[–] 2 points (0 children)

Being high would possibly make math harder to understand (I'm a mathematician and I smoke weed); however, it could also lead to insight... sometimes thinking slower is better than faster. I've also tripped on LSD and shrooms, shrooms more so, and I would say they have the potential to break down some blind spots in your life and thus free up some mental space. Definitely read up on them first, and check to see if you have a family history of mental illness; both are powerful drugs. If you decide to, and have a bad trip (not likely, though my first one was bad for the majority of it), just remember it's the effects of the drug, not you, and don't go into it feeling worried.

Edit: For the fuller picture, I have a great aunt who's schizophrenic and bipolar, and my first trip was very bad, to the point that I felt crazy; however, once the drugs wore off I was fine. In addition, most people have good trips, not bad ones.

[–] -1 points (5 children)

For some people, maybe. When my friend tried mescaline, he said he saw how it might be useful for computer programmers.
[–] 5 points (1 child)

Oh, shush. Even if you say you tried mescaline, there's no evidence other than your own hearsay.

[–] 1 point (0 children)

[–] 0 points (2 children)

Having done 2C-E myself (which Wikipedia says is a potential analog of mescaline), I would say that the effects are not that helpful. I recorded them here.

[–] 6 points (0 children)

2C-E and mescaline are different enough that I'd say you haven't experienced mescaline.

[–] 0 points (0 children)

"Analog" is more of a legal term than a pharmacological one.

[–] 1 point (0 children)

Meh. It's all in the mind either way.

[–]Mathematical Physics 1 point (2 children)

I think the number of my colleagues who smoke weed outnumbers those who don't.

[–] -1 points (1 child)

Do you work at a university? I honestly can't imagine any of my maths professors smoking (I'm not saying they don't).

[–]Mathematical Physics -2 points (0 children)

[–] 1 point (0 children)

Math stats postgrad here; I did pure math with statistics, programming, and physics at the undergraduate level, and have been smoking weed almost daily since my second year at university. I also dabbled in the harder psychedelics quite often, having tried LSD, shrooms, mescaline (actually ate the cactus!), salvia, coke, and methylone multiple times. How does this affect my mathematics?
I can only say that for me, drugs (and especially weed) have taken the code of the universe and turned it into the most beautiful and elegant way to perceive life that I could ever have imagined. They have definitely not impaired me, just expanded my horizons into different dimensions. Just sayin'...

[–] 1 point (0 children)

When I tried mushrooms, thinking about everything in terms of topology became very prevalent during the trip. However, I was absolutely shitty at finding my way along any path to a proof. So I don't think it would be worth trying again. That said, I have a friend who made his way to his thesis while regularly tripping, and though I never asked him, I'm pretty sure he'd say it helped him out. So all in all, I think it's just a case of different things affecting people in different ways, and most people probably wouldn't have an easier time doing mathematics on a cannabinoid or tryptamine.

[–] 0 points (5 children)

According to Gleick's biography, Richard Feynman tried acid. He didn't have anything to say about it, though.

[–] 5 points (3 children)

Feynman himself said he never tried any psychedelic drugs. Not because he was worried about the morality or legality of it (he said he really, really wanted to try it), but because he was just too scared that it might mess up his ability to think.

[–] 0 points (0 children)

I'm pretty sure he was invited to John C. Lilly's ketamine/LSD isolation chambers.

[–] 0 points (0 children)

In his book he wrote about trying ketamine.

[–] -1 points (0 children)

Just passing along what I read in Gleick.
IIRC, it was when Feynman was gravely ill with cancer, so that might have had something to do with it.

[–]Algebraic Geometry 0 points (0 children)

The wiki on Feynman mentions he said he didn't talk about it due to being embarrassed about risking his intellectual ability.

[–]Algebraic Geometry 0 points (0 children)

Considering that I did my undergraduate studies at the University of California, Santa Cruz, you can imagine that my perspective on the subject is somewhat different from most. That being said, I've found that alcohol is far more impairing to my ability to think, understand, and problem-solve than marijuana use (even with more abstract subjects such as commutative algebra, etc.). This is true both during and after use (short term; I have never noticed any long-term effects from either, but who knows?). I don't use psychedelics because I fear possible permanent negative repercussions from their use. Either way, I can't attribute any serious new discoveries or understanding directly to marijuana use. I find that it is simply not impairing, rather than helpful. I mostly just enjoy the ability to relax and not be over-analytic all the time while high.

I had heard that Feynman smoked a lot of marijuana while at Caltech, but I've been unable to find any proof regarding the subject. It is documented that he stopped drinking due to a fear of becoming an alcoholic, and that he once experimented with LSD. The best article I can find citing such things is http://www.cannabisculture.com/articles/2783.html but it does not provide references.

tl;dr: Based on personal experience, drug use should not be used as a tool to be a better mathematician, but it is not going to hurt your ability to do so.

[–] 2 points (4 children)

I hope someone is not looking for an excuse to consume!
Other than Erdős (as mentioned by another person), I have never heard of any mathematician who indicated that drug use was helpful to their career. After asking Auburn about his background (Auburn was a University of Chicago student who began by writing Second City-style comedy sketches), Osserman raised three issues during his conversation with Auburn: drugs, women in math, and mental illness and mathematicians. Together they read an excerpt from Proof, in which the character Hal describes mathematicians at conferences taking amphetamines to keep themselves in high gear. Auburn, who has a great deadpan sense of humor and got more laughs from the audience while reading Hal's lines than did the actor in the actual play, offered Erdős as an example of a mathematician thought to have used amphetamines, but agreed that he was taking dramatic liberties here. Of course, to mathematicians, this suggestion of common drug use seems laughable. [–] There's a glaring difference between amphetamines and psychedelics. They're both drugs in the same sense that double cheeseburgers and broccoli are both foods. [–] Every math-related epiphany that I have generally involves alcohol (and usually nicotine), but that's the extent of my drug use. [–] Your brain becomes a different creature on LSD. You just can't describe the experience; you really just need to have it. It can definitely make you think about a problem or theorem in a VERY different way. [–] I don't know about hallucinogens, but there is a hell of a lot of coke and Adderall going around.
[–] If you take a high dose of LSD / mushrooms / psychedelics etc., it probably won't help you study at all, because you'll be too distracted by random thoughts and visual effects (psychedelics don't make people see "purple dragons", just so we're clear; they make things that are already there look like they're shifting and morphing, colours change, etc.). A very low dose might possibly be beneficial in the sense that you'll be thinking about things in new and different ways. Give it a try, I guess; it's not going to be harmful, because psychedelics are notoriously safe. A low dose of mushrooms would be 1 g dried. A low dose of LSD would be half a tab (small square of blotter paper) of regular-strength LSD (there's no way to know for sure how strong your tabs are, but they are generally shitty the world over, lol; great tabs are rare). There are only a few warnings: don't take psychedelics if you're on anti-depressants or anti-psychotic medication. For one, they'll make the psychedelics not work or reduce their effects; secondly, if you have a mental illness, psychedelics are liable to make it worse. In the event that you take more than you intended to and have a full-blown trip, make goddamn sure you are in a safe, comfortable, quiet environment with only a few of your closest friends (if anyone) there. This is extremely fucking important, as a good location and mindset can set the vibe for the whole trip. Doing psychedelics at a party full of drunken idiots or at the mall etc. is a BAD FUCKING IDEA. You are liable to freak the fuck out if you're in a stressful or rowdy environment. Also don't try to cross any roads if you can help it. Also, look up nootropics: drugs that enhance mental function. [–] So, I've actually been thinking about this recently.
I've been thinking about taking harmaline/harmine (a mild psychedelic and MAOI) regularly and seeing how it affects my mathematical abilities. What should I try to quantify or test beforehand, exactly? Just to clarify, I've taken it before for weeks at a time and it has had no serious ill effects (to my knowledge). I did, however, feel much more introspective, and I'm thinking it might make me more concentrated on math. It was also before I seriously studied mathematics. [–][deleted] [–] I keep them separate, personally. I know a few people who have a beer while writing up homework, but that's about it. [–] I hear about LSD use influencing the discovery of the double helix of DNA a lot. If it can help with novel ideas in chemistry, why not other scientific subjects? I'm guessing it's less for trying to understand, and more that if you have the current theories completely clear in your head it can cause mind expansion, opening your mind to new theories that you may have thought of but couldn't formerly understand what it was that you were thinking. I can probably ramble all day. [–] Synthetic cannabinoids work just as well. [–] No, you're right. The math and physics students take hallucinogens. Art students smoke weed profusely while dabbling in the hard hallucinogens. The science students are mostly boring. That is the way of things. Hallucinogens will broaden your mind, allowing you to grasp concepts that were previously beyond you. At least that's been my experience with drugs and philosophy. Just apply moderation and you're safe.
Also, I'd talk to a physician first to see if you're predisposed to mental illness. Yes, that sounds scary as hell. It should scare you enough to be cautious and heed my words. You don't want to do drugs if you're predisposed to mental illness. So, what have we covered? - Talk to a physician about any potential for mental illness; family history also helps. Seriously, ask your parents about their experience with drugs. - Moderation. I can't stress this enough. Don't be one of those burnouts in the after-school specials. Be one of the cool ones, like Carl Sagan. - Prepare for blastoff! It's a helluva ride, created courtesy of your mind. Oh shit, a couple other things. Have a trip sitter, i.e., someone who cares about your well-being to keep an eye on things. Get into a good frame of mind; this is super important to the quality of your trip. I suggest focusing on whatever mathematicians do. Math, I guess. I meditate or go for walks or anything, really, to get my mind clear of all distractions. Haha, it's Saturday evening, so I'm stoned writing this. [–] Well, I'm trying to read this stoned and it's really not working. [–] "No, you're right. The math and physics students take hallucinogens. Art students smoke weed profusely while dabbling in the hard hallucinogens. The science students are mostly boring. That is the way of things." That's a bit of a black-and-white way to put things, isn't it? Questions like these really aren't just that simple. [–] Have an objective to work towards with your trip. Be well rested. Don't do stupid things. Use the drug as a tool, not as a toy. Moderation, definitely. Don't be afraid of shit. If you are afraid, don't worry, it'll go away eventually. Patience is a useful skill.
Have a friend that you can trust within contact range. Those of you who don't have friends you can trust should probably work on that before you start resorting to drugs. [–] That's what I was trying to get at last night. Thanks for the FTFY. [–] Amphetamines. That's all I really need to say. That said, nothing's going to make you smart if you aren't smart. I get the feeling that people are looking for some magical drug that will help them think better; that's not going to happen. You just have to work and be smart. [–][deleted] [–] Source? Or is this sarcasm that I'm missing? [–] Salvia divinorum... all I can say is that I really didn't get it until I explored the salvia realm; after that, math, physics, chemistry, all of it made sense. It was a lot of work, but brilliant and well worth it. Edit: So disappointed that I'm getting downvotes for this, in /r/math of all places. Guess the paranoia of "drugs" even exists here. I'll just leave this here. [–] You're getting downvotes because "...all of it made sense." is about as pompous as written communication gets. It has fuck-all to do with "the paranoia of 'drugs'." [–] Because of "...all of it made sense."? Wow, talk about taking the shit too literally. Didn't realize I was dealing with purists...
And yes, it all made sense: not every equation, not every concept, not every theory, but why we have them, what we use them for, and how we use them. It definitely all made sense to a level that, if so needed, I could delve into the systems and learn what I needed to learn. Brilliant stuff, psychedelics; they taught me that language is not rigid and absolute... "Downvotes because '...all of it made sense.'"... seriously, wow. [–] "All of it made sense" is a proper description of many dissociatives, including salvia. How is making an analogy pompous? We've all experienced the frisson of learning and understanding a new concept for the first time. Salvia mimics that response, and this guy is correct in describing the feeling that way. You feel stimulated and light-headed and get kind of a rush when you read a really good book or watch a good movie, or when lots of things start to make sense. STK demonstrates the very paranoia you're talking about: invalidating other people's descriptions because you don't understand them. If I meet a girl and fall in love and say MDMA makes me feel like that, why is his salvia description any less valid than that? [–] Thanks, I was really disappointed with STK's comment. Peace :) [–] Actually, I would say it's more because it's just off-topic. Sure, drugs can give you the feeling of everything making sense; in fact they usually do. But effectively you've just said, "I took drugs and felt like I understood everything, and then had a thought about math and science." This response is a little better because you've given some details, but it still doesn't seem to talk about anything mathematical, or any relation between math and drugs.
Thinking about the subjects and their uses: like, that's cool, but it's got nothing to do with mathematical intuition. "It definitely all made sense to a level that if so needed, I could delve into the systems and learn what I needed to learn." So, did you? What kinds of math/physics/chemistry did you learn as a result of your experiences? Did they just give you a feeling like you could, but you never did? Sort of like how you thought you might write an interesting comment, but then never did? And then you appear to congratulate yourself for doing a lot of work and being brilliant. What was a lot of work? All the math you learned subsequently? Or was it just doing the drugs that was a lot of work? Maybe salvia really helped you get some flashes of insight into cool mathematical problems. If so, you might tell us about them. But if all you're going to do is talk about a trip you had, you're going to get downvotes in /r/math. I almost want to upvote you for Bill Hicks, but again, off-topic.
http://math.stackexchange.com/questions/291595/deriving-an-equation/292171
# Deriving an equation

How do we derive the following:
$$-\sum_{x} \sum_{y} p(x,y) \left( \log_{2} p(x) + \log_{2} p(y) - \log_{2} p(x,y) \right)$$
$$= -\sum_{y} p(y) \log_{2} p(y)+\sum_{x} p(x) \sum_{y} p(y|x) \log_{2} p(y|x)$$
The first equation, expanded, is:
$$-\sum_{x} \sum_{y} p(x,y) \log_{2} p(x)-\sum_{x}\sum_{y} p(x,y) \log_{2} p(y)+\sum_{x}\sum_{y} p(x,y) \log_{2} p(x,y)$$
Do we just use the fact that $p(x,y) = p(y)p(x|y) = p(x)p(y|x)$ and substitute in?
- What do you get after doing the substitution? –  Alex R. Feb 1 '13 at 0:15

Answer (this is just a guess): We know that
$$-\sum_{x} \sum_{y} p(x,y) \log_{2} p(y) = -\sum_{y} \sum_{x} p(x,y) \log_{2} p(y) = -\sum_{y} p(y) \log_{2} p(y)$$
Also
$$\sum_{x} \sum_{y} p(x,y) \log_{2}p(x,y)- \sum_{x} \sum_{y} p(x,y) \log_{2} p(x) = \sum_{x} \sum_{y} p(x,y) \log_{2} \frac{p(x,y)}{p(x)} = \sum_{x} \sum_{y} p(y|x) p(x) \log_{2} p(y|x) = \sum_{x} p(x) \sum_{y} p(y|x) \log_{2} p(y|x)$$
Thus
$$-\sum_{x} \sum_{y} p(x,y) \left( \log_{2} p(x) + \log_{2} p(y) - \log_{2} p(x,y) \right) = -\sum_{y} p(y) \log_{2} p(y)+\sum_{x} p(x) \sum_{y} p(y|x) \log_{2} p(y|x)$$
- The first "We know that" step in your post seems wrong. –  Did Feb 1 '13 at 16:35
- @Did: What is wrong about it? –  proton Feb 1 '13 at 16:36
- Try some concrete cases. You seem to have confused $\log p(x,y)$ with $\log p(y)$. –  Did Feb 1 '13 at 16:45
- @Did: I was using the fact that $\sum_{x} p(x,y) = p(y)$. Then $\sum_{x} p(x,y) \log_{2} p(x,y) = p(y) \log_{2} p(y)$. –  proton Feb 1 '13 at 16:51
- No. As I already indicated, this would be true if one had $\log p(y)$ instead of $\log p(x,y)$ in the summation over $(x,y)$. –  Did Feb 1 '13 at 17:03
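The identity being derived is just $I(X;Y) = H(X) + H(Y) - H(X,Y) = H(Y) - H(Y|X)$, and it can be sanity-checked numerically. Here is a small Python sketch; the joint distribution below is made up purely for illustration:

```python
import math

# A small joint distribution p(x, y) over x in {0, 1}, y in {0, 1, 2}.
# (Hypothetical numbers, chosen only so every marginal is nonzero.)
p = {(0, 0): 0.1, (0, 1): 0.2, (0, 2): 0.1,
     (1, 0): 0.15, (1, 1): 0.05, (1, 2): 0.4}

# Marginals p(x) and p(y)
px = {x: sum(v for (a, _), v in p.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in p.items() if b == y) for y in (0, 1, 2)}

# Left-hand side: -sum p(x,y) * (log p(x) + log p(y) - log p(x,y))
lhs = -sum(v * (math.log2(px[x]) + math.log2(py[y]) - math.log2(v))
           for (x, y), v in p.items())

# Right-hand side: H(Y) - H(Y|X), written exactly as in the question,
# using p(y|x) = p(x,y) / p(x).
rhs = (-sum(v * math.log2(v) for v in py.values())
       + sum(v * math.log2(v / px[x]) for (x, y), v in p.items()))

print(lhs, rhs)  # both equal the mutual information I(X;Y)
assert abs(lhs - rhs) < 1e-12
```

Both sides collapse to the same three sums after expanding the logarithm, which is why the agreement holds for any joint distribution, not just this one.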
http://fille-facile.fr/boxing/mazdaspeed/german/4236269409674e8d404f69c0c344f8
m(h) = density of water times v(h), the volume with respect to height (see the figure). For example, in Figure 6.4.2 we see a sump crock. How much work is done in stretching the spring to a length of 30 cm? Find the work required to pump the water out of the top of the tank. Since $36 = 6^2$, the equation becomes $6^x = 6^{2(2-x)}$, so we must have $x = 2(2 - x)$, which has the solution $x = 4/3$. b) ln3 x 5. Answer. A triangular trough for cattle is 8 ft long. The work needed to pump the water over the pool can be obtained as $W = mgh$, where $g = 9.8$ meters per second squared and $h$ is the vertical displacement of the center of mass of the water. 4. Since the pool has height 3 m, we must lift that "layer" a distance $3 - x$ meters, which will require the corresponding work. Assume water weighs $w$ lbs per cubic foot. Find the work required to empty the tank by pumping the water out through the top. Solution: If the tank were cylindrical or a right prism, you could simply assume that all the mass was at the midpoint of the height of the tank, and you have to raise it in the gravitational field to get it out of the tank. Write an equation you could solve to compute the water level in the tank after 915600 J of work is done pumping the water out. A water trough is 6 feet long; its vertical cross section is an isosceles trapezoid with lower base 2 feet, upper base 3 feet, and altitude 2 feet. A tank full of water has the shape of a paraboloid of revolution (see figure). Here we have a large number of atoms of water that must be lifted different distances to get to the top of the tank.
The water in a large horse watering trough weighs 62.4 pounds per cubic foot. Finding the work done lifting a rope with a weight at the end. The length of the slice (parallel to the ...). An element of work is $352800\pi(3-x)\,dx$. "Work" (Calculus II): This is a problem from the chapter called "Work": A water tank in the form of an inverted right-circular cone is 29 ft across the top and 15 ft deep. 6.4 Work. Set up and evaluate the resulting integral to get the exact work required to pump out the desired amount of water. The water weighs 62.4 pounds per cubic foot. If the trough is full, how much work is done in pumping all of the water over the top of the trough? The sides are 4 meters high. $w$ is work, $f(x)$ is force as a function of distance; $x$ equals distance. 5. An open tank has the shape of a right circular cone (with the point at the bottom). How much work is done in stretching the spring from 20 cm to a length of 25 cm? The work to empty the tank would be $\frac{1}{2}mgh$, where $m$ is the total mass of water in the tank. There are many variations of this kind of problem, and each needs to be analyzed. Textbook solution for Calculus: Early Transcendental Functions, 7th Edition, Ron Larson, Chapter 7.5, Problem 19E.
2.5 WORK DONE BY THE PUMP. The role of a pump is to... Question (Calculus 2 work problem): Consider the tank that is generated by revolving $y = 2x^2$ for $0 \le x \le 1$ ft about the y-axis. A spring has a natural length of 0.2 meters. Example: An inverted conical tank with a height of 20 m and a base diameter of 25 m contains oil with density 800 kg/m³. If the surface of the water is 5 ft below the top of the tank, find the work done in pumping the water to the top of the tank. A conical vessel full of water is 16 feet across the top and 12 feet deep. 8.2 Work Done Emptying a Tank: The following tank problems involve pumping liquids from one height to another and determining the amount of work required to do it. If a 25 N force is required to keep it stretched to a length of 0.3 meters... And the depth of the water is 3.5 meters. It is filled with water weighing 62.4 lb/ft³ and is to be emptied by pumping the water to a spigot 3 feet above ground level. How much work is done in pumping water out over the top edge in order to empty (a) half of the tank and (b) all of the tank? How much work is required to pump all of the water over the side? We'll set up the problem three additional ways, and yet the value for the work will be the same each time. (Yes, my fingers were crossed as I typed that sentence.) BC 8.5 - Work - Both Lifting and Pumping Water. 3. Now that we know the force acting on the ith layer, we can use this to find the work required to pump the ith layer of water up and out of the spout. Evaluate the following integrals: (a) $\int_0^1 (x^3 + 2x^5 + 3x^{10})\,dx$. Solution: $\frac{1}{4} + 2\cdot\frac{1}{6} + 3\cdot\frac{1}{11}$. Math 252 Preview Handouts (I used to do this before 2020): 5-1 Trigonometric Limits Worksheet. Example 3.1: Find the work done in pumping all the water out of a filled sphere of radius 2 feet. 5-3 and 5-4 - Integration by Parts I and II Worksheets. Finding Work using Calculus - The Cable/Rope Problem - Part b. Work Pumping Water.
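The spring fragments above (natural length 0.2 m; a 25 N force holds it at 0.3 m) combine into a standard Hooke's-law computation. A minimal Python sketch, assuming the question asks for the work to stretch from natural length to 0.3 m:

```python
# Hooke's law sketch: a 25 N force holds a spring of natural length
# 0.2 m stretched to 0.3 m, so k = F / x = 25 / 0.1 = 250 N/m.
k = 25 / (0.3 - 0.2)          # spring constant, N/m
stretch = 0.3 - 0.2           # total stretch beyond natural length, m

# W = integral of k*x dx from 0 to 0.1, i.e. k * x^2 / 2 at full stretch
work = 0.5 * k * stretch**2   # joules; approximately 1.25 J
print(work)
```

The same two-step pattern (find k from the given force, then integrate k·x) covers every Hooke's-law fragment in this section.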
In certain geographic locations where the water table is high, residential homes with basements have a peculiar feature: in the basement, one finds a large hole in the floor, and in the hole, there is water. Here's the general question. WORK PROBLEMS - pumping. CALCULUS 2. NAME _____. Recall from yesterday: $W = F \cdot d$. Same idea, but now we're considering the amount of work done in pumping a liquid from one location to another. Find the indefinite integral and check the result by ... 3) A conical tank is resting on its apex. It must rise $(8-y)+3 = 11-y$ ft. Example 8.5.4: Suppose that a water tank is shaped like a right circular cone with the tip at the bottom, and has height 10 meters and radius 2 meters at the top. Calculus II Practice Problems 1: Answers. 1. b. Pump oil from inverted cone. Recall that water weighs 9810 N/m³. Right now I have my volume as $60(3-y)\,dy$. 2.5.3 Calculate the work done by a variable force acting along a line. A cone with height 12 ft and radius 4 ft, pointing downward, is filled with water to a depth of 9 ft. Find the work required to pump all the water out over the top. 8. Problem 3: An upright right-circular cylindrical tank of radius 5 ft and height 10 ft is filled with water. So, the work done is the sum of all those works lifting the slices: $\int_0^8 4908.734\,(11-y)\,dy = 274889$ ft-lb. (The water density is $\rho = 1000$ kg/m³.) An element of work is $dW = 9810(5-y)\,dV = 9810(5-y)\,5y\,dy$, and the total work is $W = \int_0^5 9810(5-y)\,5y\,dy = 9810 \cdot 5\left[\frac{5y^2}{2} - \frac{y^3}{3}\right]_0^5 = 9810 \cdot 5 \cdot \frac{125}{6}$. b. ...the distance as $(y+2)$.
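The cylindrical-tank integral above checks out numerically. Here is a Python sketch of the same computation (radius 5 ft, water weighing 62.5 lb/ft³, the slice at height y lifted 11 − y ft, water from y = 0 to y = 8):

```python
import math

# Each one-foot slice of the radius-5-ft cylinder weighs
# pi * 5^2 * 62.5 ≈ 4908.734 lb, and the slice at height y must rise
# (8 - y) + 3 = 11 - y ft to clear the spout.
weight_per_ft = math.pi * 5**2 * 62.5   # lb of water per foot of depth

# Midpoint Riemann sum approximating W = integral_0^8 weight_per_ft*(11 - y) dy
n = 10_000
dy = 8 / n
work = math.fsum(weight_per_ft * (11 - (i + 0.5) * dy) * dy for i in range(n))

# Closed form: weight_per_ft * [11y - y^2/2] from 0 to 8 = weight_per_ft * 56
print(round(work))  # 274889, matching the ft-lb value quoted above
assert abs(work - weight_per_ft * 56) < 1e-6
```

The midpoint rule is exact here because the integrand is linear in y, so the Riemann sum agrees with the closed form to rounding error.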
17 Calculus II Honors Project #1 - Work and Force. When you're pumping water out of a tank, or storing liquid in a tank, the density of the liquid makes a difference. Work is the product of a force and the distance over which it is applied. 1944 ft-lb. Pumping Problems: 9. A circular swimming pool has a diameter of 10 meters. When a constant force $F$ is applied to move an object a distance $d$, the amount of work performed is $W = Fd$. The SI unit of force is the newton (kg·m/s²), and the SI unit of distance is the meter (m). Find the average value of the function over the interval $[0,b]$. The depth function, then, is ... Using similar triangles, we see that ... (step 2). (b) Calculate the work done in lifting the sand to the height of 18 ft from the ground. Example. Note on the Order of Sections. We have step-by-step solutions for your textbooks written by Bartleby experts! A pyramid-shaped tank of height 4 meters is pointed upward, with a square base of side length 4 meters, and is completely filled with salt water. Remember, the density of water is 1,000 kg/m³. Assume that the water is pumped out of the top of the sphere. Activity 2: In each of the following problems, determine the total work required to accomplish the described task. I'll cut the cylinder of water up into circular slices; a typical slice is shown on the right. Midterm 2 Practice, Math 142. 4. (10 points) The following problems concern the solid of revolution generated by rotating about a given axis the region $R$, which lies between the $x$-axis and the curve $y = x - x^2$. One way is to pump the water through a hose attached to a valve in the bottom of the tank. Mass is changing with respect to height in this case; as the water is pumped up, it spills out over the top.
Well, work is the integration of force over a distance $x$. A common level-2 calculus question is to ask for the work required to move a liquid a certain distance, or into a tank of some common geometric shape. b. (4 m, 3 m, 6 m) First we note that force is equal to weight density times volume. MATH 22: Calculus II Exam 3 - Practice Problems, Spring Semester 2007. 1. This calculus video tutorial explains how to solve work problems. Determine the amount of work needed to pump all of the water to the top of the tank. BC 5/6-5 AP Fundamental Theorem of Calculus problems. (The water weighs 62.4 pounds per cubic foot.) If the radius of the outer ripple is increasing at the rate of 5 cm/sec, how fast is the area of the disturbed water increasing when the outermost ripple has a radius of 12 cm? A circular swimming pool has a diameter of 10 meters. Work Problem Procedure: Hooke's Law Spring Problem, Pumping Problem. 16. Find the work needed to pump the water to a point 1 m above the centre of the sphere. Textbook solution for Calculus: Early Transcendental Functions, 7th Edition, Ron Larson, Chapter 7.5, Problem 24E. The work done in emptying the tank by pumping the water over the top edge, where the tank is 2 feet across the top and 6 feet high. Unreasonable Results: Squids have been reported to jump from the ocean and travel 30.0 m (measured horizontally) before re-entering the water. In this video, I find the work required to lift only HALF of the rope to the top of the building. 1 Answer. How much work is required to pump all of the water over the side? Example 4: A tank in the shape of an inverted cone has a height of 15 meters and a base radius of 4 meters and is filled with water to a depth of 12 meters.
Example 9.5.4: Suppose that a water tank is shaped like a right circular cone with the tip at the bottom, and has height 10 meters and radius 2 meters at the top. BC 5/6-6 FTC Applied to Particles in Motion, Vel/Accl Vectors, and Parametric Equations. Finding v(h) is where you have to be clever. Find the work done in pumping all the water out of a filled sphere of radius 2 feet. Work is the scientific term used to describe the action of a force which moves an object. A water trough has a semicircular cross section with a radius of $0.25 \mathrm{m}$ and a length of $3 \mathrm{m}$ (see figure). 2. Volume: triangle and semicircle cross sections. 6.5.5 Find the hydrostatic force against a submerged vertical plate. Volume: disc method (revolving around x- and y-axes). Colloquially, work is the amount of effort put into something. Techniques of Integration. How much work is done in pumping a full... WORK PROBLEMS - pumping. CALCULUS 2. NAME _____. Recall from yesterday: $W = F \cdot d$. Same idea, but now we're considering the amount of work done in pumping a liquid from one location to another. 8. Evaluate a. SOLVED PROBLEMS IN INTEGRAL CALCULUS 1. ...integral for the work. Use the work done in pumping water out over the top edge in order to empty half of the tank. I'm a little confused about how to set this problem up correctly. How much work will it take to pump the gasoline to the top? November 2010. $\pi \cdot 5^2 \cdot dy \cdot 62.5 = 4908.734\,dy$ lb. c. The problem is asking us about a particular instant, when the water is halfway down the cone, and so when the depth is ... cm. Integral calculus is the mirror image of differential calculus. Take the limit as $n \to \infty$. The picture on the left shows a side view of the sphere.
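For the conical tank of Example 9.5.4 (tip down, height 10 m, top radius 2 m, full of water), the slicing method can be checked numerically. A Python sketch using the similar-triangles radius r(y) = (2/10)·y and lift distance 10 − y, with the weight density 9810 N/m³ quoted above:

```python
import math

# Slicing the tip-down cone: by similar triangles, the slice at height y
# (measured from the tip) has radius r(y) = (2/10) * y, and it must be
# lifted (10 - y) metres to reach the top of the tank.
rho_g = 9810  # weight density of water, N/m^3

def slice_work(y):
    # work per metre of depth at height y: rho*g * area * lift distance
    return rho_g * math.pi * (0.2 * y) ** 2 * (10 - y)

# Midpoint Riemann sum for W = integral_0^10 slice_work(y) dy
n = 100_000
dy = 10 / n
work = math.fsum(slice_work((i + 0.5) * dy) * dy for i in range(n))

# Closed form: (9810*pi/25) * integral_0^10 (10y^2 - y^3) dy = 327000*pi J
print(work)
assert abs(work - 327000 * math.pi) < 1.0
```

The closed form follows from ∫₀¹⁰ (10y² − y³) dy = 2500/3, so W = (9810π/25)(2500/3) = 327000π ≈ 1.03 × 10⁶ J.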
I'll cut the cylinder of water up into circular slices; a typical slice is shown on the right. Section 6.4 Work. VIDEO ANSWER: For this problem, sum the work required to lift all the layers. Recall that water weighs 9810 N/m³. 6.5.2 Determine the mass of a two-dimensional circular object from its radial density function. Work done in pumping water out over the top edge in order to empty half of the tank. Figure 1: A tank filled with water (10 ft, 6 ft, 12 ft). Find the work required to pump all the water out of a vat as illustrated below (given that the weight density of water is 9810 N/m³) if the water is up to 2 m deep. The water exerts a force of 748.8 lb on the end of the trough (step 4). Work. How much work is done in emptying the tank by pumping the water... Arc Length Formula; Area of a Bounded Region. If so, graph your answer. Calculus way: consider a thin slice of water of thickness dy. Assume that the water is pumped out of the top of the sphere. What is work? Solve for x: a) $6^x = 36^{2-x}$. Answer. $352800\pi(3-x)\,dx$ joules of work, so work = $g \cdot \int m(h)\,dh$. 2.5.4 Calculate the work done in pumping a liquid from one height to another. Textbook solution for Calculus: Early Transcendentals, 2nd Edition, William L. Briggs, Chapter 6.7, Problem 29E. As usual, the thickness of the representative slice is $\Delta y$. (a) How much work is done by pumping the water to the top of the tank? 5-10 Application to Economics - Income Stream - Worksheet. Please show steps, thanks. Evaluate a. Assume salt water weighs 10000 newtons per cubic meter. 6.5.3 Calculate the work done by a variable force acting along a line. Example Problems. Work on homework AFTER CLASS: Homework Day 17. Now lift the slice at distance y from the bottom of the tank. How much work is done in pumping water out over the top edge in order to empty all of the tank?
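The slice element 352800π(3 − x) dx above can be integrated directly. A short Python sketch; note that the pool radius of 6 m and full depth of 3 m are inferences here (chosen because 1000 · 9.8 · 36 = 352800 and the text lifts each layer a distance 3 − x), not values stated in one place:

```python
import math

# Assumed setup: rho*g*A = 1000 * 9.8 * (pi * 6^2) = 352800*pi N per metre
# of depth, with the layer at depth-coordinate x lifted (3 - x) metres.
rho_g_area = 352800 * math.pi   # newtons of water per metre of depth

# Total work to lift every layer x in [0, 3] up to the rim at height 3:
# W = integral_0^3 352800*pi*(3 - x) dx = 352800*pi * (3*3 - 3^2/2)
work = rho_g_area * (3 * 3 - 3**2 / 2)
print(work)  # 1587600*pi, roughly 4.99e6 joules
```

Whatever the actual tank dimensions were on the mangled source page, the pattern is the same: (weight per unit depth) times the integral of the lift distance.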
(You may use the approximation g 10 m=s2 for gravity. 6.5 Applications to Physics and Engineering (cont) Pumping Water Problem (cont) Spring Problem Pumping Problem: 17. First week only \$4.99! Bundle: Calculus of a Single Variable, 9th + Mathematics CourseMate with eBook 2Semester Printed Access Card (9th Edition) Edit edition Solutions for Chapter 7.5 Problem 19E: Pumping Water A cylindrical water tank 4 meters high with a radius of 2 meters is buried so that the top of the tank is 1 meter below ground level (see figure). Find the work done in compressing it from 6 in to 4 1/2 in. And, the depth of the water is 3.5 meters. Find the work done in pumping the water to Write an equation you could solve to compute the water level in the tank after 915600 J of work is done pumping the water out. If the length is doubled, is the required work doubled? Examples of integral calculus problems include those of finding the following quantities: The amount of water pumped by a pump with a set power input but varying conditions of pumping losses and pressure; The amount of money accumulated by a business under varying business conditions The radius of the cone at ground level is 2 ft. figure 1) There are two ways to go about it. If the height of the water is 7 7 7 m, then the top 3 3 3 m of the tank is empty. Applications of Integration. Area: curves that intersect at more than two points. This expression is an estimate of the work required to pump out the desired amount of water, and it is in the form of a Riemann sum. 5-6 Work Day 1 - Lifting Problems - Worksheet . When you calculate things incorrectly, disasters can happen. The tank is filled to a level half its height with a fluid weighing 30 lb/ft^3. : Applications of integrals. View WorkPumpFluid.docx from CEE 06 at University of Mindanao - Main Campus (Matina, Davao City). Section Details: Using integration to calculate work. Well always start by drawing a diagram first. 
Answer (1 of 3): Take volume of container, The volume of a container is generally understood to be the capacity of the container; i.e., the amount of fluid (gas or liquid) that the container could hold, rather than the amount of space the container 6.5 Applications to Physics and Engineering (cont) Pumping Problem (cont) Pressure Problems (3) 18. We have step-by-step solutions for your textbooks written by Bartleby experts! Ill cut the cylinder of water up into circular slices; a typical slice is shown on the right. then density times gravity acceleration, in feet (62.5*32) In parts (b) and (c), a key step is to find a formula for a function that describes the curve that forms the side boundary of the tank. Problem 3 : A stone thrown into still water causes a series of concentric ripples. a. Well use this value toward the end of our solution. The ends of the trough are isosceles triangles with a base of length 10 feet, equal length sides 13 feet, height 12 feet, with the base up as shown in the picture. Now, the weight density of water is lb/ft 3 (step 3), so applying (Figure), we obtain. We are given a cylindrical tank that is full of water were asked to true and how much work it would take to remove all that water. (c) Just set up the integral for the work done to pump a full tank 3 feet above the top of the tank. The tank is filled with water to a depth of 9 inches. Find the work required to pump all the water to a point 2 feet above the top of the vessel. To calculate the work done in pumping the water out, we rst notethat the 17calculus is intended to help you learn calculus so that you can work problems on your own, do well in your course on your own and, later on, use calculus in your discipline on your own.
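The cone example (tip at the bottom, height 10 m, top radius 2 m, all water pumped to the top) can be checked symbolically. This is a minimal sketch using sympy, with the slice radius r(y) = y/5 obtained from similar triangles and the weight density 9810 N/m³ used above:

```python
import sympy as sp

y = sp.symbols('y')
rho_g = 9810          # weight density of water, N/m^3
H, R = 10, 2          # cone height (m) and top radius (m), tip at the bottom
r = R * y / H         # slice radius at height y, by similar triangles

# Work = integral of (weight of a slice) * (distance lifted to the top)
W = sp.integrate(rho_g * sp.pi * r**2 * (H - y), (y, 0, H))
print(W, float(W))    # 327000*pi, about 1.03e6 J
```

The same integrand works for the sphere and cylinder examples; only r(y) and the lift distance change.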
09/24/18 — A car travels 280 miles in the same time that a motorcycle travels 240 miles. If the car's speed is 10 mph more than the motorcycle's, find the speed of each vehicle.
09/24/18 — Working together, it takes 5 hours to paint a room; Linda alone takes 7 hours. How long would it take Martin to paint the room alone?
09/22/18 — How many different (distinct) 13-digit passwords can be created using letters and digits, with repetition?
09/10/18 — A rectangle has an area of 24 m² and a width of 4 m.
09/06/18 — How do you convert 4/5, 3/8 and 1/4? (sixth-grade homework)
08/28/18 — John reads 48 pages in 1 3/4 hours. At that rate, how long would it take him to read 120 pages?
06/24/18 — The perimeter of a rectangular field is 380 yd; the length is 90 yd longer than the width. Find the dimensions.
06/04/18 — Let θ be an angle in quadrant III such that cos θ = −5/13. Find the exact values of csc θ and cot θ.
05/24/18 — A truck weighs 5400 pounds; an open-wheel race car weighs 1/4 as much. How much does the race car weigh?
05/15/18 — An ice cream factory makes 350 quarts of ice cream in 10 hours. How many quarts could be made in 48 hours, and what is the rate per day?
04/30/18 — Identify the constant difference for a hyperbola with foci (0, −5) and (0, 5) and a point on the hyperbola (0, 3).
04/09/18 — A filter in the shape of a cone holds 66 cubic centimeters of liquid and has a height of 6 cm. What is the diameter of the filter?
03/30/18 — Six oranges are cut into quarters. How many children can each have three of the pieces?
03/21/18 — 1400 people traveled through a specific train station yesterday; today the number decreased by 40%. How many people traveled today?
03/19/18 — An apple tree converts ________ into chemical energy: (A) mechanical energy from the wind, (B) water energy from the rain, (C) light energy from the sun.
03/12/18 — The startup division made 80% of 1/2 of the established division, and its growth was 1/3 greater than the established division's. If the divisions made $280,000 combined, how much did each make?
02/27/18 — Which equation can be used to determine the volume of the cylinder, V, in cubic centimeters?
02/18/18 — How much of an alloy that is 20% copper should be mixed with 600 ounces of an alloy that is 90% copper in order to get an alloy that is 60% copper?
02/15/18 — Charlie has 60 nickels and dimes whose value is $5. How many of each kind of coin does he have?
02/08/18 — Write an expression for the difference of 5 and twice Tom's age.
01/25/18 — Write the equation for a parabola that has x-intercepts (−1.6, 0) and (−3.2, 0) and y-intercept (0, 25.6).
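As an illustration of how the first (09/24/18) motion question is typically set up — a sketch, not an answer posted on the site — equate the two equal travel times and solve:

```python
import sympy as sp

v = sp.symbols('v', positive=True)   # car's speed in mph (hypothetical symbol name)

# The car covers 280 mi in the same time the motorcycle covers 240 mi,
# and the car is 10 mph faster: 280/v == 240/(v - 10)
car = sp.solve(sp.Eq(280 / v, 240 / (v - 10)), v)[0]
print(car, car - 10)                 # car 70 mph, motorcycle 60 mph
```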
##### Authors: Genet Dadi1*, Mengistu Urge2 and Tsegay Teklebrhan2 Affiliation(s): 1Oromia Agricultural Research Institute, Sinana Agricultural Research Center, Bale-Robe, Ethiopia 2Department of Animal and Range Science, Haramaya University, P.O. Box 138, Dire Dawa, Ethiopia Dates: Received: 09 June, 2017; Accepted: 26 September, 2017; Published: 27 September, 2017 *Corresponding author: Genet Dadi, Oromia Agricultural Research Institute, Sinana Agricultural Research Center, Bale-Robe, Ethiopia, E-mail: @; Citation: Dadi G, Urge M, Teklebrhan T (2017) Assessment of Commercial Beef Cattle Fattening Practices and performance in East Shoa Zone. Int J Agric Sc Food Technol 3(3): 067-076. DOI: 10.17352/2455-815X.000025 © 2017 Dadi G, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Keywords: Beef cattle; Commercial; Performance The study was conducted in the Adama, Lome and Adami-Tulu districts of East Shoa Zone of Oromia Regional State to assess commercial fattening practices and to evaluate the performance of commercial fattening. A multi-stage sampling procedure was employed for the survey study. The questionnaire-based formal survey was conducted using semi-structured questionnaires, interviewing a total of 45 feedlot operators purposively selected from the three districts; both quantitative and qualitative data on beef cattle fattening systems were obtained. For the monitoring work, nine farms among those used for the interviews were selected for the purpose of monitoring the animals' performance during the fattening period. Eight animals were randomly selected from each farm based on age, initial body weight and body condition.
The study revealed that the major criteria for selecting animals to purchase for feedlot fattening include breed type, physical appearance and/or frame size, age, health status, initial price, body condition, coat color and horn size, and the major breed used was the Boran breed, aged between 3 and 6 years. The feed resources used by commercial cattle fattening operations include crop residues and agro-industrial by-product feeds. The average total weight gain and average daily weight gain recorded over 90 days of fattening were significantly higher (P<0.05) for large (97.7±1.16 kg; 1.09±0.01 kg) and medium (97.58±1.16 kg; 1.09±0.01 kg) than for small scale (91.04±1.17 kg; 1.01±0.01 kg) commercial fattening. It is concluded that dependence on only the Borana breed may affect the efficiency of the fattening industry by creating a supply shortage, and it also depletes the breed population. Moreover, limited feed availability, the high price of supplementary feed, market fluctuation of fattened cattle, disease outbreaks and water shortage are the major challenges faced by the sector and need to be addressed in the study area. ### Introduction Livestock and meat products have been among the fastest growing components of the global agriculture and food industry. The livestock sector has contributed a considerable portion to the economy of Ethiopia, and it remains promising for the economic development of the country. The total cattle population of the country is estimated at about 56.71 million head [1]. The sector also plays a significant role in the national economy, contributing about 47% of the agricultural GDP and 15% of the total GDP, and generating an estimated 31% of total agricultural employment [2]. Live animals and livestock products such as meat, hides and skins are the third major export, accounting for 11% of export revenue (Hurrissa, 2009).
Among the various livestock production activities, beef cattle fattening is one of the potential sources of employment, a means to increase the volume and quality of meat produced, and a way to alleviate poverty in the country [3-5]. Cattle fattening has gained prominence as an important business of the livestock industry in Ethiopia, making use of cheap, plentiful farm by-products [6]. Moreover, fattening offers an opportunity to exploit the vast cattle resource and to meet the export market in the region as well as Ethiopia's domestic market [7]. The fattening activity can be undertaken at any level of the livestock value chain, ranging from smallholder farmers rearing livestock for multiple functions [8] and small scale commercial fattening to large scale commercial fattening (Tomy, 2003) [4,9,10], in which operators export live animals and supply fattened animals directly to abattoirs producing meat for domestic and international markets. In recent years, a market-oriented beef production system has been gradually emerging, concurrent with the increase in both domestic and export demand for cattle meat in Ethiopia. A market-oriented agricultural production system requires intensification of management or production systems (Azage et al., 2011). The Government is also trying to expand the sector by motivating producers in order to meet the growing demand. As a result, meat processing factories and export abattoirs are increasing in number, and export earnings from the sector are rising (MORD, 2008) [11]. However, inadequate supplies of export-quality livestock in terms of the required age and body weight for slaughter, lack of information on efficient utilization of feed resources for quick feedlot finishing, and the biological response of indigenous cattle to feedlot fattening remain bottlenecks to increasing domestic per capita meat consumption and export [10,12]. As in all commercial enterprises, the main purpose of cattle fattening enterprises is to make a profit.
From this point of view, evaluating the performance of cattle fattening enterprises is important for the continuation of efficient businesses. Performance in general is a concept that quantitatively or qualitatively measures the gains at the end of a purposeful and scheduled activity; in other words, performance is the level of achievement relative to the planned output [13]. Currently, the government encourages the emergence of commercial fattening practices and supports establishment of the sector in investment form in the study area. However, there is limited information about the utilization of available feed resources, the husbandry of commercial feedlots, and cattle fattening performance under commercial fattening practice. Therefore, it is very important to investigate the overall activities and performance of the sector in order to design appropriate technologies for the improvement of beef production. The current study aims to assess how commercial feedlot cattle fattening systems can provide a sustainable and adequate live animal supply that meets the demand of domestic consumption and export markets, with the following specific objectives: to assess commercial beef cattle fattening practice in East Shoa Zone; and to evaluate the performance of beef cattle fattening under commercial feeding. ### Material and Methods ##### Description of the study area The study was conducted in Oromia Regional State, East Shoa Zone, in the Adama, Lome and Adami-Tulu-Jido-Kombolcha districts. Adama district lies at an altitude of 1400–2700 m a.s.l.; it receives unimodal rainfall of 600–1200 mm annually, and the average temperature varies from 17 °C to 34 °C (ESZARDO, 2015). Adami-Tulu-Jido-Kombolcha district lies at an altitude of 1650 m a.s.l.; it receives annual rainfall of 500–760 mm, and its average ambient temperature varies from 12.7 °C to 27 °C (ESZARDO, 2015). Lome district lies at an altitude of 1604–2364 m a.s.l.
It receives annual rainfall of 1065 mm, and the average temperature varies from 18 °C to 28.7 °C (ESZARDO, 2015) (Figure 1: Study area). ##### Sampling techniques A multi-stage sampling procedure was employed for the survey study. Three districts were selected based on their potential for commercial cattle fattening practice. In total, forty-five feedlots were purposively selected based on their potential, size, accessibility and the willingness of the owners to cooperate with the study. Accordingly, the representative feedlots selected from each district were twenty-five from Adama, seventeen from Lome and three from Adami-Tulu-Jido-Kombolcha. The commercial farms were designated as small, medium and large based on the number of fattening animals held, benchmarking classification criteria employed by earlier researchers (Tomy, 2003) [4,9,10]. In the present study, small scale commercial fattening represents farms holding fewer than 200 animals at a time, medium scale 201–500, and large scale more than 500 head of animals. The selected commercial feedlots were used for the interviews and for obtaining other information. Among the feedlots used for the interviews, nine (three from each production scale) were purposely selected for monitoring of feedlot performance under the feeding management followed by the farm throughout the whole fattening period. Based on initial body weight and age, eight fattening animals were selected from each of the feedlots identified for the monitoring study. During monitoring, the animals' body condition was scored and body weight was estimated using a heart girth meter. ##### Data collected Survey data were collected from both primary and secondary sources. The primary data were collected from producers (commercial feedlots) using a semi-structured questionnaire.
Secondary data, including both published and unpublished documents available from various sources, were collected and used to consolidate the information generated, and personal observations were made on different aspects of animal management during farm visits and monitoring to get an overview of the general management activities practiced by the different commercial feedlots. ##### Survey data collection Semi-structured questionnaires were used to collect data on variables such as fattening cattle breed types, sex of animals, type of animals (castrated or intact), age of animals preferred by fatteners, source of animals, number of animals engaged in the fattening operation per cycle, feed types and sources of feed used, feeding strategy (frequency of feeding, and order of feeding if different types of feed are offered separately), group or individual based feeding or selected group feeding, feed mixing method used, watering system, fattening duration and cycle, selection and purchasing criteria for fattening animals, health care for the fattening animals, and the major constraints/challenges of the fattening practice.
##### Monitoring data A total of nine feedlots were selected purposively and categorized into three scales of production based on number of animals (Tomy, 2003) [4,9,10]: small, medium and large scale commercial feedlots. Of these, three large, three medium and three small scale commercial feedlots were selected based on accessibility and the willingness of the owners. From each feedlot, eight animals were selected randomly. The age of the cattle used for fattening was estimated by dentition (Vatta et al., 2006), and initial body weights were estimated using a heart girth meter and the formula developed by Niioka and Shiratori (2009) for the Borena breed. Heart girth was measured and recorded at the smallest diameter immediately behind the hump and the forelegs of an animal standing erect. In total, seventy-two (72) animals from the nine feedlots, eight from each farm, were selected for the monitoring and evaluation study. The initial body weight of each animal involved in the monitoring study was taken after the adaptation period, i.e. fifteen days after the start of concentrate feeding, using a heart girth meter. The weights of the animals were then taken at 30, 60 and 90 days of the fattening period. Heart girth was measured after overnight fasting and/or early in the morning before feeding. The average daily weight gain was calculated as the difference between final live weight and initial live weight divided by the number of feeding days in each period, and body condition was scored by visual observation at different stages of the fattening period. For body condition scoring, the same eight animals selected from each feedlot were scored on a monthly basis until the end of the fattening period, using the methodology for Zebu cattle with condition scores ranging from 1 to 9.
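The average daily gain computation described above is a single division; a minimal sketch follows (the example weights are hypothetical, chosen only to match the roughly 1.09 kg/day gains reported in the results):

```python
def average_daily_gain(initial_kg, final_kg, days):
    """Average daily weight gain: (final - initial) live weight over feeding days."""
    return (final_kg - initial_kg) / days

# A hypothetical animal gaining 97.7 kg over the 90-day fattening period
print(round(average_daily_gain(300.0, 397.7, 90), 2))  # 1.09
```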
The body condition scores are established by subdividing the three main body conditions, viz. fat [F], medium [M] and lean [L], according to Nicholson and Butterworth (1986). Anatomical parts such as the brisket and hump, transverse processes, lumbar vertebrae, hips, ribs, hooks and pins were observed during the scoring process. ##### Statistical analysis The Statistical Package for Social Sciences (SPSS 20.0) was used to analyze all survey data, and the General Linear Model (GLM) procedure of SAS (SAS, 2009) was used for the monitoring data. Mean comparisons were done using the Least Significant Difference (LSD). The model employed was: Yij = μ + Ai + Bj + (A*B)ij + eij (Model 1), where Yij = response variables (body weight gain and condition score); μ = overall mean; Ai = ith feedlot size effect (large, medium and small scale); Bj = jth age effect; (A*B)ij = interaction effect; eij = residual effect. An index was calculated to provide an overall ranking of beef cattle fattening constraints, based on the method used by Musa et al. [14]: Index = (Rn*C1 + Rn-1*C2 + … + R1*Cn) for an individual constraint, divided by the sum of (Rn*C1 + Rn-1*C2 + … + R1*Cn) over all constraints (a–g), where Rn = value of the least rank of a constraint (if the least rank is 7th, then Rn = 7, Rn-1 = 6, …, R1 = 1) and Cn = the count at the least-ranked level (in the above example, Cn = the count of 7th ranks and C1 = the count of 1st ranks). Similar formulas were used to calculate ranking indices for major feed resources, criteria for selecting beef cattle for fattening, and preference of veterinary service.
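The rank-index formula above can be sketched in a few lines; the vote counts below are hypothetical examples, not figures from the study:

```python
def weighted_sum(counts):
    """Rn*C1 + Rn-1*C2 + ... + R1*Cn for one constraint,
    where counts[0] is the number of 1st-rank responses and Rn = len(counts)."""
    n = len(counts)
    return sum((n - i) * c for i, c in enumerate(counts))

def rank_index(counts_by_constraint):
    """Index of each constraint = its weighted sum / total weighted sum over all."""
    sums = {k: weighted_sum(v) for k, v in counts_by_constraint.items()}
    total = sum(sums.values())
    return {k: s / total for k, s in sums.items()}

# Hypothetical counts of 1st-, 2nd- and 3rd-rank responses per constraint
votes = {"feed scarcity":  [20, 15, 10],
         "marketing":      [15, 20, 10],
         "water scarcity": [10, 10, 25]}
print(rank_index(votes))   # indices sum to 1; "feed scarcity" ranks highest here
```

The indices are proportions, so they sum to 1 and can be compared directly across constraints.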
### Results ##### Source, experience and major selection criteria for purchasing of beef cattle The commercial feedlot operators purchased animals for fattening from different open local markets in Borena zone. According to all the commercial feedlot operators, young bulls of the Borena breed aged 3-6 years were used for the fattening operation in the study area. The majority of the feedlot operators have about 1 to 10 years of experience in the business (Table 1), and a significant number have over 10 years. Table 1: Experience of feedlot operators in cattle fattening activities. The majority of beef cattle fattening operators selected the animals purchased for fattening by considering breed type, physical appearance and/or frame size, age, health and initial price as the top priorities (Table 2). Moreover, body condition, coat color, horn size and skin condition (stretchable upon pulling, and smoothness) were also used as selection criteria for purchasing beef animals. Table 2: Selection criteria of fattening cattle as reported by feedlot operators in the study area. ##### Season and length of fattening period The majority of the feedlot operators considered January to March and October to December to be the most suitable seasons to start a fattening operation (Table 3). Table 3: Suitable season to fatten commercial beef cattle. The length of the fattening period across farm scales was significantly (p<0.05) lower in medium followed by small scale production, whereas the frequency of cattle fattening per year and the number of animals fattened per cycle were significantly (p<0.001) higher for large, followed by medium and small scale commercial cattle fattening, respectively (Table 4). Table 4: Length of fattening period, frequency of fattening per year and number of animals fattened per cycle.
##### Major feed resources used by commercial feedlots Large scale commercial cattle fatteners preferred noug seed cake, cottonseed cake and faba bean bran as protein sources, and wheat bran and maize as energy sources (Table 5). The medium scale operators preferred cottonseed cake and lentil bran as protein sources, with wheat bran, ground sorghum and ground maize as energy sources, whereas small scale commercial cattle fatteners preferred cottonseed cake and faba bean bran as protein feeds and ground maize and wheat bran as energy sources. Table 5: Major feed resources used by commercial feedlots. ##### Feeding system of beef cattle fattening All the commercial feedlot operators use a confined feeding system. Almost all offered roughage first in the feeding trough, followed by concentrate, twice per day. The amount of feed offered per animal per day is presented in Table 6. The amounts of the concentrate mix and roughage offered per animal per day were significantly higher (p<0.001) in large scale farms, followed by medium and small scale farms. Table 6: Amount of feed offered to fattening beef cattle per day by district and scale of production. ##### Housing system The housing systems of the commercial feedlots in the study area are presented in Table 7. An open overhead shelter with enclosures was the predominant type of housing across all the commercial cattle fattening systems in the study area, while a few of the medium and small scale operations used no shed, only an open enclosure. Table 7: Housing system of fattening animals in the study area. ##### Source of water The sources of water used by feedlot operators in the study area are presented in Table 8. The majority (88.9%) of feedlot operators use piped water, and animals drink ad libitum, but a few use river and ground water sources for the fattening animals.
Table 8: Source of water in the study area. ##### Common health problems and health service availability to feedlot fattening operations The present study indicates that the major diseases constraining feedlot operation, in order of importance, include lumpy skin disease, foot and mouth disease, bovine respiratory disease and digestive system disorders (Table 9). Lumpy skin disease was more frequent in small scale feedlots than in the others; comparatively, metabolic disorders were more of a problem in medium scale farms. Table 9: Major diseases observed in commercial feedlots in the study area. Feedlot operators in the study area use both private and government animal health services. The majority of the respondents prefer private veterinary practitioners to the government veterinary services rendered in the study areas (Table 10). Table 10: Health services available to commercial feedlots in the study area. ##### Major constraints of beef cattle fattening The major constraints of commercial cattle fattening in the study area were ranked and are presented in Table 11. Feed scarcity, marketing problems, water scarcity, animal health problems and lack of appropriate stock for fattening were identified as the major problems hindering the performance of the fattening operations as a whole. According to the respondents, lack of feed, marketing problems, water scarcity and animal health problems were the primary constraints to running fattening operations at all scales of production. Table 11: Major constraints of beef cattle fattening ranked in the order of their importance. ##### Performance of fattening cattle under commercial feedlot management The average initial weight of beef cattle purchased for fattening was almost similar across the different farm sizes, with animals purchased by small scale farms weighing slightly (P<0.001) less than those of the medium scale (Table 12).
Animals aged 5-6 years weighed significantly more than those aged 3-4 years. The final weight was lower for small scale fattening farms than for the medium scale. Total and average daily weight gain were significantly (P<0.001) higher for large and medium than for small scale commercial fattening farms. In terms of animal age, the 5-6 year range recorded a higher final weight but lower total and daily weight gain than the 3-4 year range. The average condition scores recorded during the late fattening period were 7.42±0.14, 7.13±0.14 and 6.92±0.14 for large, medium and small scale commercial cattle fattening, respectively (Table 12). The body condition score results show that animals in large scale farms laid down more fat than those in the medium and small scale fattening farms. Table 12: Least squares means of live weight gain of fattening animals under commercial feedlots. ### Discussions ##### Season and length of fattening period It is known that the main goal of all commercial cattle fattening enterprises is to maximize profit; thus, all feedlot operators prefer a suitable time to start their fattening activities. The majority of the feedlot operators considered January to March and October to December to be the most suitable seasons to start a fattening operation (Table 3). They also reported that the preference for certain months of the year is related to peaks in market demand. Moreover, fattening operators target finishing their animals for sale in March for the religious holiday of the Easter festival in Ethiopia and for the export market during Muslim holidays, particularly Eid Al-Adha and Eid Al-Fitr (Hajj). Those finishing their animals in December do so for the Ethiopian Christmas holiday.
According to information obtained from respondents, cattle fattening is a time-bound activity based on the availability of resources such as feed and labor, and on market demand both locally and in the export market niche. The frequency of cattle fattening per year and the number of animals fattened per cycle were significantly (P < 0.001) higher for large-scale, followed by medium- and small-scale, commercial cattle fattening. This indicates that large-scale feedlot operators supply the largest share of finished beef cattle exported to the Middle East market, whereas the length of the fattening period was significantly (P < 0.05) lower for medium-scale, followed by small-scale, production. The current study is in line with the finding reported by Hutcheson (2008) that Boran bulls fed for about four months finish neither over- nor under-fattened, with enough fat cover to improve the flavor of the meat and possibly adequate marbling. Similarly, Tsegay and Mengistu [10] noted that most feedlot operators fed their animals for three to four months.

##### Major feed resources and feeding system of commercial feedlots

All commercial feedlot operators (Table 5) depend on purchased feed resources for fattening because of a shortage of land for feed production. The current study is in accordance with the reports of Adugna [4] and Tsegay and Mengistu [10]. The types of roughage and concentrate feeds utilized depend on accessibility, their importance for fattening (nutrient content and palatability), and affordability to the farms given their cost and availability. The use of agro-industrial by-products as the major concentrate source and teff straw as the principal roughage by commercial feedlot operators of all scales reflects the availability of these feed resources in the area and their cost advantage.
All the commercial feedlots feed animals in a confined feeding system. The practice of offering roughage first and concentrate next, and of splitting the concentrate into morning and afternoon meals, is a normal practice that may reduce the risk of digestive disorders. The same amount of roughage offered by farms of all scales implies that fattening operators uniformly utilize the limited amount of roughage feed, which is in agreement with earlier reports [10]. The type of feed offered depends on ingredient availability and price, and on the stage of fattening. This finding is similar to the criteria noted by Alex (2015) for selecting feed ingredients: easy availability in the locality, preference by the animals, and low cost and ease of transport.

##### Housing system

Cattle should be protected from extreme hot or cold conditions; hence, proper housing and equipment are important for a successful fattening operation. The housing systems of commercial feedlots in the study area are presented in Table 7. An open overhead shelter with an enclosure was the predominant type of housing in the large- (100%), medium- (75%), and small-scale (72.2%) commercial cattle fattening operations studied, while an open system without an overhead shelter was used by a few medium- and small-scale fattening operators. The environment where the commercial fattening operations of the present study are located is hot, particularly during some seasons of the year; hence, it is important to provide shade so that the animals are protected from excessive heat. According to Koknaroglu [21], providing an overhead shelter in open lots improved the ADG of beef cattle in warm and hot environments via increased dry matter intake and feed conversion efficiency (Table 7: Housing systems of fattening animals in the study area).
##### Common health problems and health service availability to feedlot fattening operation

Animal diseases are still a major constraint to livestock productivity, thereby affecting economic growth and efforts to attain food security (Yimenu [22]). The occurrence of various diseases across the fattening farms could be one of the problems that jeopardize the business if sufficient attention is not given. According to the survey results, lumpy skin disease is the major concern for most feedlot operators. This disagrees with the finding of Gezahign and Samson (2014), in which foot-and-mouth disease was the disease of most concern for 83.9% of operators and lumpy skin disease for 16.1%. A smaller share of disease arose from nutritional deficiency, for example blindness, possibly because all feedlots fed only dry feed and no green feed. Efficient and reliable animal health services constitute an essential prerequisite for cattle fattening. Leonard and Cornelius (1984) stated that animal health care services can be evaluated based on accessibility and service delivery efficiency. Feedlot operators in the study area used both private and government animal health services. The majority of respondents ranked private veterinary practitioners first, ahead of the government veterinary services rendered in the study areas (Table 10). The preference of feedlot farms for private veterinary practitioners could be due to their accessibility and better service, although the government veterinary clinics are lower priced. According to [22], most feedlot operators (71%) used private veterinarians, which is in line with the present study.

##### Major constraints of beef cattle fattening

The major constraints of commercial cattle fattening in the study area were ranked and are presented in Table 11.
According to the respondents, feed scarcity, marketing problems, water scarcity, animal health problems, and lack of appropriate stock for fattening were identified as the major problems hindering the performance of fattening operations at all scales of production. Respondents reported that livestock productivity and fattening activity were affected by shortages of feed, market problems, animal health problems, water shortage, lack of animal supply, limited management skill, and housing systems. The present finding is in line with Bezehegn [23], who noted the poor nutritive value of available feeds, their limited availability, the high price of supplementary feed associated with poor management practices, and the seasonality of market prices for fattened cattle as major constraints to the cattle fattening business in smallholder operations in the Chiro area of Western Hararghe; similar constraints were also listed by Tsegay and Mengistu [10].

##### Performance of fattening cattle under commercial feedlot management

The average initial weight of beef cattle purchased for fattening was similar across farm sizes, with animals purchased by small-scale farms weighing significantly (P < 0.001) less than those purchased by medium-scale farms (Table 12). Animals aged 5-6 years weighed significantly more than those aged 3-4 years. The average initial weight of beef cattle at entry to the fattening schedule in the different commercial feedlots was 254.3 ± 4.55 kg, with a range of 226 ± 4.55 to 330 ± 4.55 kg. The higher initial body weight recorded for the medium-scale farms is an advantage for producers in achieving the desired market weight of beef cattle. The initial live weight recorded in the three commercial feedlot scales in the present study was higher than that reported by Mohammed et al. [24] for Baggara bulls used in feedlot experiments, which lay in the range of 165 to 185 kg, and also higher than that reported by Girma et al.
[25] for Borana bulls, which recorded a 182 kg average initial body weight, but lower than that reported by Bezehegn [23] for Hararghe highland cattle, which lay between 415.41 ± 4.8 and 333.9 ± 7.4 kg for traditional fattening and small-scale fattening operations, respectively. The final weight was also lower for small-scale fattening farms than for medium-scale farms. The total and average daily weight gains were significantly (P < 0.001) higher for large- and medium-scale than for small-scale commercial fattening farms. In terms of animal age, the 5-6 year age range recorded a higher final weight but lower total and daily gains than the 3-4 year age range. The average daily weight gain of about 1 kg recorded for all sizes of commercial feedlot is a good rate of gain by the standard of experimental reports for Zebu cattle. For example, Eltahir et al. [26] reported a daily weight gain of up to 1.13 kg for feedlot-finished indigenous Western Baggara bulls in Sudan. This finding was slightly higher than the ADG recorded for Borana cattle (0.889 kg per day) under station conditions in Tanzania [27]. The higher average weight gain for large- and medium-scale compared with small-scale farms can be attributed to diets containing sufficient nutrients to meet body maintenance requirements and support gain. The average body condition scores recorded during the late fattening period were 7.42 ± 0.14, 7.13 ± 0.14, and 6.92 ± 0.14 for large-, medium-, and small-scale commercial cattle fattening, respectively (Table 13). The body condition score results show that animals on large-scale farms laid down more fat than those on medium- and small-scale fattening farms.
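The gain statistics discussed in this section are simple functions of entry weight, exit weight, and days on feed. A minimal sketch with hypothetical numbers (not the study's raw data; the function name and the 120-day cycle are illustrative assumptions) shows the computation:

```python
def fattening_performance(initial_kg, final_kg, days_on_feed):
    """Return total weight gain (kg) and average daily gain, ADG (kg/day),
    for one fattening cycle."""
    total_gain = final_kg - initial_kg
    adg = total_gain / days_on_feed
    return total_gain, adg

# Hypothetical bull: enters at 254 kg, finishes at 374 kg after a
# 120-day cycle (roughly the four-month fattening period noted above).
total_gain, adg = fattening_performance(254.0, 374.0, 120)
# total_gain = 120.0 kg, adg = 1.0 kg/day
```

An ADG near 1 kg/day on such inputs matches the order of magnitude reported for the commercial feedlots in this study.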
The mean body condition score obtained in the present study is comparable to the scores of 7.67 ± 0.12 and 7.69 ± 0.11 for fattened Hararghe highland oxen measured on farm during the main rainy season under experienced and less experienced farmer groups, respectively [28]. The mean condition scores in this study were lower than the values of 7.82 ± 0.49 and 7.462 ± 0.62 for Hararghe highland oxen fattened under traditional and small-scale cattle fattening systems, respectively [23].

### Conclusion and Recommendation

East Shoa zone is an area where commercial feedlot (cattle fattening) operations are widely practiced and well suited. The present study indicates that commercial feedlot operators in the study area prefer to fatten Borana bulls purchased directly from the open markets of Borena zone in the seasons of October to December and January to March, considering these months the most suitable time to start a fattening operation. Management practices regarding the utilization of existing feed, feeding systems, housing systems, and animal health care vary across large-, medium-, and small-scale commercial fattening. Limited feed availability and the high price of supplementary feed, market fluctuation for fattened cattle, water shortage, and disease outbreaks were the main challenges facing cattle fattening in the study area. The average daily weight gain of about 1 kg recorded for all sizes of commercial feedlot is a good rate of gain, and average daily gain also varies with age: younger fattened cattle gain better than mature finished cattle. Based on the findings of the study, the following recommendations are made for an environmentally friendly commercial cattle fattening business in the study area. Currently, the major breed involved in fattening operations in the East Shoa zone districts is the Borana breed.
However, exploiting this breed without any intervention in breeding strategies endangers it. Therefore, the government should design a proper livestock policy to optimize the population of animals used for beef and replacement stock in its area of origin. Moreover, the use of breeds with good feedlot performance, such as the Borana, should be encouraged. Animal health is also a major constraint on beef cattle production in the study area; therefore, the government should design a community-based livestock health policy in the districts to maximize producers' profit and to export healthy, highly demanded beef cattle to the international market, because producing safe products is a forefront strategy for protecting consumer health and maintaining export market trust. Investment in meat processing and beef cattle production sectors should be encouraged.

### Acknowledgement

The Oromia Research Institute and the SIDA Haramaya University Female Students Scholarship Program are duly acknowledged for sponsoring the study. I would like to thank all staff of the Adama Animal and Plant Quarantine Station for their assistance and facilitation during my study. We would like to acknowledge Ragasa Begna, Sisay Beleta and Ademu Zeleka for their valuable advice in preparing this paper.

### References

1. CSA (2015) Statistical report on area and production of major crops, Part VI: Ethiopian Agricultural Sample Enumeration, 2014/2015 (2008 E.C.). Central Statistical Agency, Addis Ababa, Ethiopia.
2. IGAD (2011) The contribution of livestock to the economies of IGAD member states. Study findings, application of the methodology in Ethiopia and recommendations for further work. IGAD LPI Working Paper No 02-11. Link: https://goo.gl/jPm17B
3. Yitaye A (2008) Characterization and analysis of the urban and peri-urban dairy production systems in the N-W Ethiopian highlands. Link: https://goo.gl/ZvyQsq
4. Adugna T (2008) Feed resources and feeding management: A manual for feedlot operators and development workers. Ethiopian Sanitary and Phyto-sanitary Standards and Livestock and Meat Marketing Program (SPS-LMM). Link: https://goo.gl/AWfWGJ
5. Kerstin K, Zander A, Ellengowan D (2011) Attitudes of livestock keepers to breeding strategies – threats and opportunities for on-farm conservation of the Borana cattle breed. Journal of Agricultural Science 3. Link: https://goo.gl/t6QVLy
6. Habtamu A, Mulugeta K, Chala M, Arse G, Ulfina G, et al. (2008) Proceedings of the 16th Annual Conference of the Ethiopian Society of Animal Production (ESAP), Addis Ababa, Ethiopia. Link: https://goo.gl/Luc6g9
7. Habtamu Y (2012) Risk analysis in cattle fattening in North West Ethiopia: Empirical evidence from two limit tobit model. International Journal of Development and Sustainability 1: 240-254. Link: https://goo.gl/ZHmr6U
8. Sarma PK, Ahmed JU (2011) An economic study of small scale cattle fattening enterprise of Rajbari district. Journal of the Bangladesh Agricultural University 9: 141-146. Link: https://goo.gl/itaC8C
9. Sintayehu GebreMariam, Samuel Amare, Derek Baker, Ayele Solomon (2010) Diagnostic study of live cattle and beef production and marketing: Constraints and opportunities for enhancing the system. Link: https://goo.gl/J4FbTm
10. Tsegay T, Mengistu U (2013) Assessment of commercial feedlot finishing practices at eastern Shoa, Ethiopia. Open Journal of Animal Sciences 4: 273-280. Link: https://goo.gl/UHf6EM
11. ACR (2010) Ethiopian export performance review. Addis Ababa, Ethiopia. Link: https://goo.gl/zVEwZt
12. Yoseph M, Azage T, Alemu Y, Umunna NN (2002) Evaluation of non-conventional agro-industrial by-products as supplementary feeds for ruminants: In vitro and metabolism study with sheep. Small Ruminant Research 44: 25-35. Link: https://goo.gl/XYKWp7
13. Erol A, Cavit Y, Engin S (2014) Measuring the performance of cattle fattening enterprises with data envelopment analysis: Comparative analysis of enterprises in the Northeast Anatolia region (TRA). Journal of Animal Science 20: 719-725. Link: https://goo.gl/rtTYFt
14. Musa L, Peter K, Ahmed M (2006) On farm characterization of Butana and Kenana cattle breed production system in Sudan. Livestock Research for Rural Development. Link: https://goo.gl/EU9TCD
15. Getachew L, Hailemariam T, Dawit A, Asfaw N (2008) Live animal and meat export value chains for selected areas in Ethiopia: Constraints and opportunities for enhancing meat exports. Link: https://goo.gl/irtxYX
16. Mohammed N, Tesfaye L, Takele F, Hailu D, Tatek W, et al. (2008) Comparison of body weight gain performance and carcass characteristics of the two Ethiopian cattle breeds under natural pasture grazing management. Livestock Research for Rural Development 20. Link: https://goo.gl/NzycyF
17. Malede B, Yilkal M (2014) Survey study on feeding strategies, challenges and marketing of beef cattle production in Ethiopia. Middle-East Journal of Scientific Research 21: 669-674. Link: https://goo.gl/4WWtQV
18. Field RA (1971) Effects of castration on meat quality and quantity. Journal of Animal Science 32: 849-858. Link: https://goo.gl/76Lii6
19. Webster AJF (1986) Factors affecting the body composition of growing and adult animals. In: Symposium on comparative aspects of body composition of farm and laboratory animals. The Queen's University of Belfast. pp. 45-53. Link: https://goo.gl/nyqiTF
20. Romera AJ, Mezzadra CA, Villarreal E, Brizuela MA, Corva PM (1986) Productivity of grazing Angus steers of different structural size. Animal Science Journal 67: 455-460. Link: https://goo.gl/NMhmeR
21. Koknaroglu H, Loy DD, Wilson DE, Hoffman MP, Lawrence JD (2005) Factors affecting beef cattle performance and profitability. The Professional Animal Scientist 21: 286-296. Link: https://goo.gl/uHL6qB
22. Yimenu H (2014) Biosecurity hones in central Ethiopian cattle feedlots: Its suggestion for live dairy cattle export. International Journal of Biosecurity and Bioterrorism 1: 041-046. Link: https://goo.gl/D61qXE
23. Bezehegn A (2014) Small scale beef cattle fattening practices, on farm performance evaluation and opportunities for market orientation in Western Hararghe zone, Chiro district. Haramaya University.
24. Mohammed H, Ahmed B, Salih A, Babiker M, Mohammed S, et al. (2014) Feedlot performance of Sudan Baggara zebu bulls finished on urea-treated sugar-cane bagasse. ARPN Journal of Science and Technology 4. Link: https://goo.gl/2N6TC4
25. Girma D, Mieso G, Tesfaye A, Arse G, Frehowit M, et al. (2015) Effect of different feed options on growth response and carcass characteristics of two-year-old Borana bulls for export market weight gain at Adami Tulu Agricultural Research Center. Basic Research Journal of Agricultural Science 4: 139-145. Link: https://goo.gl/HsgqpS
26. Eltahir IE, Babiker SA, El Khidir OA (2000) Feedlot performance and carcass characteristics of Western Baggara and Friesian crossbred bulls. 1. Slaughtered at 300 kg live weight. Journal of Animal Production 13: 1-10.
27. Mwilawa AJ (2012) Effect of different diets on weight gain, carcass and meat quality characteristics of two indigenous cattle breeds of Tanzania. PhD thesis, Sokoine University of Agriculture, Department of Animal Science.
28. Fekedu A (1999) Assessment of the feed resource base and the performance of draught oxen in the traditional fattening practices of smallholder farmers in the eastern Hararghe highlands.
29. Belechew H (2009) Donor supported livestock marketing initiatives: The experiences and achievements of the SPS-LMM program in Ethiopia. Paper presented at the Regional Symposium on Livestock Marketing in the Horn of Africa: Working Towards Best Practices, 21-23 October 2009, Kenya Commercial Bank Training Centre, Karen, Kenya.
30. David H (2006) Ethiopia Sanitary & Phytosanitary Standards and Livestock & Meat Marketing Program (SPS-LMM): Feeding to produce export quality Ethiopian beef, requirements and recommendations.
31. Perdana T (2003) Competitive and comparative advantage of beef cattle fattening in Bandung Regency. Link: https://goo.gl/sHJhhv
##### Venue: LH-1, Mathematics Department

I will describe our work that establishes (an analogue of) von Neumann's conjecture on the condition number, the ratio of the largest and smallest singular values, for sparse random matrices. Non-asymptotic bounds on the extreme singular values of large matrices have numerous uses in geometric functional analysis, compressed sensing, and numerical linear algebra. The condition number often serves as a measure of stability for matrix algorithms. Based on simulations, von Neumann and his collaborators conjectured that the condition number of a random square matrix of dimension $n$ is $O(n)$. During the last decade, this conjecture was proved for dense random matrices. Sparse matrices are abundant in statistics, neural networks, financial modeling, electrical engineering, and wireless communications. Results for sparse random matrices were unknown and require completely new ideas due to the presence of a large number of zeros. We consider a sparse random matrix with entries of the form $\xi_{i,j} \delta_{i,j}, \, i,j=1,\ldots,n$, such that the $\xi_{i,j}$ are i.i.d. with zero mean and unit variance and the $\delta_{i,j}$ are i.i.d. Ber$(p_n)$, where $p_n \downarrow 0$ as $n \to \infty$. For $p_n < \frac{\log n}{n}$, this matrix becomes non-invertible, and hence its condition number equals infinity, with probability tending to one. In this talk, I will describe our work showing that the condition number of such sparse matrices (under certain assumptions on the moments of $\{\xi_{i,j}\}$) is $O(n^{1+o(1)})$ for all $p_n > \frac{\log n}{n}$, with probability tending to one, thereby establishing the optimal analogue of von Neumann's conjecture on the condition number for sparse random matrices. This talk is based on a sequence of joint works with Mark Rudelson.
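The quantity in question is easy to probe empirically. The following minimal Python sketch (not from the talk; the sample size $n = 200$, the Gaussian choice for $\xi$, and the factor 2 in $p = 2\log n / n$ are illustrative assumptions) samples the matrix $\xi_{i,j}\delta_{i,j}$ and computes its condition number:

```python
import numpy as np

def sparse_condition_number(n, p, rng):
    """Condition number s_max/s_min of an n x n matrix xi * delta,
    with xi i.i.d. standard normal and delta i.i.d. Bernoulli(p)."""
    xi = rng.standard_normal((n, n))
    delta = rng.random((n, n)) < p          # Bernoulli(p) sparsity mask
    a = xi * delta
    s = np.linalg.svd(a, compute_uv=False)  # singular values, descending
    return s[0] / s[-1] if s[-1] > 0 else np.inf

rng = np.random.default_rng(0)
n = 200
p = 2 * np.log(n) / n  # safely above the log(n)/n invertibility threshold
kappa = sparse_condition_number(n, p, rng)
```

Repeating this over a range of $n$ and averaging would illustrate the near-linear growth $O(n^{1+o(1)})$ that the result predicts; taking $p$ below $\log n / n$ instead makes all-zero rows likely, and the returned value jumps to infinity.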
# UPDATED: McGarity meets with Grantham, SEC reviews confrontation at Vanderbilt

Georgia defensive coordinator Todd Grantham (R) confronts Vanderbilt head coach James Franklin after Franklin allegedly yelled at one of the Bulldogs' players at the end of Georgia's 33-28 win in Nashville on Saturday.

UPDATED WITH COMMENTS FROM VANDERBILT COACH JAMES FRANKLIN (1:33 P.M.) AND GREG McGARITY AT TOUCHDOWN CLUB OF ATLANTA MEETING (1:45 P.M.). UPDATED WITH COMMENTS FROM TODD GRANTHAM (11:25 A.M.).

ATHENS — Georgia Athletic Director Greg McGarity has already met with defensive coordinator Todd Grantham regarding the postgame confrontation with Vanderbilt head coach James Franklin on Saturday, and the Southeastern Conference is in the process of reviewing the incident.

McGarity, who was in Nashville on Saturday, said he discussed the situation with Georgia head coach Mark Richt by telephone on Sunday, then met with Grantham face-to-face at Butts-Mehre Heritage Hall early Monday morning. Meanwhile, both Georgia and Vanderbilt have been asked by SEC Commissioner Mike Slive to submit written reports to the league office for review.

“I spoke with the Commissioner earlier today and they are in a sort of fact-finding process now,” McGarity said. “We’ll submit a report either verbal or in writing. I’m sure Vandy will do the same and then, basically, we’ll see what happens at that point in time.”

As for his meeting with Grantham, McGarity said, “I’d just say those discussions will remain private, but I’ve addressed it with both Coach Richt and Coach Grantham. There were lessons to be learned here. Obviously the conduct that was displayed was not representative of how we want to conduct ourselves after a hard-fought game.”

Grantham and Franklin were caught on video in a heated, finger-pointing exchange immediately after Georgia’s 33-28 win over Vanderbilt had concluded. Moments before that, Franklin is observed yelling at Georgia safety Shawn Williams.
The Bulldogs were flagged three times for personal fouls in the game and, in one instance, it appears Georgia noseguard Kwame Geathers tried to punch a Vanderbilt player. But McGarity said UGA’s reports to the SEC will not address anything that took place during the game.

“The conference does that anyway,” he said. “That’s part of their normal, standard operating procedures where they’ll go through it. Each team submits plays to review and they review things without either team submitting them. That’s just basic standard operating procedure there, things that happen during the game. I’m sure they’ll review the clips and I’m sure they’ll review the personal fouls. They do that all the time. That’s part of their weekly tasks.”

SEC Associate Commissioner Charles Bloom confirmed the postgame confrontation is being investigated. “The conference office is gathering information from all parties about the incident,” Bloom said via email. “Too early to comment on it.” Bloom indicated the SEC might have a response by Tuesday.

Grantham, in his second season as the Bulldogs’ defensive coordinator, was reprimanded by UGA last year after directing a choke sign toward Florida kicker Chas Henry before he tried a game-winning field-goal attempt in overtime. Grantham told the AJC Monday morning he regretted his actions after Saturday’s game.

“First of all, I love my players and appreciate their hard work and investment in our program,” Grantham said. “I feel a responsibility and loyalty to protect and stand up for them. However, I feel it’s important to educate them in all areas of life. While my intentions were genuine, I feel it was unfortunate that things escalated to a confrontation. However, I’ll use it as a learning experience for myself as well as my players so that we all become better men.”

Meanwhile, Franklin was asked at his weekly news conference on Monday if he had followed up on the incident.
He said he and Richt talked via telephone about the situation on Sunday and indicated there had been some correspondence with the SEC office.

“We have a very, very talented commissioner and they’ll do their job and do what they think is right and we’re going to move forward,” Franklin said as video of his 1 p.m. news conference was streamed live via the school’s website. “I have tremendous respect for the University of Georgia, their history, their program, their coach. We had a hard fought battle with a lot of passion, with a lot of emotion. . . . Some things got a little bit out of hand and I know myself and Coach Richt — I don’t want to speak for him — would have liked for the game to have ended a little differently and been able to just walk off the field.”

Asked specifically if he had any personal regrets about the way he behaved at the end of the game, Franklin said: “I’m not a guy who really has a whole lot of regrets. I’m pretty calculated and pretty well thought out for the most part with the things I do. I am an emotional guy, as you guys know.”

Speaking to the Touchdown Club of Atlanta today, Greg McGarity had more to say about the matter:

“I think the whole story will come out at some point in time. Right now both institutions are basically filing a report to the Southeastern Conference office on really what happened from Todd’s standpoint and, I’m sure, from Coach Franklin’s standpoint. That will go to the commissioner, and then we’ll see where things go after that. But I don’t think there’s any question that . . . at the end of the day that’s not the way you want to end athletics competition. It’s kind of like Mark said after the game: ‘At the end of the game, you want to go over there and shake the opponent’s hand and move on to the next game.’

“I won’t get into specifics, but I think if you ask Todd if he had to do it over again, I think he definitely would do some things differently.
I think that’s not the way you want to represent your institution on either side. At the end of the day, I think you’ll see that. But I know Todd is in the process of making a statement. We want to get ahead of this and deal with it up front to where there’s nothing out there that we need to be worrying about the rest of the week.

“It was kind of like we saw at the end of the NFL [49ers-Lions game] yesterday. I think we had a situation there with a couple of coaches. Things get heated. There probably are a lot of things people would like to take back, but at the end of the day that’s not what needs to happen in college athletics. We need to talk about the student-athletes and what they did that day — and not reduce it to where things off the field are the topic of discussion during that day. So it’s something we’re really not proud of, and we’ll deal with it and learn from it and also teach our student athletes on the positives and negatives of situations like that.”

AJC reporter Tim Tucker contributed to this report.

|– Chip Towers and Tim Tucker

Dawg Girl, October 17th, 2011 11:20 am:
Get over it. Move on, and get ready for Florida. Go Dawgs!

VolsRule, October 17th, 2011 11:20 am:
Good to see some passion by both teams after a hard fought game – however, need to keep it professional. Good for Vandy – this coach has his team playing the entire game. Good for U of Georgia – they survived a game that appeared to be heading to a blow out. Bad for Tennessee – we have a lot of work to do – or Vandy will beat us. Best of luck the rest of the season. Go Vols!

doodoobailey, October 17th, 2011 11:20 am:
The guy yells at our player and we get in trouble? F that, and first

doodoobailey, October 17th, 2011 11:23 am:
To everyone “ashamed” and “embarrassed” about Grantham’s behavior? You should be embarrassed and ashamed that you have helped keep this idle and soft attitude at UGA with savior Richt. He yelled at our players, CTG did a good job of restraint to not knock him out.
That, actually, is a good role model. Don’t write a check your a$$ can’t cash, Vandy.

Gary, October 17th, 2011 11:23 am:
Never a shortage of drama with Grantham. I appreciate the level of intensity he brings to coaching the defense. Reminds me of Erk.

Real NC Dawg, October 17th, 2011 11:24 am:
Love the fire of our defensive coordinator. Wish the entire team had that same fire before the game. It was evident as the team came out of the tunnel before the game they were not ready to play.

Boom Boom, October 17th, 2011 11:27 am:
Take care of the ball #11 and spend some serious time on our special teams. Two missed field goals, fake punt and kick off return. Kick the dang ball out of the endzone!

German Shepherd Dawg, October 17th, 2011 11:28 am:
Kudos to VolsRule. Classiest post by the “opposition” that I’ve read in recent memory. Though I can’t stand UT during game week, when it’s over all is forgiven. Even when we lose. At the end of the day, it is just a game – not life or death. Good luck against SC and the rest of the season. Vandy may have found themselves a QB in Jordan Rodgers. He looked a lot like Aaron Saturday night. Woof woof!

doodoobailey, October 17th, 2011 11:29 am:
To all of you who agree with supporting everything CTG did Saturday night, don’t bother going over to the Junkyard Blawg. He is beeyatching at everyone comparing Erk’s fire and CTG and says they are completely different. What a jackwagon homer who still talks about a Bama game in ‘66. AJC, FIRE BILL and get someone this side of Social Security.

WinderDawg, October 17th, 2011 11:29 am:
It is part of football…show a little passion. GATA coach…

Robinwood, October 17th, 2011 11:29 am:
I just hope that the team gets fired up about all this and gets back to form on December vs the lizards. Maybe we’ll find some answers on offense, too.

17 of 20, October 17th, 2011 11:29 am:
Georgia has some great coaches and I would like to see them stay and coach together for at least 5 more years.
Honest Abe
October 17th, 2011 11:30 am
Grantham should be congratulated for sticking up for one of his players. An opposing coach should not go after a player. If he has a problem, he should address it with his counterpart coach, not the kid. Vandy coach James Franklin was dead wrong. Looks like Coach G. is the only one on CMR’s staff with any tenacity. Our “D” is starting to get back some swagger, which has been the missing element for a few years. Our offense is so mild and so vanilla. Give Grantham a raise and a commendation, and hope that some of his fire rubs off on CMR and Bobo!

Andy in WR
October 17th, 2011 11:32 am
I agree with several of the above posts – I love what Grantham has brought to our program. Didn’t love the choke sign, but appreciate the fire and passion and desire to perform with excellence. Just wish we could get Bobo on board.

DTC
October 17th, 2011 11:32 am
There were bad overreactions to numerous plays on the part of both teams. On the play where Sean Williams was flagged, you have to observe the entire incident to get a complete perspective. He did overreact, in that the other UGA player had it under control. Williams should not have gotten involved, but he did come to the defense of his teammate. In this incident, the Vandy player hit the UGA player, helmet to helmet, well after the play had ended. What I thought was kind of humorous about this incident is the Vandy player kind of bounced off of the UGA player. The UGA player did not seem to be affected, other than knowing he was hit helmet to helmet, so he pushed the Vandy player backwards, and I think to the ground – do not hold me to that fact. My apologies, but I do not remember the names or numbers of these two players. It appears that Sean Williams then came up and slugged the Vandy player in the helmet, thus the flag. The announcer, Couch, did not give a proper assessment of the incident; he blamed it all on UGA. Rather one-sided in his opinion of this incident.

4th and 57
October 17th, 2011 11:32 am
Georgia fans need to lower their expectations. They have been irrelevant in the SEC for quite a while.

Me
October 17th, 2011 11:32 am
I would rather have Grantham at the helm than Richt.

Robby
October 17th, 2011 11:32 am
Don’t apologize. Get to work making them ready for Florida. I don’t want to lose again. Our defense is what is going to beat UF.

dwayne
October 17th, 2011 11:33 am
Ain’t it funny how things work? UGA was worried about offense and whether Grantham’s D would be better. The kicking game was supposed to be the only thing going for them…

Buffus
October 17th, 2011 11:34 am
Grantham is an embarrassment. We have the finest university in the SEC and should conduct ourselves better.

cb
October 17th, 2011 11:34 am
Mentioning Grantham and Erk in the same breath is a disgrace. Grantham is a hothead with no humility. He can’t grasp a concept like “TEAM me”. Please do not ever compare him to the great Erk Russell. That is just foolish.

Fair and Balanced
October 17th, 2011 11:34 am
Can we be done with this already?

aj
October 17th, 2011 11:35 am
Great to see fire out of Coach Grantham; need to see it more often from the rest of the staff. Hate that he is being forced to apologize. I wonder if he and Richt really get along behind closed doors?????

Nick Fairly
October 17th, 2011 11:35 am
Grantham needs a trademark move like wrestlers have. He should do a throat-slash and then grab his nuts or something.

Andy in WR
October 17th, 2011 11:35 am
And as a coach, I always felt it was my DUTY to defend my players. I always tried to look at my players like they were my own children. Way to go, Coach Grantham – GATB – no matter whose it is!

rational
October 17th, 2011 11:36 am
I’m glad McGarity and Richt have handled this like professionals. The Vandy coach and Grantham haven’t reached that level of maturity yet, apparently.

Who Cares???
October 17th, 2011 11:36 am
If that was a basketball team fight, this blog would be off the chain with hatred.

DawginLex
October 17th, 2011 11:36 am
I thought Grantham did right. That was Murray’s best game IMO. He saved our butts. Get rested and get ready for Florida.

jd
October 17th, 2011 11:37 am
Another example of college failing to provide students with skills needed to find and keep a job. The graduation rate of college football players is horrible! And job placement even worse. Ban football – teach ‘em a trade like plumbing!

Dave
October 17th, 2011 11:38 am
I really, really like Grantham’s fire. I wish we had more of it. Imagine Grantham as head coach! I will always support Richt, but Grantham is a much needed personality on this coaching staff. Reminds me of Van Gorder! GATA or STFU!

Chris
October 17th, 2011 11:38 am
No need for Grantham to apologize. He’s changed the image of UGA from being soft to actually playing with a fire, something that was definitely lacking. Props to him for sticking up for his guys. Need more coaches out there who display their passion instead of trying to keep their emotions in check…

dawggirl
October 17th, 2011 11:38 am
Well, I appreciate that others appreciate the “fire” that CTG brings to the program. However, they will have to accept any disciplinary measures that the SEC may choose to implement. I had really thought that all of the personal foul penalties were behind us, but apparently not. I sincerely hope it does not cost us in an upcoming game.

Seriously
October 17th, 2011 11:39 am
Franklin is the problem here. He lied at the press conference afterward, saying he was looking for CMR. Watch the tape: he was after Shawn Williams from the get-go. Watch the tape – CTG is walking over and sees Franklin pointing and apparently yelling at Williams. The tape shows what really happened; unfortunately the league will accept Franklin’s BS version and we’ll get walked all over again. We’ve become the candy a** doormat for SEC and NCAA enforcement agents because we just roll over and take it. Don’t see that happening at Bama, FL, LSU, or Auburn. They all throw it back in their face. Thanks to CTG for getting in the middle of it, standing up for his players – this is football, not a political debate. Guess everyone would like to have CTG run to the headmaster CMR to turn in that bad guy Franklin, who was saying bad things to our players, in the appropriate politically correct fashion while Franklin continues to attack our kids! Too many talking heads now involved – this will get so convoluted no one will ever know the truth. CTG will tell the truth, but Franklin and his cohorts will deny.

RedandBlackDawg
October 17th, 2011 11:39 am
It is a shame when they even have to take the time to investigate incidents like this. You would think they would use their time more wisely and investigate some of the real issues going on in the league. The DC was reacting to the Vandy coach calling out some UGA players after the game. UGA officials say they aren’t going to address what happened in the game that led to that confrontation after the game. Makes perfect sense to me. At least we have a DC that stands behind his players and doesn’t let a rookie coach tell off his players and confront them. It seems to me that the Vandy coach escalated the situation by acting less than professional. He could and should have addressed anything he thought was so terrible to the SEC office, not on the field. I guess he forgot that next year they have to come into a real stadium and play the DAWGS. GO DAWGS and GATA

TroupDawg
October 17th, 2011 11:40 am
Great job CTG!! McGarity! Lay off CTG and find out why our special teams suck!! I applaud CTG and wish we had more like him.

Observer
October 17th, 2011 11:40 am
Erk Russell was a gentleman with an intense passion for football. Todd Grantham is a buffoon. The best part of the entire incident was watching Grantham run away after Monahan from Vanderbilt stepped between him and Franklin.

Dap01
October 17th, 2011 11:40 am
I wish the rest of our coaching staff had as much passion for the game. I wish Bobo was not the UGA offensive coordinator.

1eyedJack
October 17th, 2011 11:40 am
They won’t be exchanging Christmas cards.

ormewood
October 17th, 2011 11:41 am
Grantham needs to calm down. He’s supposed to be setting an example for his players. I’m all for passion, but his is misdirected and inappropriate.

Joe
October 17th, 2011 11:41 am
Geez… finally a coach that stands up for the players and he gets shot down… As much crap as UF does, and he gets in trouble for a choke sign… geez… Perhaps McGarity can address Vandy’s chop blocks in his response to the SEC… perhaps Vandy’s coach can

DTC
October 17th, 2011 11:42 am
I agree with Todd Grantham in this incident; he was coming to the defense of one of his players. I would prefer to have this kind of a coach coaching my kid. I was the head coach of my son’s baseball team about 20 years ago, and had to put up with some “we are better than you” from another team that had soundly beaten the stuffings out of our team. I found the other coach, who coincidentally was a friend of mine, and convinced him that he had better bring his kids back in and apologize to my team for their actions, or I would file a formal, public complaint on him, his son, who started the incident, and his entire team to the county board. He knew me well enough to know that I meant what I said. I did not lose my cool – Grantham was borderline – but it was man to man, not man to student. I would like to make a comment in reference to the anti-Christian comments about Coach Richt. For full disclosure, I am Christian. I have been more than disappointed with CMR’s coaching the previous two years. However, I am a bit fed up with all of the anti-Christian rhetoric about him. His Christian faith should not come into the perspective of his coaching abilities. I do believe that he has done a much better job as the head coach this year than previous years, and does seem to have regained his focus as the head coach for this team. Let’s just keep the anti-Christian sentiment out of this. Also, VolsRule is indeed the exception among posters on the entire AJC website, from all schools.

chief1
October 17th, 2011 11:42 am
Any coach or school of higher learning involved in sports that does not stand up for their players does not need to be coaching. CMR needs to have bigger balls and let those that play for him know that he is there for them on the playing field as well as off the playing field. Thanks, Coach Grantham, for standing up for your boys. GO DAWGS!!!!!!!!!!!!!!!!!!!!

GATA
October 17th, 2011 11:43 am
Beat Florida! If not this year, WHEN?

gofortwo
October 17th, 2011 11:44 am
The Vandy coach looks like the type who causes problems and then points the finger at someone else. Of course, his players were throwing cheap shots throughout the game, which was alright with him. It might have gotten really wild if we had lost and the Vandy coach had confronted Grantham about Williams. And his story about not being able to find Coach Richt sounds phony.

Sam
October 17th, 2011 11:44 am
Georgia has a wonderful coaching staff, from CMR, CMB, CTG… all of them right down to the new strength coach are the best thing that ever happened to our program, and I hope they stay at UGA FOREVER!!! Sincerely – Fan of any other SEC school

7576DAWG
October 17th, 2011 11:44 am
Grantham doesn’t deserve any punishment in this case. Franklin started this, and Grantham is not a wimp or what you call a beta – someone that is not a leader but one who sits back and takes anything thrown at them because they can’t deal with confrontation. Grantham is an alpha, a leader, and is not going to take any mess from anyone, especially if it has to do with one of his players. Maybe Grantham could have handled it better, but his defense had just gone from great in the first half to terrible in the second half and he didn’t really know what happened. Grantham takes a lot of pride in the defense and was already on edge. Franklin is a head coach and instigated the whole thing. I will be very disappointed if anything goes against Grantham. If we don’t start showing a little support to the best defensive coach that we have seen since Van Gorder was run off by Richt, the same thing is going to happen to Grantham. Richt needs to run off Bobo and hire a great offensive coordinator and a special teams coach. McGarity needs to address the things that are broken, not the things that aren’t, and the one really great bright spot this year has been our DEFENSE, and Grantham is the reason for that.

BG
October 17th, 2011 11:45 am
Franklin started it! Franklin is lucky Grantham didn’t beat the crap out of him.

outkasted
October 17th, 2011 11:45 am
To the Tech fans: think about how your coach throws his own players under the bus after losing… Bye week at the right time… get guys healthy and ready for the game in Jacksonville…

TampaDawg
October 17th, 2011 11:45 am
4th and 57 October 17th, 2011 11:32 am: “Georgia fans need to lower their expectations. They have been irrelevant in the SEC for quite a while.”
———————–
I know, since all the way back to 2008. That #2 finish in the polls seemed pretty irrelevant waaaaay back in 2007 too.

papadawg
October 17th, 2011 11:45 am
Looks like Vandy found them a QB everyone else will have to deal with this season.

DBall
October 17th, 2011 11:45 am
It’s UGA, what else do you expect?? Enough said….

BG
October 17th, 2011 11:45 am
Koop, lay down the crack pipe!!

BG
October 17th, 2011 11:46 am
DBall, take a hike, idiot.
Joe
October 17th, 2011 11:46 am
Agree with 7576DAWG… so true… We are a bunch of wimps… Vandy started it with their chop blocks… and we wonder why this team has no fire or desire… I am over the crap…

what I think
October 17th, 2011 11:47 am
It’s as simple as this:
Vandy had less than 50 yards rushing in the past two games; UGA gave up 4x that much.
Vandy scored only 3 points in the past two games; UGA gave up 9x that much.
If I was Grantham, I’d be embarrassed as hell too. You’re a grown man and you won – move along, keep your ass covered.

I care
October 17th, 2011 11:47 am
Typical of UGA, no class. Starting at the top with the coaches and going down to the players. Just what one would expect from this school lately. Let’s just see what the SEC will say… the TV and talk shows are all disgusted with what Grantham did.

All I remember is
October 17th, 2011 11:47 am
that “choking” episode in Jacksonville last year. Grantham has a real problem.

DAWGMAN
October 17th, 2011 11:47 am
It took me a while to figure out that wrasslin ain’t real. My question is: Is Georgia football real?

UIGA Excuse Maker
October 17th, 2011 11:48 am
UGA fans are so unrealistic and uninformed. Making excuses for TG shows that a level of common sense does not exist with some UGA fans. How can a coach tell his players to stay under control when he can’t? Name another assistant coach in the SEC who has been reprimanded for behavior twice in a year.

icallbs
October 17th, 2011 11:48 am
I bet the UGA defensive players will run through a wall for their coach. They know he is unequivocally on their side. It’s Richt’s job to maintain order; I love having a defensive coordinator who will go nuts. I’m glad things didn’t escalate, primarily because we can’t afford to lose players to suspensions with FL coming up, but this incident will prove to have been constructive.
UGA Insider
October 17th, 2011 11:49 am
I have watched football for many years and I have NEVER seen a worse special teams unit than the 2011 UGA Dawgs. I have stated on this blog several times that it would cost us another game this year, and if it weren’t for Drew Butler I would stand correct. However, I still think it will, unfortunately, because CMR is so slow to correct deficiencies and because we are so lousy on special teams. You can tell we don’t even practice it.

derek
October 17th, 2011 11:49 am
I’m glad to see that kind of emotion. That is what we need here at Georgia. It has been way too long with no emotion. I say if we get rid of Richt, then Grantham would be a great head coach.

TampaDawg
October 17th, 2011 11:49 am
ormewood October 17th, 2011 11:41 am: “Grantham needs to calm down. He’s supposed to be setting an example for his players. I’m all for passion, but his is misdirected and inappropriate.”
—————————————
He is supposed to lead by example, yes. But first, you need to prove to your players, as a leader, that NO ONE goes after them except for THEIR OWN coach. CTG did exactly what he should have done. I guarantee the other players noticed it and will play their butts off for him forever.

Calhoundawg
October 17th, 2011 11:50 am
What I don’t understand is Franklin’s remark that he was “looking for Richt, but couldn’t find him.” What happened to the customary handshake at the end of the game? I guess then that when he “found one of Richt’s assistants”, he found the wrong one. Two wrongs don’t make a right, but in viewing the pictures of the incident, it is clear that Franklin’s posturing was telling Grantham to “come get some!” It is also clear to me from Franklin’s news conference that he is a sour-grapes whiny prick who was far more concerned with Georgia players’ behavior than his own. “We are not going to take anything from anyone!” Whatever. He should have gotten himself off the field instead of walking around addressing Georgia’s players. He started it – Grantham finished it. By the way, check out the warm, fuzzy lovefest between the two head coaches after the Lions/49ers game yesterday. God bless Coach Richt. God bless Georgia. Go Dawgs!!!!

Will
October 17th, 2011 11:50 am
These things are not really that difficult to deal with IF immediate supervisors really want to deal with them. When employees misbehave, supervisors assign consequences. The purpose of the consequences are two-fold – to punish the misbehavior and to deter future acts of misbehavior. Graham misbehaved on the sidelines during the Florida game last year – punishment was assigned (reprimand) and for most employees, the punishment (and the public humilation that followed) would have been enough to deter similar acts of misbehavior in the future. So….. Graham misbehaves again. The initial consequences obviously did not deter him from misbehaving in a similar manner, so why would you assign a similar punishment. Does he needed to be terminated from employment? Probably not, but the consequences must be somewhere between what has already been give (reprimand) and termination IF UGA really is interested in detering similar acts of misbehavior in the future. From what I have learned about Graham, he is a hot-headed person whose demeanor and temperment is better suited for the NFL, where he last coached. He was very upset last year to receive a public reprimand and threatened to resign. I doubt UGA is willing to risk his departure, thus I would not expect UGA to be that concerned with detering his misbehavior with a more substantive punishment. In other words, Coach Richt has stated that this has been “dealt with”. Unless the SEC says differently, I would imagine that will be the extent of Graham’s punishment, regardless of whether “dealing with” this misbehavior deters similar incidents in the future.
BG
October 17th, 2011 11:51 am
Franklin was yelling at Shawn Williams, and Grantham saw this and approached Franklin and put Franklin in his place. End of story!

Parade of Knuckleheads
October 17th, 2011 11:51 am
Willy Martinez did have composure and class.

BG
October 17th, 2011 11:52 am
Will, the name is Grantham! Get the facts right, partner.

TampaDawg
October 17th, 2011 11:52 am
I care October 17th, 2011 11:47 am: “Typical of UGA, no class. Starting at the top with the coaches and going down to the players.”
————————————
Yeah, and it’s much better for the head coach of another team to go after the players from the opposing team. Definitely some idiots on here today. Going after CTG while giving the Vandy coach a free pass. You guys are just trolls looking for something to complain about. Nice work, hypocrites.

BG
October 17th, 2011 11:52 am
Tech lost this weekend!

icallbs
October 17th, 2011 11:53 am
@Calhoundawg: spot on!

VADAWG
October 17th, 2011 11:53 am
Todd Grantham is on the hot seat because he stood up for one of his players who was being put upon by an opposing coach! I would not want a coach who would let his players be abused and not do a thing about it. I guess it is just a sign of the times when we have to act like pansies in this over-regulated nanny state that is now our country.

rally
October 17th, 2011 11:54 am
Glad that DAWG fans do not run the world. Grantham is a total jerk and a poor example to your players. Mark Richt is classy and you wackos are never happy with him. I guess most UGA fans have as little class as your players and Goon Grantham.

Ihope
October 17th, 2011 11:54 am
Grantham points some of that fire at the UF head man. Will would hang him out to dry! BOOM puppydawgs!!!!!!!!!!

1eyedJack
October 17th, 2011 11:55 am
Was this worse than Coach Fishfry yelling, “Are you that f*^&(#g stupid?” at one of his own linemen?
georgiagirl
October 17th, 2011 11:58 am
Now you know that they are going to try and put this all on Georgia, and if so the fans need to stand up and be heard. Do not sit back and let it go. GO DAWGS!!!!!!!

SiddyBoy
October 17th, 2011 11:59 am
Did Coach Franklin have a gun or knife? Why the “protecting my players” bit? The guy may have been yelling at Sean Williams, but CTG’s response was too over the top. I’m still embarrassed!!!

1eyedJack
October 17th, 2011 12:01 pm
Grantham did not go looking for Franklin. The opposite cannot be said to be true.

alex
October 17th, 2011 12:02 pm
@icallbs, the only walls these guys will be running “through” are prison walls. Unacceptable behavior by a coach on a university team that has had multiple episodes of poor behavior in the past. He needs to be fired ASAP or there is no accountability on the team.

Boca Baby
October 17th, 2011 12:03 pm
For all the Dawg haters: Get ready. It is going to happen. Carolina will lose at least one more game. Georgia will beat Florida (who isn’t “Florida” this year). Georgia will beat Auburn. Georgia will beat Kentucky. Then you can start all the talk about how the winner of the SEC West is going to destroy Georgia. Well, stranger things have happened in the SEC Championship game. Like 2005?

georgiagirl
October 17th, 2011 12:03 pm
I CARE, who cares?

Diehard
October 17th, 2011 12:04 pm
A player said Franklin was cussing #36 for Georgia, and Grantham took over from there. I don’t think he gave the lame “go after me, I’m a man, I’m 40” speech, but he got his point across.

This just in
October 17th, 2011 12:05 pm
Hey doodoobailey, is that what you have for brains – uga doodoo?

Roaddawg80
October 17th, 2011 12:05 pm
The only problem I have with Grantham is that he apologized (he was probably made to). GRANTHAM FOR PRESIDENT ‘11

crusher
October 17th, 2011 12:05 pm
There comes a time when you have to stand up for what is right. Thank you, Coach Grantham, for doing that. If the SEC is going to review the coaches, then they need to review the officials. Then you might discover the true problem. Instead of helping to handle the situation, they ran like scalded dogs. Get with the program…

87DAWG
October 17th, 2011 12:05 pm
I love my Dawgs, but wrong is wrong no matter how you look at it. S. Williams decked a Vandy player, and the Vandy coach was letting him know it was a cheap shot. Grantham could have just said he would handle Sean, but he went nuts AGAIN and was a total embarrASSment to our university. Time to let this nut job walk.

SanDiegoJacket
October 17th, 2011 12:06 pm
What the Ugag fans/alums won’t admit is that the unruly behavior, personal fouls, and coach’s behavior are endemic to the types of people Ugag recruits. If you run a low-rent program, you get low-rent behavior!! TO HELL WITH GEORGIA, EVERY DAY IN EVERY WAY!!!!!!!!!!!!!!!

dawgfacedboy
October 17th, 2011 12:06 pm
According to Butler on the morning show yesterday, it had to do with the chop blocking by Vandy. Apparently UGA coaches said something to the officials, who blew them off and told them they would “call the game as they saw it”. I imagine that Grantham was not too happy about this, and if Franklin said something to a UGA defender it was probably the straw that broke the camel’s back. Don’t see a problem with what Grantham did at all. He’s an intense guy who stuck up for his players when he saw some BS happening. I promise you he gained a ton of points with his defensive players!

Beast from the East
October 17th, 2011 12:07 pm
This just in… Mike Slive has ordered Grantham to spend 3 weeks in the off-season with Muschamp learning proper sideline etiquette!

ClarkGriswold
October 17th, 2011 12:08 pm
Not sure whose fault it was… Regardless, I stand behind Vanderbilt’s coach 100%. He’s teaching his kids that they are not a doormat to anyone. Meanwhile, Grantham looks like the typical classless redneck UGA fan.
BuzzGT
October 17th, 2011 12:09 pm
Looks like he is just defending his players. The Vandy guys come off as the sore losers in this one.

SEC Observer
October 17th, 2011 12:09 pm
This is how low Georgia football has become: the Monday morning conversation is about a skirmish with Vandy. That pretty much says it all.

rlinaug
October 17th, 2011 12:09 pm
They’re not always ready to play, but always ready to dance. I find myself hoping for that one signature loss that will finally lead the powers that be to say, “Enough.” And CMR can go coach Colorado.

Football Wizard
October 17th, 2011 12:09 pm
Hey SanDiegoJacket, at least we don’t lose to UVA. LOL!

Diehard
October 17th, 2011 12:12 pm
Teams that can’t match up physically cut block. Boise was very good at it, and the other team left on the schedule who does it is Tech.

Roaddawg80
October 17th, 2011 12:13 pm
I wish the rest of our coaches had the fire in them that Grantham does!

sogadog
October 17th, 2011 12:13 pm
The post-game incident is being blown out of proportion, and it is wrong to place all the blame and scrutiny on Grantham.
1. Franklin was way out of line when he pointed at one of Georgia’s players and yelled at him after the game. He was the instigator of the incident despite the fact he holds himself out as the victim.
2. You can’t see Franklin’s face during the incident given the camera angle, and only Grantham and the players close by know what Franklin was saying. Given Grantham’s body language and Franklin’s pointing, it is clear Franklin was the initial aggressor.
3. Grantham should be applauded, not punished, for standing up for his players when the opposing coach is pointing and yelling at them after the game.
4. Grantham has performed better than any of the assistant coaches at UGA this season, and he is very much responsible for Georgia being in the hunt for the SEC East championship.
5. UGA is very lucky to have Grantham, who will likely save CMR’s job. If UGA punishes Grantham when he was not in the wrong, it may lose him when the NFL or some other college comes calling, and that would be a huge loss for the program.
6. The UGA program has not shown the fire and passion Grantham brings since Brian Van Gorder departed. Grantham’s defense has bailed Bobo’s offense out all season. Case in point: at the end of the game, the offense could not muster a first down, and the defense rose to the occasion and shut down Vandy’s offense after the blocked punt. UGA’s offense continues to play very inconsistently, and special teams are awful. Without Grantham’s defense, the Dawgs could easily have lost to Miss State, Ole Miss and Tenn.
Bottom line: everyone needs to get off Grantham’s case, and let’s get ready to play the Gators.

NOT SO WELL DONE
October 17th, 2011 12:14 pm
What is so cool about Grantham going postal? Strike two.

Ginger
October 17th, 2011 12:14 pm
I do like Grantham’s fire and I hate he got in trouble – I like the guy and the passion he brings to UGA… We needed some passion… But he has to learn that you have to be smarter. Fire is great, but you must control fire; fire out of control gets you in trouble. You must let your fire burn under control in the right way… How well I know – speaking from my own experiences and fiery attitude. The players will want to play hard for him… which is great… But hope that he learns to be smarter. All he had to do was talk without the pointing and yelling. Just talk to Franklin and say, “Hey, I stand up for my players and stand behind my players,” and walk off… and let Franklin get in trouble for jawing with him and Williams. Let Franklin be the one to get in trouble. Then go back and tell Williams to play hard on the field, but don’t say anything to other teams’ coaches. No-win situation; you have to take care of these things on the field in the right manner.

October 17th, 2011 12:14 pm
[…] oh, Grantham got called into the principal’s office. Georgia Athletic Director Greg McGarity has already met with defensive coordinator Todd Grantham […]

Typical
October 17th, 2011 12:15 pm
Georgia redneck

Ginger
October 17th, 2011 12:16 pm
Amen… Get off this and get ready to play the Gators… and turn up the heat on some Gators.

Dawg_Mike
October 17th, 2011 12:17 pm
Dear Mr. McGarity, please tell CTG attaboy and please hire a special teams coach. That is all – thanks.

Remember
October 17th, 2011 12:17 pm
This is the same guy giving the choke sign to the Florida kicker before he booted Georgia into another loss. Go Gators!

Toccoa Dawg
October 17th, 2011 12:17 pm
Give me a coach like Grantham who will take up for his players over a PC coach like Richt any day. Stand your ground, Coach.

cookieman
October 17th, 2011 12:18 pm
Will is a idiot and on top of that can’t spell. Way to defend your players, Coach Grantham.

G-Dawg
October 17th, 2011 12:18 pm
Buffus… you’re a doofus!! Passion and fire is what our fan base lacks… our head coach lacks… but not our DC, and apparently it’s rubbing off on our defense. Cause they’ve been the only stellar part of the team this year… including this past incoming freshman class. Offense still sputters… which is more about gameplan and coordinator. And special teams… well, sure ain’t that special anymore, except with Butler!!

Abnerish
October 17th, 2011 12:18 pm
Why is Franklin getting a pass here? Based on the video and reports on the incident, it appears that Franklin is the one who started it by having a verbal argument with Shawn Williams. Other than to say “good game”, what does Franklin have to say to a player on the other team?!?! I have to guess here because no one is divulging what was said, but it seems like Grantham was simply sticking up for his player, who was getting an earful from the opposing coach – which is not his place. If Franklin had an issue with Williams, he should have addressed that with Coach Richt, not the player himself. I will say this… we all know who the coach is at Vandy now. Can’t say I knew his name before the game started. I guess he got what he wanted out of it…

georgiadawg70
October 17th, 2011 12:19 pm
If I was a player, Grantham is the kind of coach I would want looking out for me. The wimps at Butts-Mehre may be mad with Grantham, but his players are ready to fall on a hand grenade for him. That’s exactly what’s been missing.

G-Dawg
October 17th, 2011 12:19 pm
Dawg_Mike… said it right!! LOL

Pup Fan
October 17th, 2011 12:19 pm
Beast… Good one!

bart
October 17th, 2011 12:19 pm
Grantham should be fired immediately, and Richt needs to be told he will be gone at the end of the year. Our team and coaches have no discipline on or off the field, and that falls on Richt. I don’t care if Franklin did “start it” – Grantham ought to be man enough to walk away. He is an embarrassment to the team and the university. Our program is at the lowest point it’s been, even counting the Goff years. There are coaches out there who know how to coach their teams to be physical but still have class, win or lose.

October 17th, 2011 12:20 pm
[…] Mark Richt knows as much. Athletic director Greg McGarity, who has met with Grantham according to our Chip Towers, knows as much. The SEC is reviewing the […]

GTBob
October 17th, 2011 12:20 pm
Why am I not shocked that UGA fans applaud Grantham here while the national media and everyone else thinks he looked like a maniac on national TV? You guys really don’t have any concept of normal behavior, do you? You must be the same people who laugh when drunk frat boys cuss out children and old women of the opposing team after games.

01HAWK
October 17th, 2011 12:21 pm
GRANTHAM and MUSCHAMP have anger issues.

Dawg_Mike
October 17th, 2011 12:22 pm
@sogadog – yeah, what you said. Well put.

JRabDawg
October 17th, 2011 12:22 pm
CTG for Head Coach in 2012. He has the passion the team needs on offense.
collegeballfan (October 17th, 2011 12:23 pm):
For several years Georgia has had problems with personal fouls on the field. How can you correct players for on the field personal fouls if the coaches are doing the same after the game is over? Grantham needs to go if his contract allows it. If not, he needs to be put on a “one more time and you are gone” probation. And I still maintain that a man who cannot control his emotions is a poor excuse for a man.

McDawg (October 17th, 2011 12:23 pm):
BFD go Dawgs

Diehard (October 17th, 2011 12:23 pm):
01hawk- Muschamp has a losing issue.

Dave (October 17th, 2011 12:23 pm):
I just watched the video and I’m glad Grantham did that. You could see where Grantham looked in the direction the Vandy coach was pointing and yelling and then Grantham stepped in. All you sunday school teachers can just kick rocks. I want Grantham on my team and LEADING my team! GATA!!

01HAWK (October 17th, 2011 12:25 pm):
Diehard………When did UGA get to be a winning program all of a sudden.

coachchad (October 17th, 2011 12:25 pm):
Coach Grantham was in the right. A head coach should never go directly to the opposing player. The center for Vandy was chop-blocking Gathers all night, and when you try to take a knee and really hurt another player, thats uncalled for. I don’t blame Gathers for attempting a punch. #60 tried to end his career. Coach Franklin better watch himself, Grantham seems like a guy that would wind up on your door step in the middle of the night..

murfdawg (October 17th, 2011 12:26 pm):
What was CTG supposed to do? Tell Franklin, ”I’m sorry you lost, I’m sorry Shawn Williams played like a crackhead, I’ll put him in time out and thank you for being so concerned about our players.” Coaches should not talk to opposing assistant coaches. However, if a rookie coach makes this mistake, he gets what he deserves and will not make the same mistake again. Breathe in, breathe out, move on.
Spike 80DF (October 17th, 2011 12:26 pm):
good job CTG for standing up for our guys, wish he was HC.

YankeeDawg (October 17th, 2011 12:26 pm):
McGarity get over it. This is typical politically correct crap. A guy got after one of our guys and he defended him. McG must want our boys to be berated by other teams coaches and our guys just take it. Spineless AD.

ca dawg (October 17th, 2011 12:26 pm):
don’t lose your cool against florida. they chirp a lot more than vandy and they’ll be baiting williams. they probably won’t dive at kwame’s knees as much as vandy did, though. vandy got away with a lot, but our kids have to be more focused and let that high school stuff go. watch bama’s defense–they never get involved in that kind of nonsense. they let you chirp and just turn around and crush you on the next play. that should be a huge point of emphasis these next couple of weeks. the “energy bus” slogan should be replaced with a new one: “ignore the nonsense and do your effing job.”

VININGSDAWG (October 17th, 2011 12:26 pm):
Grantham should be suspended if not fired. He is an embarrassment to our University. Also Geathers should be suspended for at least a half. We have led the league in personal fouls the last two years and this year is no different.

GTBob (October 17th, 2011 12:27 pm):
“CTG for Head Coach in 2012.” Between this and the choking incident I think CTG has pretty much ruined any chance he ever had of being a head coach anywhere.

Beast from the East (October 17th, 2011 12:27 pm):
“Grantham seems like a guy that would wind up on your door step in the middle of the night.” Only if you leave doughnuts outside.

Diehard (October 17th, 2011 12:27 pm):
Saban goes crazy regularly, Spurrier went nutty on a reporter last week, it’s from the pressure of being in the toughest league, the SEC.

VADAWG (October 17th, 2011 12:28 pm):
HEY grantham will never be great I do not have any problem with shaking hands after any game, but I do have a problem taking a lot of crap from a no class dirtball that is going after one of our players. Evidently you do, so I suggest you change your online name to PANSY!

ca dawg (October 17th, 2011 12:28 pm):
@ the rest of the world: you know there is a world beyond north avenue, right?

gdawginkalamazoo (October 17th, 2011 12:28 pm):
sogadawg, spot on in that 12:13 post. And why wasn’t Geathers thrown out for clearly throwing a punch? Because he got clipped or chop blocked that’s why. And the refs saw it as a severe enough cheap block that #60 deserved to get slugged. Kudos to Geathers, he knows enough about football that that block could have ended his career. Kudos to Grantham. I don’t agree with getting involved with the fans but standing up to that punk a coach that Vandy has was the right thing to do. “I was looking for Coach Richt..” how about hold onto his hand during post game handshake and bend his ear then.. Punk move on the Vandy coach and he didn’t get away with it.

Dogs Smell (October 17th, 2011 12:29 pm):
Buffus, Finest? Really?

01HAWK (October 17th, 2011 12:30 pm):
MUSCHAMP looks like he is headed to a mental ward along with GRANTHAM.

ca dawg (October 17th, 2011 12:30 pm):
@beast from the east: good one, lol.

BurytheBone (October 17th, 2011 12:30 pm):
Just got back from the Bahama’s. Damn, I’m gone for one week and all kind of drama happened. Didn’t get to watch the game. Texas vs Oklahoma State was the only game on the ship. Kinda glad I didn’t. Not sure I could take our Dcoach showing some passion. Hell I might of thrown myself overboard.

D BAG TROLLS!!!!! (October 17th, 2011 12:31 pm):
@college ball fan: You are big sloppy stinky dbag! @viningsdawgs: you win D BAG of the DAY!

Diehard (October 17th, 2011 12:31 pm):
01 hawk- 13th all time in winning percentage. Is that all of a sudden enough for ya?
Calhoundawg (October 17th, 2011 12:31 pm):
@ grantham will never be great — Dude, you left out Woody Hayes, who tackled an opposing team’s player during the game. Just sayin’ … @ lhope — Grantham could take out Mouthchamp with one hand tied behind his back. I hope we bomb Florida back to the stone age. Washington, Ogletree, Mitchell, all back, healthy and rested. Given all the “drama” that is Georgia Football this year, the Florida game has never been bigger, and unless I miss my guess, will be just as intense as the Commode-doors game. By the way, whatever happened to that Easley guy who allegedly attacked an unarmed former ‘Bama player? Probably nothing. Oh, well… God Bless CTG. God Bless Georgia. Go Dawgs!!!

Abnerish (October 17th, 2011 12:32 pm):
So Muschamp is “passionate” for his on field antics, but Grantham is a psychopath?? Whatever…

Mr. Krinkle (October 17th, 2011 12:32 pm):
beast from the east: Your ship is sinking loser!

01HAWK (October 17th, 2011 12:32 pm):
Assistant Coaches should not be making news. Let the Head Coach be the one to talk to FRANKLIN.

Sugar Hill Dawg (October 17th, 2011 12:33 pm):
I’m UGA Class of 1975. I watched Erk Russell head butt his D-Line before every game. Not only did it fire up his players, my fraternity stopped mixing long enough to bark proudly! CTG may not have the same “Southern manners” we’re used to in Athens, but I’ll take him seven days a week and twice on Sunday! Go Dawgs! Get after the lounge lizards in jorts!

Dave (October 17th, 2011 12:33 pm):
Viningsdawg, you are a spineless moron that should be just concentrating on the cheerleaders.

Beast from the East (October 17th, 2011 12:33 pm):
“MUSCHAMP looks like he is headed to a mental ward along with GRANTHAM.” 01HAWK, you may be right. He even called a timeout after the no-call on the first muffed punt so he could chew the ref’s a$$ out for 5 more minutes! Imagine wasting a timeout in order to throw a tantrum? He’s got to learn to dial it down or he won’t last long in this league. Of course, I used to say the same thing about Spurrier, and he’s stood the test of time.

azdawg (October 17th, 2011 12:33 pm):
Wonder if the SEC, after its review, can authorize player suspensions (Geathers), etc, for incidents, fouls during the game? GA will need all its starters and backups for the FL game. I hope that is not the case.

Renee (October 17th, 2011 12:34 pm):
Grantham is an embarrassment. I thought someone might rein him in after last year’s “choke” signal during the Florida game, but that doesn’t appear to have happened. What a role model…

surething (October 17th, 2011 12:34 pm):
Grantham really does a super job? Are you kidding me? Have you looked at your D penalties this year and the thug way your D is playing. I will agree, it comes from their Coach. CTG should be somewhere else and soon, NOT at UGA! This guy is a problem, not a fix dawg fans. If you can’t see it please stop drinking the red&black mixture.

dawgfacedboy (October 17th, 2011 12:35 pm):
According to Butler on the morning show yesterday it had to do with the chop blocking by Vandy. Apparently UGA coaches said something to the officials who blew them off and told them they would “call the game as they saw it”. I imagine that Grantham was not too happy about this and if Franklin said something to a UGA defender it was probably the straw that broke the camel’s back. Don’t see a problem with what Grantham did at all. He’s an intense guy who stuck up for his players when he saw some BS happening. I promise you he gained a ton of points with his defensive players!

01HAWK (October 17th, 2011 12:35 pm):
Abnerish……MUSCHAMP is not going to do well at FLORIDA. He is not winning and he goes crazy every game. SABAN is winning and has been winning for a long time. SABAN can do what others can not do. JUST WIN BABY

Beast from the East (October 17th, 2011 12:36 pm):
“beast from the east: Your ship is sinking loser!” Mrs. Krinkle, Yep….and it will probably fall to the bottom and land on top of your ship that sunk several years ago.

Drew (October 17th, 2011 12:36 pm):
A coach standing up for his players? What a concept!! We might not have handled the skirmish properly, but at least our players and coaches have each others backs. Right or wrong, it’s called being a TEAM. Go Dawgs!

ca dawg (October 17th, 2011 12:36 pm):
grantham had to do something to stand up for himself and his kid–it’s clear that franklin was the instigator and sought both of them out. but he overreacted. got to have more discipline and focus than that. you have to exemplify what you want to teach your kids, and grantham missed an opportunity to SHOW them how to react when someone else is being an idiot. and let’s be clear: james franklin was acting like an idiot. you absolutely don’t do what he did, and his whole “we’re not going to take it anymore” act in the press conference was pathetic. no one but vandy fans and UGA haters are buying it.

Gator Mike (October 17th, 2011 12:36 pm):
Good to see some intensity by coaches. Grantham reminds me of when Woody Hayes was terrorizing everyone who played Ohio St. Perhaps we can have an encore after the game with Grantham taking on Will Muschamp. That would be a good one. Of course I am just kidding.

BigTimeTechFan (October 17th, 2011 12:36 pm):
Grantham was funny going off on Vandy coach, he looks a lot like Chaz Bono. Wonder??? HaHa

tech (October 17th, 2011 12:37 pm):
This Vandy coach lies! He clearly was going at a UGA player. This guy is clearly smack talking a player and tells everyone in a presser that he was trying to find CMR. What a Joke!

KOOL (October 17th, 2011 12:37 pm):
Go Grantham!!!! You have nothing to apologize for, bro!!

Red Stick (October 17th, 2011 12:37 pm):
I can see where Dawg fans are happy to see Grantham stand up for his players but the point is both coaches embarrassed their universities which is far more important. Grantham and Franklin both should be reprimanded.
01HAWK (October 17th, 2011 12:39 pm):
BAMA is #6 program in winning percentage……Go back and come up with something else. I will be waiting on you.

gdawginkalamazoo (October 17th, 2011 12:39 pm):
How do we know that the Vandy coach is a liar? Look at the 2:20 mark on that video. CMR walks right in front of him. He could have talked to him then. Or is he too stupid to know what Coach Richt looks like. Yep, go back and look at it. He’s a liar. “I was looking for Coach Richt…” He is either stupid, wrong, chicken or a combination of all three. RIGHT NEXT TO HIM.

Big Wally Butts (October 17th, 2011 12:40 pm):
I think that the next time that we play Vandy at Vandy, we should just continue to play “lights out” on defense.

Innocent Bystander (October 17th, 2011 12:40 pm):
Speaking of butt smear… Todd Grantham needs to grow up.

(October 17th, 2011 12:41 pm):
[...] spoke with the Commissioner earlier today and they are in a sort of [in a] fact-finding process now,” McGarity said. “We’ll submit a report either verbal or in writing. I’m sure Vandy will do [...]

01HAWK (October 17th, 2011 12:41 pm):
BAMA played VANDY 2 weeks ago……NO FIGHTS……I guess it was because we kicked their ARSES……34-0

Mr. Krinkle (October 17th, 2011 12:41 pm):
beast: speaking of donuts have you seen your OC? You make every single UF fan look like a complete idiot!

dawgfacedboy (October 17th, 2011 12:41 pm):
Sure thing- they have had very few penalties this year and since he took over the D they have improved dramatically each year. Going into Sat. they were the 6th rated defense in all of college football. Is this not what you would call doing a super job?

schmeckdawg (October 17th, 2011 12:41 pm):
McGarity, Richt & Slive…all candy asses!!! Way to defend your players Coach G.!!! Hunker Down & GATA!!!

JB (October 17th, 2011 12:43 pm):
Hum….A Richt coached team usually apologizes for winning and losing. I’m saying Grantham gone after this year if Richt stays, ….his choice.

JohnD (October 17th, 2011 12:43 pm):
Grantham is such a hypocrite. Apparently he’s got no issue mouthing off to opposing players – but he wants a 2nd grade rumble if another coach calls out his squad’s cheap shot play?? After the Aaron Murray crybaby show last year about Auburn’s antics – UGA fans ought to be real proud of how TG is coaching them up there in Athens. What an idiot.

DawgfromsouthLEX (October 17th, 2011 12:44 pm):
I think Grantham did right. I wish he had slapped one of the cheerleaders while he was at it. Murray’s best game, now get ready for Florida guys, maybe punch a teacher or something. At Georgia we want tough hardnosed players. Let the other teams be the sissys. Go dogs, go dogs, go dogs.

Mel (October 17th, 2011 12:45 pm):
Fire and passion are good but bringing a level of professionalism to the game is necessary. These coaches are role models for these kids. Ever wonder why UGA can’t keep its players out of jail? Maybe if there were stricter rules and better role models, these things wouldn’t happen. A disgrace.

JB (October 17th, 2011 12:45 pm):
Dawgfaceboy……I like it also, but those defensive stats were collected on teams that are 1-13 during this 4 game win streak. Vandy, by gaining 349 yards and 21 second half points, kinda exposed our D a little I’m afraid.

gdawginkalamazoo (October 17th, 2011 12:46 pm):
“Yep….and it will probably fall to the bottom and land on top of your ship that sunk several years ago” Beast, now that was a good one. LMAO.

Army Vet (October 17th, 2011 12:46 pm):
That Vandy player should have pimp slapped Grantham across his big mouth. He’s a clown like the rest of the UGA players. We barely beat Vandy…pathetic. I hope UGA wins the east so they will have to play a ranked team…..that will be ugly!

Mr. Krinkle (October 17th, 2011 12:46 pm):
hawk: NOBODY CARES MORON! CTG ROCKS!

Big Wally Butts (October 17th, 2011 12:47 pm):
Boy, this blog has some real dumbshorts on it. Think that I will go do something productive.

Tom G (October 17th, 2011 12:48 pm):
Pansy & Renee – Ladies(?), this is football played by young aggressive men not powderpuff or flag football. Tempers flare and boys will be boys sometimes. Not a sport for the passive gentle types like yourselves! Coach stood up for his players and men understand that! Now go back to the kitchen or your knitting! Bye!!

Diehard (October 17th, 2011 12:48 pm):
01hawk- you know why God sends tornadoes to places like Alabama, He’s telling y’all to move OUT of those trailers and GET A HOUSE. Until then RTR – roll trailer roll.

oldfart (October 17th, 2011 12:48 pm):
1. It isn’t the AJC or Tech fans that are sustaining the debate over the Franklin/Grantham shouting match, it is the SEC. I doubt if this is just going to go straight under the rug like many on here wish. This was a dangerous situation and could have been much worse. There is a GA high school coach in the hospital today with facial reconstruction to attest to that fact.
2. Before you jump on the Grantham for head coach bandwagon shouldn’t you consider the mediocre performance of the defense in his tenure? I’m not dissing the kids but the fact that they are undisciplined and often perform individual celebrations of their mediocrity speaks volumes on their coaching.
3. Does he have the temperament to be a head coach or the credentials to maintain an offense or special teams as well? Being a hot head does not qualify you for head coach. Granted it can help in a Vince Lombardi kind of way but it can kill a program as well. Woody Hayes was quite successful until his famous right hook.
4. No matter what the trailer trash contingent thinks the actions of both coaches have no place in the college game. This is not the NFL or rasslin’. Franklin’s actions are Vandy’s problem. Unfortunately Grantham’s actions are UGA’s. Sinking to this level does not reflect well on the schools, period.
5. Bulletin board stuff doesn’t work in the pros, it is just part of the show. 18-22 year olds are a different matter entirely. You can bet the Gators probably will skip the bulletin board and put up a poster sized picture of the choke sign.

fantasy weekend (October 17th, 2011 12:49 pm):
UGA wins! UF and Yech are losers!!!

Bama Mike (October 17th, 2011 12:49 pm):
Georgia AD appears to be a no nonsense kind of guy. I am sure he voiced his continued support but probably conducted a little bit of administrative coaching as well. It’s over, let's move on.

01HAWK (October 17th, 2011 12:49 pm):
WOW……Such a hard time winning against the ALL TIME DOOR MAT of the SEC IN VANDY. By the 3rd quarter, TRENT RICHARDSON was on the sidelines enjoying the 2nd, 3rd, and 4th stringers battle it out. EAST IS LEAST

dogfan (October 17th, 2011 12:49 pm):
Hey Chip, Nice job writing a one sided story and showing only CTG yelling. Are you aware UGA is in the same state you are? Maybe you could support a state school! How about writing and showing both sides of the story next time.

Roy Wood (October 17th, 2011 12:50 pm):
Most of the negative comments must be from Tech, Florida or Auburn supporters. I compare Todd to Van Gorder. I hope they do not run Grantham off, I would hate to go back to the Willie Martinez defense. I love coaches with a little fire in their blood. A Texas Dawg

jim928 (October 17th, 2011 12:50 pm):
What kills me is we have a coach that defends our kids and our university and now it seems like he got called to the principal's office. Regarding our personal fouls they were retaliation and the second man was flagged. If u watch the game u will see a poor Vandy player threw a punch at one of our players. What the SEC needs to look at is why the officiating crew let this all happen throughout the game and maybe they should be called to the principal's office. GO DAWGS you never disappoint and thank god I’m a bulldog

ugafans (October 17th, 2011 12:50 pm):
better chill when they get into Jacksonville. Trust me on this, with the bad economy you will not be able to get away with things you have in the past. The local PD is licking their chops on seeing how many they can lock up this yr. I know people that have been in special meetings just for this. Heads up and especially if you wear red. And fighting fines have been made 3X the normal fine.

flatsdawg1 (October 17th, 2011 12:52 pm):
What actually happened and was actually said is extremely interesting, as fans we should be able to get the real story……anyone know……??? Probably never know…… Anyway I read comments by Mike Bobo how we tend to sit on a lead and hope the defense can hold them. I mean, if you have noticed Richt has taken some chances on fourth down against Tennessee and then before the half against Vandy with the touchdown, and I think it's great, so hopefully that killer instinct is developing, because we will need it against Florida and Auburn.

Bama Mike (October 17th, 2011 12:54 pm):
Diehard, I have family that lost more than just their homes from that disaster. SEC family across all universities stepped up to the plate in providing support. Shame on you sir/madam. Your comment embarrasses the passionate Georgia supporters who just want to see their team compete well.

johnny (October 17th, 2011 12:54 pm):
The NCAA is investigating this??? This is the same NCAA that just found no evidence in the SCAM Newton case, give me a break. Go Coach Grantham… We love you at Ga..

(October 17th, 2011 12:55 pm):
It seems that it is okay for a coach to yell, scream, grab, point a finger in the face of a student athlete, not even talking about this incident but instead what we witness in almost every college football game, but it is a big deal when a coach confronts another coach? what a bunch of malarky

01HAWK (October 17th, 2011 12:55 pm):
Let's see……I heard many say the best thing to come out of Alabama was I-20 many times. Well……I guess you overlooked your UGA loving VINCE DOOLEY who is from MOBILE, ALABAMA. ……DO you put I-20 ahead of the under achieving DOOLEY. STILL WAITING FOR A CHAMPIONSHIP SINCE 1980……DOOLEY BORN IN ALABAMA……SOUNDS GOOD DOESN’T IT……LOL NEXT

Todd Grantham (October 17th, 2011 12:55 pm):
Bunch of crybabies!! This is SEC football!!!! How bout dem Dawgs!!!

1eyedJack (October 17th, 2011 12:56 pm):
One thing is clear. All the Tech trolls on this blog don’t add up to half a whole moron. Here’s a novel idea, go engineer a solar panel that works so that our “president” don’t continue to throw good money after bad. You should have plenty of time on your hands since he killed the space program. Grantham, yeah, that’s what I’m talkin’ about!

mcdaviddawg (October 17th, 2011 12:56 pm):
More of a program totally without leadership and out of control.

Just a Fan (October 17th, 2011 12:56 pm):
Richt should be calling Franklin out today as he is a liar! Man up Franklin! Coach Grantham did what he had to do to protect his players!

01HAWK (October 17th, 2011 12:57 pm):
DOOLEY……FROM ALABAMA !!!!!!!!!!!!!!

murfdawg (October 17th, 2011 12:57 pm):
Hey 01HAWK, What did Mike Price say to the Vandy coach after the game? I can’t believe a Bama guy/gal would come on a Dawg blog and criticize a coach. Have the names DuBose, Shula and Price been erased from the memory of every Bama fan? Good Grief!

Grantham for Pres. (October 17th, 2011 12:58 pm):
This guy rocks and you know it! He does it his way! Taking up for players is the right thing to do. This is clearly a Vandy issue. I am certain that all he was telling the Vandy coach is that he would gladly slice his throat and drink his blood if he would like. Is that so wrong?
01HAWK (October 17th, 2011 12:59 pm):
TRENT RICHARDSON would love to play this version of the DOGS. HIGHTOWER, UPSHAW, MOSELY, and others would love to get a hold of CROWELL.

(October 17th, 2011 12:59 pm):
Before Saban, you could not hardly find a bama fan!! What a joke of a college program! But….. one heck of a semi pro team. Too bad they don’t actually have classes there?

Diehard (October 17th, 2011 1:00 pm):
Bama Mike- yeah and you’re strangely silent about the classiness of Bama trolls like 01hawk’s comments hanging out on a Georgia blog. Double standard.

01HAWK (October 17th, 2011 1:00 pm):
DOOLEY……Was born in ALABAMA……SAY IT SLOWLY. Do I need to call 911 for heart attacks out there.

TheItalianDawg (October 17th, 2011 1:00 pm):
Grantham did the right thing standing up for his players. Franklin started it

GATA (October 17th, 2011 1:00 pm):
Looking at the video, Grantham wasn’t coming for Franklin. But you do see Franklin gesturing towards #36 and CTG turning his head all the way around to see who he’s talking about. Pretty obvious Franklin was mouthing off about the player and at that point CTG, who was probably charged up like everyone else, was defending his player. I would be embarrassed to have a coach that DID NOT stand up for his players, especially from another team’s coach. No punches were thrown, just 2 grown men in a verbal exchange. How fracking PC have we become that AD’s and SEC heads have to get involved?? Now make the most of the off week and go stomp the gators!

Mr. Krinkle (October 17th, 2011 1:02 pm):
01Hawk: NO ONE CARES MORON!

01HAWK (October 17th, 2011 1:02 pm):
Dooley is a graduate of Auburn University (bachelor’s degree 1954, Master’s in history 1963) LOL

01HAWK (October 17th, 2011 1:03 pm):
“TheItalianDawg: Grantham did the right thing standing up for his players. Franklin started it” Do you need a box of tissues !!!!!!!!!!!!!!

whosaid (October 17th, 2011 1:05 pm):
ALL Georgia fans love Dooley? You better check yourself on that one pal.

92DAWG! (October 17th, 2011 1:05 pm):
Franklin appears to be an arrogant prick, He instigated the whole incident and got scolded by Grantham, Lucky thats all he got, He had no business confronting a Georgia player after the game, Maybe Grantham over reacted but I have no idea what was said, He could have had reason to react as he did in the heat of the moment, But the fact remains that the whole incident would never have happened if Franklin had kept his mouth shut.

Mr. Krinkle (October 17th, 2011 1:05 pm):
@dawg+what ever comes to mind: You are clearly a Tech student sitting in your dorm.

01HAWK (October 17th, 2011 1:06 pm):
Just to think that you owe the coaching and education of VINCE DOOLEY to the GREAT STATE OF ALABAMA. IMPRESSIVE

Happy Warrior (October 17th, 2011 1:06 pm):
To Coach Grantham, I for one support you and your actions. Coach Franklin approached you! not the other. He needs to coach his players, not ours. He should start with his #60.

Floze (October 17th, 2011 1:07 pm):
Move On (.) org

Mr. Krinkle (October 17th, 2011 1:07 pm):
Franklin just admit that you lie!!!!!! You remind me of that BEAST moron that is constantly on UGA blogs! F UF!

12 CRIMINAL CHARGES, 6 wins, 7 LOSSES (October 17th, 2011 1:07 pm):
Todd Grantham is a low life, scum, a$$h*le.

Half Century Dawg Fan (October 17th, 2011 1:08 pm):
Beast, your 12:07 post was a classic, I needed that laugh.

midtown123 (October 17th, 2011 1:08 pm):
@4th and 57 Why lower expectations at all? Should we just sit idly by like GT fans and not care about our team? Your defeatist attitude is an infection that college football doesn’t need.

Diehard (October 17th, 2011 1:09 pm):
01hawk- why are you talking about Auburn, because y’all lost 7 out of last 9? Like y’all blowing that 23 point lead. That was awesome.

dogbone (October 17th, 2011 1:09 pm):
The UGA program has hit a new low……CMR is a good man, but this un-professional and passive coaching attitude equals a mediocre team….. And UGA has some great skilled players, but they never seem to get coached-up as Ala., LSU, AUB. and even Fla……… Coach Bobo’s offense is mostly vanilla as ice cream………WAKE UP UGA FANS…….Would Coaches Saban, Miles and Meyer put up with this MEDIOCRITY?????? NOOOOOOOOOT!! Thats why they have top 10 programs………. We are not even a top 25!!!!!!!!!!! WAKE_UP!!!!!!!!!!

JB (October 17th, 2011 1:10 pm):
The next 6-7 weeks in Athens will determine a lot. Stumble across the finish line and Richt comes back with an extension, and with a new Defensive Coordinator for next year I bet. I really don’t think he and Grantham have ever seen eye to eye. Richt calm, above the fray, always apologizing for winning etc and Grantham a loud mouth lunch pail kinda guy, a screamer etc. If we lose 2 of last three, it won’t matter. They will all be new next year.

GeoffDawg (October 17th, 2011 1:10 pm):
If the events are as described, then kudos to Coach Grantham. Who does Vandy’s coach think he is? Woody Hayes? I hope the commissioner reprimands him for going after another team’s player personally. That’s completely unacceptable.

CPJ (October 17th, 2011 1:10 pm):
Grantham should have punched him in the face.

Nick (October 17th, 2011 1:12 pm):
Grantham is no Erk. I think this is being handled properly and I don’t think anyone should draw conclusions until all the facts are sorted out. I hate it in that it’s an unnecessary distraction for UGA and VU. Also, I’m sick to death of hearing about the choke sign crap. Way too big a deal was made and is being made of it. Even Florida’s kicker didn’t think it was a big deal and it was he that it was directed toward. Fact is, the media is making it and this situation (albeit more of a big deal) more than it really is.

12 CRIMINAL CHARGES, 6 wins, 7 LOSSES (October 17th, 2011 1:12 pm):
“CPJ October 17th, 2011 1:10 pm: Grantham should have BEEN punched in the face” Fixed your typo

Diehard (October 17th, 2011 1:12 pm):
Dogbone- we’re currently ranked 24. What time did you wake up.

JB (October 17th, 2011 1:13 pm):
Diehard………..You are so right….and who can ever forget the Tuberville “Fear the Thumb” era at Auburn, beating Bama like a rented mule year after year. I remember…LOL

Steve (October 17th, 2011 1:13 pm):
The guy is a redneck d-bag. A loose cannon ready to go off. Piece of garbage. Surprised that CMR puts up with it.

01HAWK (October 17th, 2011 1:14 pm):
Last 2 BCS CHAMPS……From the GREAT STATE OF ALABAMA……LIKE DOOLEY IS

T. (October 17th, 2011 1:15 pm):
01HAWK, ALABAMA OWES THE GREAT STATE OF ARKANSAS FOR PAUL BEAR BRYANT.

Winning (October 17th, 2011 1:15 pm):
UF losing!! LMFAO

01HAWK (October 17th, 2011 1:16 pm):
PUNCH, CHOKE……Look at the comments from UGA fans……No wonder the players do the same thing on the field.

Bama Mike (October 17th, 2011 1:16 pm):
Looking ahead. Grantham vs Weiss will be fun to watch. I look for Florida to try and take away the run, pressure Murray and rely on press man coverage from the corners. One thing that has gone unnoticed and not mentioned is the job OL coach Will Friend has done with the offensive line. Look for the line to pick up the zone blitzes and allow Murray enough time to make some plays. Gators will try and establish run to control clock and get game into the 4th quarter close. My prediction: Dogs 24 Gators 9.

zman22 (October 17th, 2011 1:17 pm):
This is hilarious…and it all boils down to one thing. SOV used to stand for Same Old Vandy. Now the Dawgs, their coaches and their fanbase is SOV…Scared Of Vandy. This is truly hilarious. Within 3 years, Vandy will be beating Georgia annually.

01HAWK (October 17th, 2011 1:18 pm):
T. …………“ALABAMA OWES THE GREAT STATE OF ARKANSAS FOR PAUL BEAR BRYANT.” BAMA fans do not have a problem with that. NEXT !!!!!!!!!!!!!!!!

19 out of 21 (October 17th, 2011 1:18 pm):
“Winning October 17th, 2011 1:15 pm: UF losing!! LMFAO” Right.
JB (October 17th, 2011 1:18 pm):
Serious question…………Has anyone ever seen Richt do ANY coaching during the game? Not talk on the head set, but go over and meet with the offense…or defense and get involved? I see other SEC head coaches do it all the time.

01HAWK (October 17th, 2011 1:19 pm):
UGA……Your fans have ANGER MANAGEMENT PROBLEMS……I guess when you see how your team behaves you take the same attitude.

gdawginkalamazoo (October 17th, 2011 1:19 pm):
“I was looking for Coach Richt to explain……” At the 2:20 mark on the video CMR is right in front of him. Yes, our players are taught to slap other players for no reason. It is a Bulldog tradition. It is choreographed to occur in 3rd down and long situations in order to keep the game close and exciting. If you watch the sidelines the coach calling in the defensive plays uses a hand slapping motion to indicate when to do it. Really, the opponent should watch for this and brace for the slap.

rally (October 17th, 2011 1:20 pm):
Funny that Alabama had no problem with Vandy’s tactics just one week before. I guess they have a real coach and some real men playing football. Georgia? A bunch of crying girls who go crazy like lunatics.

Yep (October 17th, 2011 1:20 pm):
Are all tech people dimocrats? It certainly appears that there are quite a few lurkers on this board. They have a coach that threatened to smack anybody that talked about this program and yet they attack a man truly protecting his player? Grow a pair techies and figure out how to beat a BCS school before lurking on another’s board. Grantham did what any other MAN would do in his situation, protect his people regardless of their fault. Much like the dimocrats, techies lie to your face and then turn it up a notch when you are not present. Let’s drop the rhetoric and move on to football, that pointy oblong thing normal players use to advance the game, not the lawyers and babysitter role models used at tech.

T. (October 17th, 2011 1:20 pm):
Georgia fans don’t have a problem with Dooley being from Mobile. He bleeds red and black now even with his son in checkerboard land.

Diehard (October 17th, 2011 1:20 pm):
01hawk- there’s a reason Dooley moved here and Saban owns his second home in Georgia, nobody wants to live in Alabama, well except people in Mississippi.

Whiskey Breath (October 17th, 2011 1:21 pm):
Are any of you Ga people embarrassed over this incident? You may be the only bunch in the history of the SEC that has gotten into it with Vanderbilt. Knowing the AJC they will spin this incident and Vandy will be the bad guy. Play to the home crowd AJC. So you Dawgs wonder where people come up classless? Ga consistently does something every year to be ashamed of. But then again, Ga has no shame. Knowing the GA people, they will claim Richt handled the situation in a Christian manner.

JB (October 17th, 2011 1:22 pm):
Darn Bama Mike, Is Florida that bad? You did see our game with Vandy didn’t you? Florida does have good athletes……My prediction…..Muschamp is a gamble that Foley will lose on. Emotion over coaching and brains will not work. I like Will, but I just don’t think he’s HC material.

GATA (October 17th, 2011 1:22 pm):
Wonder if Grantham and Muschamp will exchange pleasantries after the game? Should be fun to watch.

Franklin's Marketing Ploy (October 17th, 2011 1:23 pm):
Vandy’s head coach Franklin made a big deal about this game b/c he knows that Atlanta (GA) produces tons of football talent. He exploited the game (TV, Media) to try and position Vandy as an up and coming program in the SEC. “We won’t lay down ever to anybody” type of mantra. Vandy played tough; I give them credit. However, the post game news conference gave Franklin the platform he strategically wanted so he could market his program.

H (October 17th, 2011 1:23 pm):
I love it. I love his aggressiveness. Its about time we show some fire on that side of the ball. I’m 100% behind coach G. Sure the game was chippy…but we showed no back down. We need this type of toughness, take no prisoners attitude. Enough of getting hit in the mouth and backing down. We are hitting back this year and I like it. Finally.

stnwall (October 17th, 2011 1:23 pm):
Grantham should have been yelling at his players who actually stood in front of the VU bench taunting Franklin during the game. The same players were taunting after the game and Franklin pointed to them. SO here is the sequence … GA player yaps at Franklin and VU all game … Same player gets 2 personal foul penalties and should have had another .. same player clearly threw a punch and should have been ejected .. Franklin calls him out after the game because he still acts like a fool … then Grantham, who was invisible during the game and had no control of his players wants to shout at Franklin. A fight would have been fun because I say Franklin takes down that Grantham with one punch.

JB (October 17th, 2011 1:23 pm):
Whiskey…………The Vandy coach started it…..The SEC is investigating and you will see. Was Grantham wrong, yes, but he reacted.

T. (October 17th, 2011 1:24 pm):
“Funny that Alabama had no problem with Vandy’s tactics just one week before” No but Alabama had a problem when Stafford threw that TD in overtime last time they were in Tuscaloosa. Bama rednecks started throwing their whiskey bottles.

Abnerish (October 17th, 2011 1:25 pm):
I was born in New Jersey but I bleed red and black. You can’t control where you are born, just where you live as an adult and who you choose to associate with. This topic is pointless. Let’s move on, cause I don’t want to listen to two weeks of this…

cdog (October 17th, 2011 1:26 pm):
ANOTHER STUPID SITUATION. WHEN YOU WIN, WHY NOT JUST GET OFF THE FIELD? YOU DID WHAT YOU CAME TO DO SO WHY PUT YOURSELF AND TEAM IN TROUBLE?

stnwall (October 17th, 2011 1:26 pm):
JB .. The GA players started it not the coaches. I would be embarrassed to be a fan of GA these days.

St.
Richt October 17th, 2011 1:26 pm I’m a Tech fan but the AJC needs to let this one go.. I like a little fire out of my coaching staff. And I don’t know if we’ll ever get the full story about what happened as far as what was actually said on the field. Clearly, both coaches felt like that had a legitimate beef with the other. Again, just let it go AJC.. South GA Dawg October 17th, 2011 1:27 pm I think that kind of fire and intensity from our coaching staff is exactly what we need! Nuff said! flatsdawg1 October 17th, 2011 1:28 pm Florida gets two weeks to get Brantley healthy,and Muschamp has a little heat on him to win this one, and Richt has to deal with all this and somehow get his team to believe it can go down there and win,I’m already stressing about this one the way we are playing. Dawgs will have a hard time winning with a vanilla offense,but the defense is playing well and now with Granthum backing them up they should play well enough. This should be a game for the ages, really. Also, Tenn is hungry and meets with Spurrier without Lattimore…..very winnable for them too. Dawg Tired October 17th, 2011 1:28 pm If, and that is a big if, Franklin started hollering at a UGA player, he should bear the brunt of situation. CTG’s response iwas inappropriate, IMHO, but if Franklin started the whole mess, the primary blame should fall on him. In fact, if he started it, he should fess up. Otherwise, who knows what will happen. Coach Vince "Coke Head" Dooley October 17th, 2011 1:28 pm I suggest that Grantham take some angry management classes. CPJ October 17th, 2011 1:28 pm That Grantham guy is a puss. Buzz Sawya October 17th, 2011 1:28 pm Typical UGA, players and coaches have no class. I hope the rest of the country don’t think all Georgians are like this. T. October 17th, 2011 1:29 pm Paschal Lamb must be a huge UGA fan, I mean he is on the Junkyard Blawg.He must bleed red and black. 
Bet he is a major money donner, probably leading the efforts to get Sanford Stadium’s capacity raised to 120,00. Diehard October 17th, 2011 1:29 pm 01 hawk is gone. Must be time to feed the hogs. old dog October 17th, 2011 1:29 pm Grantham needs to just work on our defense. Vandy is the same team that gained 4 yards rushing against SC, about 70 total yards for the game. And to the fan that thought Vandy had found a quarterback, that Rodgers looked a lot like Murray on Saturday night – Rodgers was 4 for 19? Ihope Murray is better than that. YES SIR!! October 17th, 2011 1:31 pm Hey Richt, please take notes from Grantham! Richt=No Emotion..No Guts..No Intensity..No Nuts…NO NATIONAL CHAMPIONSHIP Kirby Smart October 17th, 2011 1:32 pm This kind of situation just demonstrates why UGA and Vandy are second tier SEC. I’m ashamed to say I was every associated with UGA. Diehard October 17th, 2011 1:33 pm Or the mere mention of Stafford beating Bama in Tusky caused him to soil his overalls. Again. T. October 17th, 2011 1:33 pm Whiskey Breath Franklin shouted at one of Georgia’s players FIRST. Any coach worth a damn will stick up for his players. Sunny October 17th, 2011 1:33 pm Whenever I hear that there was a fight at the end of an SEC game, I know UGA was involved. When this happens repeatedly during your games, the problem is you. Diehard October 17th, 2011 1:33 pm Or the mere mention of Stafford beating Bama in Tusky caused 01 hawk to soil his overalls. Again. War Eagles October 17th, 2011 1:33 pm Another example of lack of class at UGA. Kauai DAWG October 17th, 2011 1:35 pm There was controversy leaving Vandy in 07. Some players danced on the V after a game winning field goal. Richt was angered and we came back and kicked Florida in the teeth and danced in the end zone after Knowshon leaped in. Could this be a preview of good things to come? 
01HAWK October 17th, 2011 1:35 pm BAMA beat VANDY pretty good…………………………………I guess when UGA plays a close game they will have to see who the REAL BULLY IS. DON’T TRY THAT WITH BAMA……………………………………Unless you want a BEAT DOWN. CLASSLESS UGA T. October 17th, 2011 1:36 pm Buzz Sawya Here a suggestion for Tech, drop Georgia from your official name. Nick Saban October 17th, 2011 1:36 pm I hope UGA wins the Eastern Division so that Alabama can beat UGA like a rented mule. A complete lack of sportsmanship. flatsdawg1 October 17th, 2011 1:37 pm Jordan Rodgers is the real deal, I think that is safe to say for the future of vandy. If we can get Mitchell and ogletree back, I believe we are in good shape for Florida. Marlon Brown will play and could be key, it’s time to play the best game of the year Dawgs! If they can put it all together , we should win handily. Calhoundawg October 17th, 2011 1:37 pm @oldfart and surething; After a game against the #5 team in the nation on opening day and a 4-1 record in the SEC, CTG has the #8 over all defense in the nation. Quit trying to sound intelligent by using broad, stupid, and erroneous statements which have no basis in fact. Oh yeah, you can add that UGA is again ranked in the AP Top 25. @sogadog — Nice job!! Beast from the East October 17th, 2011 1:38 pm I’m confused. It was okay for Grantham to make inappropriate hand gestures and say repeatedly “f’ing choke” to an opposing player, but it’s not okay for a Vandy coach to say anything to an opposing player. Which scenario is the coach in the right? Can’t have it both ways. 01HAWK October 17th, 2011 1:38 pm STAFFORD, DOOLEY, 1980………………………………All in the past………………………………..UGA always living in the past. Can not wait til AUBURN goes to the EAST so that we can have a BAMA – AUBURN SEC CHAMPIONSHIP. I bet this will make UGA FANS ANGRY T. October 17th, 2011 1:39 pm 01HAWK Hey Richt has a winning record against Bama and Auburn. In 05 everybody thought LSU was gonna clobber the Dawgs in Atlanta. 
Didn’t work out that way. GT Joe October 17th, 2011 1:39 pm The only ranking that matters: the BCS. Tech? Present. UGA? (crickets) 01HAWK October 17th, 2011 1:40 pm # 8 DEFENSE and you do not play LSU, BAMA, ARKANSAS……………………..YEA, you should feel a little empty about that. GT Joe October 17th, 2011 1:41 pm UGA’s wins come against teams that are 1-13 in conference play. Nice job guys! 01HAWK October 17th, 2011 1:41 pm T…………………………….Hey Richt has a winning record against Bama and Auburn. In 05 everybody thought LSU was gonna clobber the Dawgs in Atlanta. Didn’t work out that way. Once again………………………………………ALWAYS LIVING IN THE PAST 05, STAFFORD, DOOLEY, ETC. j merrell October 17th, 2011 1:42 pm this guy is the worst form of garbage- if that is what some georgia people want represnting our school they can have it. for me he would be fired today- what a waste of skin. we can be better than this .go dogs. T. October 17th, 2011 1:42 pm 01HAWK Hey dummy, This past Saturday’s game is history. The Duder October 17th, 2011 1:43 pm Use this as the opportunity to fire Grantham and bring a real college defensive coordinator. T. October 17th, 2011 1:44 pm The only ranking that matters: the BCS. Tech? Present. shortlived. tide roll October 17th, 2011 1:45 pm Any truth to the rumor that Richt’s been offered the lead role in “Night of the Living Dead”? How would you like to be a Dawg player when the game’s in the balance, and you look to the sidelines and see Coach Flat Line. What a program! Bama Mike October 17th, 2011 1:47 pm JB, I look for signs of improvement. Georgia overcame 11 penalties almost 100 yards and won. Same scenerio last year they do not win. Vandy while lacking in talent plays sound fundamentals and makes you beat them. At the end of year this may be a bigger win that you give yourself credit for. Florida is lacking on point of attack on both sides. Dogs really want it. 
Keep the emotions through each play and Georgia is developing into a solid team that no one is going to want to fool with. The Most Interesting Man in the World October 17th, 2011 1:48 pm Franklin is a tool! The guy runs his mouth, and he hasn’t done a thing in this league. What an idiot! swampjacket October 17th, 2011 1:48 pm Stay Classy Dawg Nation!! Todd Grantham October 17th, 2011 1:50 pm WATCHA GONNA DO FRANKLIN WHEN THE GRAMSTER AND MY NECK VEIN RUN WILD ON YOU! Joey October 17th, 2011 1:51 pm Waaaaa! Grantham cussed and hurt somebody’s feelings! Waaaaa! Harbaugh and Schwartz amost had a fight yesterday! Amazing. The sissy-fication of the US is almost complete. Thank God for the real men, like Grantham, still left . . . Half Century Dawg Fan October 17th, 2011 1:52 pm Nick Saban October 17th, 2011 1:36 pm I hope UGA wins the Eastern Division so that Alabama can beat UGA like a rented mule. A complete lack of sportsmanship. This coming from a fan base that kills trees, you got to laugh at that. clickit October 17th, 2011 1:52 pm grantham is a tool. he looks too much like mrs. doubtfire to be taken seriously. Dream Weaver October 17th, 2011 1:53 pm Coach Grantham may be “Hot Headed” but at least he has a pulse! You idiots who are criticizing him for defending one of his players need to get a life. I bet if he was defending your child you’d be thankful he loves his players that much! Hipocrits! AltamahaDawg October 17th, 2011 1:53 pm I believe that the SEC ruling will be something to the effect: Coach Franklin appears to be better shape, faster, and while likely to land a few punches, The UGA assistant would have gone psycho and beat the crap out of him. T. October 17th, 2011 1:54 pm he Duder………………. Use this as the opportunity to fire Grantham and bring a real college defensive coordinator. Use this as the opportunity to promote Grantham and make him head coach and return CMR to QB coach or head of the scout team. 
TampaDawg October 17th, 2011 1:55 pm 87DAWG — This should have been a Coach talking to Coach situation. That’s it. If I trust a coach to send my son to play for him and he let’s some other POS come up and start yelling at him like that and does nothing, I will support my son the first time he mentions transfer. Coach should talk to coach, if they see eye to eye on right versus wrong, then that players coach needs to deal with that player. Never should an opposing coach go after the kid. This is just basic leadership but too many of you who know NOTHING about leadership would have rather had the Vandy coach just say whatever he wants to Williams. Wrong answer, it will always be wrong, and idiots who side with Vandy coach will always be wrong. Have a nice day. tide roll October 17th, 2011 1:55 pm Grantham needs to own up to the fact that his emotional outbursts are as a result of trying to coordinate an SEC defense while two CHILDREN are in charge of the offense. Bobo screws it up, and Richt just stands there! Who wouldn’t go nuts! Joey October 17th, 2011 1:55 pm That’s okay GTJoe, at least our losses came to ranked teams. Your loss came from . . . cough, cough . . . virginia. (crickets) Nashville Cat October 17th, 2011 1:55 pm Someone needs to smack his fat mouth with a helmet. Grantham is all mouth. dawginGulfshores October 17th, 2011 1:55 pm Would everyone get over last year’s choke sign! It is done all the time in sports, stupid nuts, it is not like you are going to put your hands on a players throat and physically choke him dumb arses!! It is a sign meaning to “FREEZE” up and miss a play…not choke the life out of someone! Would all the non-sport, panty waisted wimps go read a book! this is tough, war, NFL training grounds and wimps should stay home! Hell we did the choke sign as kids to each other and it was funny!!! homers… TampaDawg October 17th, 2011 1:56 pm Half Century Dawg Fan, Nick Saban on here is a tech fan in disguise. 
McDawg October 17th, 2011 1:56 pm another BFD although i love how the players put their helments back on Half Century Dawg Fan October 17th, 2011 1:56 pm Beast from the East October 17th, 2011 1:38 pm In my opinion what he did in last years Florida game was wrong. DawginLex October 17th, 2011 1:58 pm Hey #1 Hawk dumba$$, master of the obvious. Pat Dye graduated from the University of Georgia It is well documented that Auburn and Georgia exchanged coaches. Your comment about dooley just shows how stupid you are What should I expect from a toilet paper toting retard….. October 17th, 2011 1:58 pm Hey tide roll…you and 01Hawk need to get together..you’re both made for each other..neither one of you knows sh-t about football and loves to hear himself talk….I’ll bet both of you are real ladies men…. both hung like a field mouse…..grow up! dawginGulfshores October 17th, 2011 1:58 pm If you look at the film, Franklin was the one leaning in toward Grantham, Franklin lost his cool and Grantham didn’t back down…a head coach should be ashamed and embarrassed for acting the way he did…why is he calling Richt so much if he wasn’t out of line!! Franklin lost his cool, so man up and apologize…you are the face of Vandy… McDawg October 17th, 2011 1:59 pm how come everytime i hear the pharse “tide roll” i think of toilet paper October 17th, 2011 1:59 pm Gotta give GT Joe credit. He’s got guts. UGA loses to two ranked teams, GT loses to .. hmm hmm, VIRGINIA, and he still runs his trap about the BCS. So let’s see, when Clemson disposes of GT then quite possibly VTech, how’s the BCS look for the bugs then? DawginLex October 17th, 2011 2:00 pm Bama mike My apologies to you. One of my best friends went to Alabama and your school is capable of producing classy folks unlike tide roll and the hawk dude. JAMES FRANKLIN October 17th, 2011 2:01 pm would have jacked Grantham into next week,NO doubt!!! The guy is in much better shape than Todd and his twinkie eatin’ self. 
Grantham is the type of guy that one day one of his own players will clock him OR someone will.UGA better watch this guy he seems kinda out of it. Joey October 17th, 2011 2:01 pm Hey Nashville Cat, set your GPS for the Butts-Mahre building in Athens. Bet you can go to Grantham’s office without an appointment. Go in and “smack his fat mouth.” Thanks. J.T.Keene October 17th, 2011 2:02 pm Why is everyone so tight? We need to lighten up. Grantham loves his players, Richt loves Cro-wellll and Cro- welll loves being the #1 tailback at Georgia. Had Cro -welll gone to Alabama he “might” have been the #4 tailback, certainly not 1, 2 or 3 as we can all see they are pretty good and don’t seem to mind playing a little hurt. October 17th, 2011 2:02 pm I think there was less drama during the actual event as they has been in the fan discussion following. NOBODY is ashamed or embarrased about it. But NOBODY wants our coaches to do that on TV. Everybody understand why he did, likes it, but understand the reality of how representative have to act in public. Folks need to stop trying to exagerating it one way or the other. DawginLex October 17th, 2011 2:02 pm Joey Yeah, just let him walk in and say “I’m here to defend James Franklin” Would be a short meeting……… T. October 17th, 2011 2:03 pm Right on baby! Good response. All these people calling for Grantham’s job are idiots.Franklin had NO BUSINESS yelling at one of our players. The jerk is another Lane Kiffin. DawginLex October 17th, 2011 2:04 pm Who is the one lacking class? Grantham says he defended his players and went a bit too far Franklin says he regrets nothing even though he started the whole thing. October 17th, 2011 2:05 pm By the way GT Joe, how did that 10 spot slide feel in the BCS? Obviously, they don’t love your jackets nearly as much as you do. Joey October 17th, 2011 2:05 pm lawrenceville guy (Jeez what a post name), you’re late for your manicure . . . 
boots October 17th, 2011 2:05 pm Vandy’s Franklin needs to shut up and act like a head coach. His team was chop blocking the entire game and he has the nerve to yell at our player? Whatever. Bring on the Gators! Mr. Dawg October 17th, 2011 2:06 pm I think… at the end of the day, when you consider what was said, and how things were handled, that at the end of the day…this is a freaking joke and being blown out of proportion and we should just drop it and move on….at the end of the day. T. October 17th, 2011 2:06 pm Had Cro -welll gone to Alabama he “might” have been the #4 tailback Wonder why Alabama wanted him so bad back in Feb? October 17th, 2011 2:07 pm T. “The jerk is another Lane Kiffin.” .. wow dude, that’s harsh. Accurate, but harsh. Dawghater October 17th, 2011 2:07 pm Grantham and UGA deserve each other! He fits UGA program perfectly. In jacksonville you will have two team playing for the right to get you behinds kicked by either Alabama or LSU! mrfixit October 17th, 2011 2:09 pm @doodoobailey – I don’t think you will ever have to worry avbout Vandy writing a check they can’t cash. I think that’s more likely to be a Georgia problem. October 17th, 2011 2:09 pm How do you think your team would fair in the SECCG game, dawghater? Joey October 17th, 2011 2:10 pm AD, If a head coach is yelling at a UGA player, I WANT any UGA assistant to get in the coach’s face about it. TV or not. I’m proud of Todd Grantham. T. October 17th, 2011 2:11 pm JAMES FRANKLIN would have jacked Grantham into next week,NO doubt!!! Dream on Baby Grantham would have been on his a$$like on ugly on an ape. Live From New York October 17th, 2011 2:14 pm This whole episode could have been avoided if Coach Grantham had simply followed the Golden Rule. When he saw Coach Franklin’s behavior toward one of the Dawgs, Grantham could have marched over with a smile (whether genuine or forced) on his face, shaken hands with Franklin and whispered in Franklin’s ear, “Let’s discuss this matter privately. 
I don’t appreciate you taking out your frustrations on my players. We can invite Coach Richt and any other of our coaches you’d like to join us. But leave our players out of it. You and I are the adults on the field.” That would be the mark of a gentleman. Instead, he acted as a hothead spewing profanities. He had the opportunity to defuse the situation, but caused problems for himself and embarrassment for the university he represents. His apology today was halfhearted, at best: “First of all, I love my players and appreciate their hard work and investment in our program. I feel a responsibility and loyalty to protect and stand up for them. However, I feel it’s important to educate them in all areas of life. While my intentions were genuine, I feel it was unfortunate that things escalated to a confrontation. However, I’ll use it as a learning experience for myself as well as my players so that we all become better men.” Yes, “things escalated to a confrontation” because Grantham let emotions get the better of him. He should have admitted that explicitly. It takes two to tango and two to engage in a confrontation. (Of course, I realize that Grantham’s “apology” was actually written by UGA staff lawyers. But still.) UGA 92 October 17th, 2011 2:15 pm If Franklin is yelling at one of our players, then Grantham had every right to get in is face. JB October 17th, 2011 2:15 pm How would you like to be a FSU fan about now? Wow, what a disappointment. Just thought I’d mix it up FLA DAWG October 17th, 2011 2:15 pm I can’t belive I am defending a coach who let Vandy hang 28 on us but I will. Grantham is a D Coordinator – Cry Havoc And Let Loose The Dawgs Of War – is the type of attitude a D Coordinator every college team needs. I only wish Richt had half as much fire in the belly as Grantham. JB October 17th, 2011 2:16 pm All this happened when Mark was trying to find coach Franklin to apologize for winning. T. 
October 17th, 2011 2:16 pm The jerk is another Lane Kiffin.” Mr.Helicopter flying in to HS football games, got that from Kiffin.What a jerk. Terry October 17th, 2011 2:16 pm Dawghater, Any thing can happen in a game, Alabama loss to South Carolina, Auburn, and LSU last year, even thou they were suppose to win the NC last year. At least if UGA wins out in the SEC East and South Carolina loses, they will get a chance to play for the SEC Championship and anything can happen. StuDawg October 17th, 2011 2:16 pm Go Grantham! GATA! GO DAWGS!!!!!!!!!!!!!!!!! CABO October 17th, 2011 2:17 pm Todd Grantham is Chuck Norris Approved! Shannon in Athens October 17th, 2011 2:18 pm I’m more mad at CMR for saying Sunday that he didn’t know whos story was the truth. Way to stick up for your players CMR. gt4ever October 17th, 2011 2:18 pm What a complete joke some of these posts that validate TG as protecting his players… Give me a break, this is why the UGA football program is in the toilet! This joke of a coach should be FIRED, and probably would be at any other university… It’s NO wonder that this program is in shambles, and the laughing stock of all college football! T. October 17th, 2011 2:20 pm Live From New York So I guess your saying Franklin bears no responsiblity in this matter. GTPHISH October 17th, 2011 2:20 pm Grantham is the biggest hothead I have seen in quite some time; 2nd time in 2 seasons he has shown his *ss in public, and the way he emasculates his head coach, he ought to be embarrased. CTG=Sgt. Hulka the analyzer October 17th, 2011 2:21 pm Grantham reacts in anger because a team that can’t score put 4 td’s on the board – 2 on long drives late in the game. Dawgs never sustained a drive that resulted in a touchdown. I’d be upset after that, too. SenoiaDawgs October 17th, 2011 2:21 pm So Franklin basically say’s nothing in his news conference which is the dodge the question because I F’D up and am an emotional guy. 
CTG says I stood up for my player and it was clear from the moment it happened if you watched this is what he was doing. HMMMMM… way to go CMR and McGarity for not publicly standing up for you own. T. October 17th, 2011 2:22 pm Shannon in Athens Amen, Richt should have said,This is what Grantham told me,far as I’m concerned, it’s my story and I’M STICKING TO IT. nolabigdawg October 17th, 2011 2:22 pm Aside from this unimportant sideshow, can anyone tell me how the defense actually played? I didn’t get to see the game. Is the team improving week-to-week? Please only serious replies. TidePride October 17th, 2011 2:23 pm Grantham is an ignorant hot head and worse, complete douche. The lack of class displayed was a great lesson to all young people watching the game. Exactly what an uneducated and ill bred society needs to see on a weekly basis…..but should we expect anything less from the classless JAWGA Bulldawgs? Spike 80DF October 17th, 2011 2:23 pm eff Franklin, they’re still the Commodoremats of the East. time to move on to Florida week! gdawginkalamazoo October 17th, 2011 2:23 pm “The Bulldogs were flagged three times for personal fouls in the game and, in one instance, it appears Georgia noseguard Kwame Geathers tries to punch a Vanderbilt player” Uh, that was an offsetting personal foul (after the play was over), the Vandy player got flagged too. Let’s call it what it really was. LHarding Dawg October 17th, 2011 2:24 pm Gee, gt4ever, I did’nt know our program was in shambles. Thanks for the info. How many times has the program in Atlanta beat us the last 10 years? Just asking! Spike 80DF October 17th, 2011 2:24 pm riiiight tide pride, coming from a team that chants “we just beat the hell out of you” after a win?
real classy… Whiskey Breath October 17th, 2011 2:24 pm The Vandy coached started it? No chance the punch by Gaithers and other cheap shots by the Dawgs had anything to do with starting it? Have you ever considered taking the high road and just packing up and going home? T. October 17th, 2011 2:24 pm GTPHISH….. 2nd time in 2 seasons he has shown his *ss in public It Happens.It’s mild compared to Bobby Knight or Woody Hayes. notsure October 17th, 2011 2:25 pm who I want to see get beat on first,Grantham or Cammy Cam? But I would pay to see some of their teeth flying across a field or sideline. SenoiaDawgs October 17th, 2011 2:25 pm Hey TidePride… I am sure Saban would have cowered in a corner when an opposing coach shouts at one of his players after an emotional game!!!!! Great coaches stand up for their players no matter what and Saban is certainly one of those. ramblingbuzz October 17th, 2011 2:26 pm Hey Todd, punch em in the mouth. That’s what CPJ recommends. T. October 17th, 2011 2:26 pm No chance the punch by Gaithers and other cheap shots by the Dawgs had anything to do with starting it? Oh, the Franklin thing was the only thing Vandy did all night? So now we know. DawginLex October 17th, 2011 2:27 pm whiskey and I’m sure you ignored the cheap shots and chop blocks by the vandy players all night long We get it. you hate Georgia 2 weeks til you get to wear all of your gator stuff Gwinnett Fred October 17th, 2011 2:27 pm To be quite honest, I wish Grantham would spend some of that energy yelling at his players. Consider: Vandy had scored 3 points total in their last 2 games, then scored 28 Vandy averaged 115 rush yards a game before running for 200 v. UGA Vandy averaged 244 total yards a game before totaling 349 v. UGA I’m more concerned about results than theatrics and distractions. Play like that against Florida and we lose! Big Albany Dawg October 17th, 2011 2:28 pm Finally, I see we have at least one person associated with UGA that isn’t “SOFT”. 
I extend my thanks, TG! Chris Welton October 17th, 2011 2:28 pm Gary – that reminds you of Erk? Are you referring to Coach Russell. Grantham and Coach Russell do not reside in the same universe. Coach Russell was a winner, a leader of men with an impeccable record of intense but respectful behavior, and one of the greatest role models who ever lived. I played for Coach Russell. You clearly never met him. Grantham is no Coach Russell. JB October 17th, 2011 2:28 pm It almost that time of year when Tech fans and Carolina fans head back into hiding after the cupcakes are out of the way and the losses start.Tech, you’ve got the Duke win to look forward to. gdawginkalamazoo October 17th, 2011 2:28 pm Oh yeah and Muschamp is the classiest DC to ever exist, what an angel. http://www.youtube.com/watch?v=JZSy2JH8iu4 Oh yeah that’s called being a fiery coach. He can do it but Grantham can’t when defending his player? T. October 17th, 2011 2:29 pm notsure Why don’t you put your white pearly’s in front of Grantham and talk your trash? notsure would become damnsure. gdawginkalamazoo October 17th, 2011 2:32 pm “No chance the punch by Gaithers and other cheap shots by the Dawgs had anything to do with starting it?” Whiskey, yeah right Geathers started, my arse. The Vandy candy put a nasty cheap chop block on him. How do you think he ended up under Geathers. The refs 10 out of 10 times automatically throw a player out of the game for throwing a punch like that. This time they thought it was justified due to the career ending type of blocking that #60 has been taught. TampaDawg October 17th, 2011 2:34 pm gt4ever “Give me a break, this is why the UGA football program is in the toilet! ” ———– Seriously, your team runs a triple option, been on probation (still are) twice in 10 years, has a QB go 2 for 8 for an entire game, and still can’t beat Virginia. If our program is in the toilet, GT must be setting up residence in the sewer. 
Joey October 17th, 2011 2:34 pm Will doofuses that keep harping on the D, deep in mind that once again the STs were responsible for 14 of those points? Thanks in advance. Joey October 17th, 2011 2:35 pm …keep in mind… chattadawg October 17th, 2011 2:35 pm IMHO, Franklin is in idiot, but that is no excuse for Grantham losing it. He totally lost all composure. And I understand the emotions etc. But, two wrongs don’t make a right. Mad Dog One October 17th, 2011 2:37 pm Let me git this right CTG choked a fla kicker? Beat the vandy couch frankenberry to the ground? To hear some of these d bags on this blog he did. I think he did what anyone with any ba!!s would have done so STFU. McG. Grow a pair. As always GO DOGS & GATA 0ld timer October 17th, 2011 2:37 pm Way\to go richt!!Fire Grantham and hire you another little puppy with it’s tail between it’s legs to go with you and bobo.YOU MAKE ME SICK mr. dOORIGHT. THE DAWGS D RULES !!!!!!!!!!!!!!! October 17th, 2011 2:40 pm chop blocking vandy, the sec should look at the film,vandy desparate for a big win reverts to chop blocking and cheating, you can end players careers,if vandy can’t compete fairly in the sec go to the sun belt or a smaller conference.vandys coach should be put on the carpet at the sec office .
St Simons - we're on Island time October 17th, 2011 2:42 pm
I’ll pay Todd’s fine, if he’ll knock him out next time. And this is gonna help me make up my mind on CMR & Greg, too. They better stand behind a staff member protecting a student at my university.

rally October 17th, 2011 2:42 pm
Coach Franklin on Nashville radio right now: http://www.1045thezone.com/default.asp

Evansdawg October 17th, 2011 2:42 pm
Grantham is the exact emotional OPPOSITE of Coach Richt. There now exist a pleasant balance between harmony and ‘going ballistic.’ I love it! Now, people can either complain that Richt is too icy, or Grantham is too off the chain. Whats not to love???? Go Dawgs!!!!!

really? October 17th, 2011 2:42 pm
@Mad Dog One – some one with balls would have backed their mouth up.

Dawg Gone October 17th, 2011 2:43 pm
@gt4ever admit it Bug you’d sell he soul of your first born for a DC like Todd…you couldn’t stop Virginia when you had to….

bulldogsteve October 17th, 2011 2:43 pm
I get a chuckle out of the Auburn and Alabama fans coming on here. Auburn, who turns fire hoses on UGA fans, and Alabama fans who kills trees and then brags about it on the radio! And then the Tech fans, who’s coach tells fans to punch UGA fans in the face.

Walker October 17th, 2011 2:45 pm
Proud of Georgia Coach G. It is about time someone showed some sign of standing up for our players. I like the fire in his belly. We need a new special teams coach. What has happened to our all-american kickers? I thought Blair was Mr. Automatic. It is all out in front of the Dawgs, do they have the fire and compassion to take it? This will be the test for Coach Richt to see if he can get these guys to play for him and win the East. We are going to have to focus, play smart and show some attitude to beat Fla. and Auburn. Go Dawgs let’s prove to all the doubters you can play and bring some respect back to Georgia.

TampaDawg October 17th, 2011 2:46 pm
BuckDawg, yeah, we know what it looks like. Do you? Do you not understand that’s what was causing defensive lineman to end up on laying on top of vandy lineman? That doesn’t happen during “great” blocking.

DawgnLex October 17th, 2011 2:46 pm
Georgia runs the table. Florida, Auburn, Tech. Along the way “Crowell” will to continue to act like a spoiled PUNK that could not tote Hershel Walkers jock strap in a wheel barrow. Grantham will graduate to domestic violence and Kevin Butler will CONTINUE to be the greatest college kicker ever! Oh the smell of football is in the air

kande87 October 17th, 2011 2:48 pm
While I do not condone the altercation, I think it is foolish to jump to the conclusion that the incident was Todd Grantham’s fault. You can clearly see in video that before the incident, Vandy’s coach was pointing at and shouting at Shawn Williams while Grantham was involved in post game handshakes. Grantham was defending his player. A coach should not single out a player from an opposing team and yell at him. That said, I believe that Grantham should have walked away instead of adding fuel to the fire. However, I believe that the incident itself was the fault of the Vandy head coach.

Rick October 17th, 2011 2:49 pm
That little bast… Franklin needs an arse whipping that is the only thing that seems obvious to me. You don’t walk around the field pointing at players after the game. TG should have busted his arase!

ddb October 17th, 2011 2:49 pm
Grantham is what Georgia needs. We need junkyard dawgs not poodles. Muschamp and Gators are not going to back down. Georgia needs tenacity.

Flat Tire on I-95 in Jacksonville October 17th, 2011 2:50 pm
Grantham is just frustrated. Frustrated he has to deal with conservative play calling on offense that can never score when they get inside the 30 yard line. Grantham I understand, Im frustrated and baffled too

Nick Fairly October 17th, 2011 2:50 pm
Shawty wanna be a GANGSTA.
DawginLex October 17th, 2011 2:51 pm
Nice try at 2:46 pm You are so stupid, you can’t even spell my blog handle correctly!!!

Mad Dog One October 17th, 2011 2:53 pm
@ really How stupid are you if he had touched the vandy coach it would have been A/B. You can only take any thing so far but when that rent-a-cop put his hands on CTG that was one step to far.

no class October 17th, 2011 2:53 pm
Richt hasn’t had control of the players for a long time – now he can’t control his coaches.

The Devil Himself October 17th, 2011 2:53 pm
I agree with DirtyDawg1979. If you don’t like the way I’m driving stay off the sidewalk. And if you don’t want to get punched in the mouth, don’t chop block on our guys. End of story.

Bob Kelly October 17th, 2011 2:54 pm
I have been a season ticket holder for many years and the actions of this coach over the last two years have been unacceptable. And quite honestly, his apologies have proved to only be lip service. It is never acceptable for two coaches to exchange heated words or blows during or after a game in youth, high school, college or in the pro’s. What kind of example is this setting for the teams? This is two strikes for UGA coach Todd Grantham. Its time for Grantham to go. He has shown that he cannot conduct himself is a manner that is acceptable to the proud tradition of the Athletic department of the University of Georgia.

wild bill October 17th, 2011 2:55 pm
As if the Ga. Tech dorks don’t have enough problems of their own? Getting whacked by a Virginia team that beat “mighty” Idaho by one point in overtime, and who also won a “great victory” over powerful William & Mary. A Tech team who violates the rules and has their mamby-pamby ACC championship stripped from them…….for cheating! A Tech team that cannot fill up their dinky little stadium except when Georgia and Clemson come to town and take it over. A Tech team whose ‘cheering’ section is a bunch of immature little geeks in yellow wigs that congregate in the endzone, giggle and make obscene gestures and shout four-letter words at opponents. Tech, where students have to carry an AK-47 around campus to prevent being mugged and robbed. Tech, where a good looking girl is obviously a visitor…….and these Tech bloggers, who get their orgasms from hanging out on the UGA websites, have the gall to deride the Bulldawgs? What a bunch of pathetic little pinheads! And don’t forget, Techies, you haven’t beaten UGA since about the time Eisenhower was a corporal…..

Todd G. October 17th, 2011 2:55 pm
Here is what I said: “Da whoopin’ you fixin’ ta get…gone be SHAMEFUL, Daddy.”~

LawDawg October 17th, 2011 2:55 pm
So a Vandy coach gets in a UGA player’s face and Grantham is supposed to…what? Let that happen? Ignore it? His reaction may not have been perfect, but his was the retaliation, which is what always gets you in trouble, but does not mean you are in the wrong.

Mike Bobo's Press Secretary October 17th, 2011 2:55 pm
General Grantham is planning a coup de tat. Someone must warn President Richt.

DawginLex October 17th, 2011 2:55 pm
bob Kelly What color purse did you take to the game?

LawDawg October 17th, 2011 2:56 pm
Bob Kelly – no one believes you are actually a UGA fan. Go back to the Tech blog.

gt4ever October 17th, 2011 2:58 pm
Thanks Chris, maybe some of the in breds on this blog will learn the difference between a man with passion and integrity and a man who has an anger management problem, and NO class.

really? October 17th, 2011 3:00 pm
@Rick – if Grantham was so sure he could of bust Franklin’s arse, why didn’t he try it? Maybe because he’s a loud mouth, just like his mangy defense???? huh?

Evansdawg October 17th, 2011 3:00 pm
Mr. Kelly, our players get motivation from a coach that sticks up for them. Franklin was verbally attacking Shawn Williams, and that is not the right place to voice grievances. Grantham simply stood up for his player. Its quite obvious that Franklin says something to Grantham that adds gas to the fire. Franklin isn’t innocent and neither is Grantham.

Deandog October 17th, 2011 3:02 pm
So basicly according to Vandy’s coach……it’s all Grantham’s fault! So Franklin wants the privilege to get down and pick shi* with the chickens on the field and then take the moral high ground in press conferences. The rest of georgia’s coaches need to get fired up or just fired!!!!!!!!!1

Flat Tire on I-95 in Jacksonville October 17th, 2011 3:06 pm
Vandy is a joke program. Who cares. Lets move on; the rest of the SEC moved on from Vandy decades ago

Foots McToe October 17th, 2011 3:08 pm
What do you expect from a bunch of ill raised, illiterate, redneck trash.

lovewthemdawgs October 17th, 2011 3:10 pm
“I’m not a guy who really has a whole lot of regrets. I’m pretty calculated and pretty well thought out for the most part with the things I do. I am an emotional guy, as you guys know.” So Vandy’s coach expressed no regret for his part in this, yet Grhantham did. Doesn’t say much for Vandy’s head coach in my opinion.

TampaDawg October 17th, 2011 3:11 pm
gt4ever, so you want to crack on Georgia fans and Grantham for their actions, but then you start in on the “in breds” crap. That stuff flying out the window is your credibility. Chase it if you want, but don’t send your slow defense to get it for you.

AltamahaDawg October 17th, 2011 3:12 pm
gt4ever, would you admit that a lot of tech fans in our blog do not demonstrate that they know the difference?

gdawginkalamazoo October 17th, 2011 3:13 pm
Franklin the Turtle is a liar. Plain and simple. He went into his shell when coach Richt walked right in front of him after the game when Franklin was supposedly looking for him. Right in front of him. He could have whispered his name and Richt would have heard him he was that close. Watch the film people. If he lied about that he is lying about the rest of it so taking his statement is like Bill Clinton giving a deposition.

AltamahaDawg October 17th, 2011 3:14 pm
Enter your comments here

Jay in Thomasville October 17th, 2011 3:14 pm
As a football fan who was pumped up watching that game, screaming at the TV… I loved it. Don’t take it Grantham!!! One thing this football team needs is a little passion to get out there a kick some butt with no excuses…Go Dawgs! Sic Em

gdawginkalamazoo October 17th, 2011 3:15 pm
At least Grantham isn’t lying about what he was doing.

Dawg Gone October 17th, 2011 3:15 pm
@gt4ever you mean a HC that would tell his fan base to punch opposing fans in the mouth…right you know class…

TampaDawg October 17th, 2011 3:15 pm
“Foots McToe October 17th, 2011 3:08 pm What do you expect from a bunch of ill raised, illiterate, redneck trash.” This is what happens when your mom leaves you at home alone at the age of 8 with a keyboard, pc, mouse, monitor and internet.

AltamahaDawg October 17th, 2011 3:17 pm
I am suddenly overcome with the urge to Dance. Perhaps in a very well defined and properly indicated space.

ARdawg October 17th, 2011 3:18 pm
I suppose in the Big 10/12 a coach can rant and rave all he wants with no consequences. In the SEC when you rave you might get the recipient on that @zz. There is a lesson here for Franklin

gdawginkalamazoo October 17th, 2011 3:18 pm
AltaDawg, try the endzone in two weeks.

gdawginkalamazoo October 17th, 2011 3:19 pm
AltaDawg, with 85 of your closest friends.

Hey 4th and 57.... October 17th, 2011 3:19 pm
Irrelevant for quite some time? Really 4th and 57? How so? we have two titles in last 10 years with a confrence of 12 teams. hmmmmm those numbers dont add up with your dumb statement. Maybe if you had said last two years you would be correct. outside of that we went to Sugar and captial one bowl, doubt we were invited to those bowls for being irrelevant. Idiot.
dawg3 October 17th, 2011 3:19 pm
Just more of the sad state of affairs with this football program…Graham needs to be fired and thats two times now–one more and he’s out.. We don’t need these problems we need to be getting ready for UF..

ARdawg October 17th, 2011 3:20 pm
Oh the posers. Nerds dressed up like complaining Dawg fans. Quite sad, really.

gt4ever October 17th, 2011 3:21 pm
AltamahaDawg, Yes it goes both ways….. TampaDawg, If the shoe fits…. Your defense is NO better than ours… We will find out shortly…

St Simons - we're on Island time October 17th, 2011 3:22 pm
eat some shrimp & grits have fun, stay off my lawn. Speed limit’s 25. Grrr

Hey really... October 17th, 2011 3:22 pm
hey …”really?” that mangy defense is 26th in the nation. Not the best, but not mangy.

Tom October 17th, 2011 3:22 pm
TG’s an as sclown. Same song, next verse. Where’s the story?

Evansdawg October 17th, 2011 3:23 pm
Uhhh, dawg3? Its Grantham. Go back to your Dungeons and Dragons tournament with the rest of the techies. You think its okay for the head coach of a school to verbally chastize a player for the opposition? You MUST be a girl.

AltamahaDawg October 17th, 2011 3:24 pm
Muschamp would probably smile off camera about it. Not write a book about it.

TampaDawg October 17th, 2011 3:26 pm
dawg3, for you and ALL the other FAKE dawg fans on here, first off, it’s GRANTHAM, not GRAHAM. Second, if being a Tech fan is that embarrassing, go sulk on the tech blogs with everyone on there talking about firing Al Groh and a monumental “what the hell” regarding a QB going 2 for 8 with 2 pics.

DawginLex October 17th, 2011 3:26 pm
We really ned IGNORE buttons on these blogs. We REALLY need to be able to post under one e-mail/IP address with a blog monitor working full time. Without it, these blogs are bcoming a big joke and not worth anyone’s time

CPJ October 17th, 2011 3:27 pm
Good work Todd. We’ll have a few dogs at the Varsity after I run over your pitiful Defense in November.
Big Mike Dawg October 17th, 2011 3:28 pm
Fire Bobo… He’s the leader of the shortbus, retard… Grantham should take out all that anger on his butt after all thoses sorry offensive plays he called

Move on October 17th, 2011 3:28 pm
Coach TG did nothing wrong! What is the difference in what he did and any coach yelling at an ump or official. Nothing. its part of the game to get mad and voice your feelings. it builds rivalrys. move on

Mike Bobo's Press Secretary October 17th, 2011 3:28 pm
This year’s game in Jaxonville is gonna be a jorts-out game. Every dawg fan wear jean shorts. Show your spirit. Go Dawgs!!

bigcalidawg October 17th, 2011 3:29 pm
CTG……….tell ‘em I said to Kiss BigCaliDawgs Grits! Now GATA! This ain’t no pampered chef party. This is football. They were cut blockin’ us all night. That kid got upset and punched back (which he should not have done), as he was upset at being cut and crackbacked all night.

TampaDawg October 17th, 2011 3:30 pm
gt4ever “If the shoe fits…. Your defense is NO better than ours… We will find out shortly…” Time for a drug test for ya buddy. Yes, UGA’s defense is LIGHT YEARS better than the tech sieve. You’ve got to be kidding right? I mean, on UGA’s worst day, they are still than tech on D.

Harbaugh and Schwartz October 17th, 2011 3:32 pm
Grantham’s our kind of guy, not a dress wearing pansy like the critics on here.

TampaDawg October 17th, 2011 3:33 pm
“Mike Bobo’s Press Secretary October 17th, 2011 3:28 pm This year’s game in Jaxonville is gonna be a jorts-out game. Every dawg fan wear jean shorts. Show your spirit. Go Dawgs!!” We don’t want to blend in with the Gator fans so that’s not going to happen.

bigcalidawg October 17th, 2011 3:33 pm
Gt4ever, By the time we face y’all, our best defensive player will have returned and our defensive backfield will be back to normal. Y’all are gonna get a full-sized dose of some Tree. Can’t Wait!

CPJ October 17th, 2011 3:33 pm
Tampadawg….check what Vandy did to your great D.
Furman Bishop October 17th, 2011 3:35 pm
No blows exchanged. Just two guys mouthing at each other. Big deal. At least Grantham’s got some blood in his veins. That’s more then you can say for Richt. Let the matter drop.

InstaZen25 October 17th, 2011 3:35 pm
VolsRule, I second that and fear the Commodore as well. Go Vols! However, I don’t thing Grantham meets the definition of ‘professional’ at all. He behaves like one of those enraged little league/pee wee league parents. Hope Georgia fans like 15 yrd personal foul penalties with this jerk on the sideline. Chas Henry just laughed at his stupidity.

Mike G October 17th, 2011 3:35 pm
I live in Nashville and I am not a Vandy or Georgia fan, but I can tell you that Franklin is a black version of Lane Kiffin. He is pure trash.

AltamahaDawg October 17th, 2011 3:36 pm
ouch

Move on October 17th, 2011 3:37 pm
What did Vandy do to our great D? they scored a TD on a kick return. They basically scored after a fake punt which was special teams. they really only scored 14 points on our D. So i dont think 14 points is anything to write home to mama about.

gt4ever October 17th, 2011 3:37 pm
yeah yeah yeah,,,, Well for all you Defensive guru’s….. How many yards rushing do we have to get before you can say that your defense stopped our High School offense…. Let’s put it in writing today!

GBR October 17th, 2011 3:39 pm
The Vandy coach sounded like he had a chip on his shoulder. His comments could also serve to goad his players to playing close to what is allowed by the rules because they have been the whipping boys for so long. As for Geathers punching the guy while on the ground, the punch could have been the result of chop block or even several of chop blocks.

Move on October 17th, 2011 3:40 pm
gt4ever, thats big talk after a spanking by Virgin-ia. lol

TampaDawg October 17th, 2011 3:42 pm
“CPJ October 17th, 2011 3:33 pm Tampadawg….check what Vandy did to your great D.” Yeah, did you see what your great offense did to UVA? And your defense? That scoreboard is speaking volumes right now.

nwfladawg fan October 17th, 2011 3:44 pm
i did think both coaches need to go to the corner and come out fighting….but game was like that…. beat vandy dawgs ….and we did…gators next…and todd will go after will…musrat

Flat Tire on I-95 in Jacksonville October 17th, 2011 3:47 pm
Well Grantham got over excited again. Richt will put him in timeout so our defense will revert back to crap for a few games. Hope Bobo can score inside the 30 for once, we will need it since we have to deal a milk toast D for several games

Dawg Gone October 17th, 2011 3:47 pm
@CPJ news flash Bug Vandy is better than your Bug bunch…

Word on the street October 17th, 2011 3:48 pm
is Grantham is about to get a two game suspension from the SEC.

oakwood dawg October 17th, 2011 3:51 pm
just when i thought we had a pretty good team, especially the defensive side of the ball Vandy, of all teams, shows me that we are mediorce at best. the gaytors, as usual, will beat us and so will auburn, ga tech and possibly kentucky. lets face it—we have very good talent but not a very good team. I see us winning no more than 7 games this year.

Big Dawg October 17th, 2011 3:51 pm
Observer and others yes Coach Russell was a gentleman with fire, but I can tell you from first hand experience because I played at Georgia for Coach Dooley and Coach Russell he would have done the exact same thing that Coach Grantham did except he probably would have punched Coach Franklin out somewhere after the game. You did not mess with or say anything bad to one of his players. Next Coach Grantham showed these players that he has their backs and will not let them be harasshed by opposing Coaches. Coach Franklin insitigated this and should be reprimanded and punished by the SEC for unethical conduct in saying anything of a derogatory nature to an opposing team player. The correct manner would have been to meet privately with Coach Richt and Grantham to discuss. There were flagrant personal fouls by both teams, plus chop blocking by Vandy. Go Dawgs

Flat Tire on I-95 in Jacksonville October 17th, 2011 3:51 pm
gt4never You ready for the beatdown coming to a theatre near you. The Movie is entitled Mighty Miami and The Triple Option That Couldnt

Southgadawg October 17th, 2011 3:52 pm
I would vote for ctg for president of the usa

lomelt October 17th, 2011 3:53 pm
Coach Grantham admits he got caught up in the moment and should of handled things a bit differently. I don’t think anyone out there has not done something like that in their life. Hind sight is always better, right? I think there are bigger problems here that led to this event which involves more that CTG, but more on CMR. I support CMR and think he is the best coach UGA has seen in many years and would be very difficult to replace, but he needs to address the personal foul problem, there is no reason for punching in a football game unless it is on the street. We have to get that through to all the players. Secondly is when are we going to hire a great offensive co.? Bobo is not even a good coach but we need someone who is great. That is the only way we will ever see the likes of a National Championship or even SEC these days. I went to school with Bobo and i thought he was a mediocre QB and now as a coach he makes bad play call decisions. Example first possession against Vandy on 3rd and 5 from the 40 or so we go for the endzone??? why??? we could have just went for 5 yds and not asked Walsh to kick a 50 yd fg and took more time off the clock. He does sh** like that all the time. CMR and McGarity need to address this far more than CTG and i hope they will at the end of this season.

Move on October 17th, 2011 3:54 pm
no way anyone gets any suspensions after the game. No one threw any punches. they just yelled at each other. Whats wrong with that? geez do we need to buy them some pink sweatbands and tell them that its ok to wear them cause its breast cancer awareness week…. where are the men anymore. we got a bunch of pansys

Tater Tot October 17th, 2011 3:55 pm
“Bob Kelly October 17th, 2011 2:54 pm I have been a season ticket holder for many years and the actions of this coach over the last two years have been unacceptable” Go stick a sock in your mouth Mr. Ungratefull

TampaDawg October 17th, 2011 3:57 pm
oakwood dawg=IMPOSTER DAWG FAN. No one bought it dude. Go trolling with the other techies on the clemson blogs.

Leg Humping Mutts Stink October 17th, 2011 3:57 pm
Your players are dirty, lying cheats, and the video will prove it. Stupid mutts…..

Mike October 17th, 2011 3:58 pm
RIght or wrong in his behavior… Coach G made it clear to his players that he has got their back. They will remember that moment.

Truth October 17th, 2011 3:59 pm
Bob Kelly played on the UGA National Championship team. He deserves your respect. Most of you posters sre a bunch of trailer trash that has never seen the inside of a claasroom at UGA.

ATL Barb October 17th, 2011 4:02 pm
Wish Coach Richt had some of Coach Grantham’s fire! Maybe then the players would play with a better attitude…not look as though they are defeated before game begins. Once I’d like to see them play with ATTITUDE when the game begins. Bobo needs to go & a new play caller come in. Also Richt needs to be more aggressive with 4th down calls.

TampaDawg October 17th, 2011 4:02 pm
“Leg Humping Mutts Stink October 17th, 2011 3:57 pm Your players are dirty, lying cheats, and the video will prove it. Stupid mutts…..” The only thing the video will prove is that you’re still an idiot who turned the tv off as soon as it became apparent that the DAWGS were about to win another game.

rdawg October 17th, 2011 4:02 pm
CTG has brought fire and passion to the defense at Georgia that’s why so many recruits wanna play for him in his system. He is the reason Georgia is doing so well. Its like this if you where to yell at one of my daughters for something that you didn’t approve of I’d clime your ass to thats my job and thats what Grantham did. As a head coach Franklin was in the wrong take care of your team and your players not ours. We obviously have the right defensive coordinator now if we had an offensive coordinator with half as much fire as CTG we would be putting up big numbers and finishing drives. You no why we have the best punter and place kicker in the league, because no other teams kickers are used as much as ours. I say GATA and don’t apologize make these other teams respect us

RMH October 17th, 2011 4:04 pm
Georgia’s number 36 came to the Vanderbilt sideline after the game and began taunting (in his face) Coach Franklin. Coach Franklin told Grantham about it and it escalated from there. (I attended the game and saw it).

Tater Tot October 17th, 2011 4:08 pm
“DawginLex October 17th, 2011 3:26 pm We really ned IGNORE buttons on these blogs. We REALLY need to be able to post under one e-mail/IP address with a blog monitor working full time. Without it, these blogs are bcoming a big joke and not worth anyone’s time” Your not kidding. We sure do. I at times wonder if some of the posters are actually the reporters at AJC bored off their ying yangs with nothing better to do than to stir the pot under aliases being they have the power to ban you off here as well if what is said is not to their liking. Just my two cents.

Water Boy October 17th, 2011 4:08 pm
Bulldog brothers, we have ourselves a defensive coordinator. What is NOT shown is the behavior of Coach Franklin. If Grantham is dynamite, Franklin is the lighted fuse who caused the situation to blow up. Now, take that passion to Jacksonville and let’s GATA!!
Tater Tot October 17th, 2011 4:10 pm
“RMH October 17th, 2011 4:04 pm Georgia’s number 36 came to the Vanderbilt sideline after the game and began taunting (in his face) Coach Franklin. Coach Franklin told Grantham about it and it escalated from there. (I attended the game and saw it).” Why are you reposting this from another blog. I read them all on here. I highly doubt you attended the game except in your drunken state of mind.

TampaDawg October 17th, 2011 4:11 pm
“RMH October 17th, 2011 4:04 pm Georgia’s number 36 came to the Vanderbilt sideline after the game and began taunting (in his face) Coach Franklin. Coach Franklin told Grantham about it and it escalated from there. (I attended the game and saw it).” Williams was telling Franklin about his displeasure with the ankle twisting under the pile, the chop blocking, eye gouging and the attempts to achieve maximum wedgies. I was the camera guy standing next to Franklin and I saw it. Actually, let’s just let the video show what happened. Seeing as how parts of your story already don’t line up with the video.

DAN October 17th, 2011 4:12 pm
Vandy guys were taking cheap shots all night. UGA guys feel prey to the “he who retailiates gets caught” maxim. Vandy players are just an extension of no-class their coach. Have you ever seen a head coach go after an opposing player after the game? My hat is off to Coach Grantham. And, for crying out loud, isn’t it kinda funny to be talking about Vandy as cheap shot aritsts?

gt4ever October 17th, 2011 4:12 pm
Big Dawg, Pure HORSECRAP…. Erk would have never lost his cool with another coach!

Brian Asselstine October 17th, 2011 4:12 pm
If Franklin made any comment, Grantham should have dropped his butt on the 50. McGarity needs to stand by his coach. If TG was wrong, McGarity should handle it behind closed doors. UGA defense, though not perfect, has shown signs of maturity. Over the next year or so, they have poetential to be a top D in the SEC. Still not close to LSU or Bama but much improved. Didn’t everyone rip CMR for no emotion or fire? Well this guy has it and it looks like players are finally getting his scheme and playing hard for him. We have not seen that since Van Gorder left.

Buzzzzzzzz October 17th, 2011 4:13 pm
Georgia Tech has the nation’s highest average SAT score for football players, and also the highest average high school GPA. UGA, on the other hand, had one of the very lowest SAT averages, as well as a very low GPA average. As a result, three-fourths of UGA players are “special admits,” when means that they can not meet the minimum educational and intelligence requirements for admission to UGA, even as low as that is. Yes, it would be great if Tech was more competitive with UGA, but most Tech fans are proud of our heritage of having intelligent, responsible athletes who go on to rewarding careers and contribute to society. Contrast this with the very unintelligent t-h-u-g-s that UGA recruits. The 45 arrests over the last four years is proof enough, not to mention winning the Fulmer Cup. Most of these goons are tossed out after their eligibility is over, and many end up on welfare. UGA is clearly just a football f-a-c-t-o-r-y dedicated to winning at all costs, and most UGA fans are okay with this, with no concern about the embarrassment this brings to the State of Georgia. Mark Richt is the architect of this situation. With his 3.5 million salary, he will continue to recruit the illiterate brain-dead t-h-u-g-s who he thinks will win. Unfortunately for UGA fans, he’s not remotely in the same league with the top coaches in the SEC. So the bottom line is that Georgia Tech – win or lose – does so with honor and integrity, while the lowlife gangbangers in Athens continue to embarrass us all.

DawginLex October 17th, 2011 4:14 pm
This is all part of Richt’s master plan. Good cop-Richt. Bad cop-Grantham. Behind closed doors they are laughing. Playing out exactly like they want it. McGarity is in the room laughing with them

Half Century Dawg Fan October 17th, 2011 4:14 pm
Big Dawg October 17th, 2011 3:51 pm Well said Big Dawg

icedawg October 17th, 2011 4:15 pm
Should not be so surprising in this day and age. Civility has been thrown out the door. Being a “gentleman” is no longer fashionable. It’s all about being in your face and intimidation. Macho is the best. We’ve become a “wrestler” culture.

LCDawg October 17th, 2011 4:16 pm
Franklin started the whole thing. He started screaming at Shawn Williams after the game and Grantham stepped in. Of course if Shawn Williams hadn’t acted like he made a super bowl saving tackle after every hit the whole thing would have never happened.

DIT October 17th, 2011 4:18 pm
Oh, here comes the tek rhetoric. We have better gpa’s than UGA. Must really stink to get clocked by an unranked, irrelevant football program like UVA. It’s a football blog. If you can’t handle the football blog, go to back to your cooking blog.

TampaDawg October 17th, 2011 4:19 pm
Buzzzzzzzz …. snoooooozzzzzzzzzzzzeeeee. Same tired, debunked, baseless and mostly factless crap. Way to hit the vomit button on your keyboard. If we are the “football factory” how come the buzzzzzzzzzzzzzz keeps getting hit with probation?

Woodstock Dawg October 17th, 2011 4:22 pm
Please do not compare Erk Russell to Todd Grantham. Erk was a gentlemen. You would never ever ever see an assistance of Vince Dooley address an opposing coach in that manner. It ain’t about who started it. They were both out of line. Grantham is a jerk. If the last 24 months havn’t told you that you have your head in the sand. Mark Richt has a 750K per year ticking time bomb on his hands. Erk represents the days when wasn’t about the money. I miss him.
rdawg October 17th, 2011 4:24 pm @buzzzzzz Really lets see I don’t remember any of our players burglarizing anything or robbing anybody (Auburn and Tennessee ringing a bell yet) about half the players we recruit tech recruits at the same time and so do about 20 other schools. Its just they wanna play in the SEC not the ACC its ok to be mad I’d be mad to if my team lost DAWG4LIFE October 17th, 2011 4:26 pm Show me a you tube video or tell me of a time in the last 10 years two NCAA coaches had to be separated? UGA is setting itself up for a lawsuit if this crazed man puts his hands on a player now. Magic October 17th, 2011 4:26 pm Hey…Grantham can create a diversion while UF pounds the hapless Bulldogs into the ground. TampaDawg October 17th, 2011 4:31 pm BUG4LIFE “UGA is setting itself up for a lawsuit if this crazed man puts his hands on a player now.” .. Crazed man? Really? So the guy going after a PLAYER is in his right mind, but the guy defending HIS player is a crazed man? How does tech even pass the SAT with people thinking this kind of logic? No news October 17th, 2011 4:32 pm Grantham is a JERK. the Bulldogs October 17th, 2011 4:33 pm will be on suicide watch if UF pulls the upset this year. I am an nfl man myself, But will take the Gators cause them boys got nothing to lose at this point. Magic October 17th, 2011 4:34 pm Obviously TampaDawg is not a UGA grad….maybe not a college graduate. really? October 17th, 2011 4:35 pm I saw a statistic that said UGA has the 2nd most fans who have never ever been to Athens, right behind UT fans that have never been to Knecksville. Tater Tot October 17th, 2011 4:38 pm Buzzzzzzzz October 17th, 2011 4:13 pm Georgia Tech has the nation’s highest average SAT score for football players, and also the highest average high school GPA. ————————————————— I need to shave down here………….Buzzzzzzzz RGP October 17th, 2011 4:38 pm Schultz you must be a Tech fan or Tech grad. 
Every article you write regarding UGA has a negative spin on it. If Franklin started the yelling and finger pointing at a UGA player then Grantham has every right to protect his player.

dawgfan, October 17th, 2011 4:40 pm:
Hey 4th & 57, Thanks for the advice! We’ve all decided to lower our expectations! D***Head!

Penelope B, October 17th, 2011 4:40 pm:
UGA fans are the biggest crybabies in the conference… last year we heard what amounted to nothing but “big bad mean Auburn, dirty ole Auburn.. we are so classy!” because Nick Fairley got a well deserved penalty for a late hit– articles in the newspaper about how the QB was sore and beat up after the game because Auburn was so mean… where is your class now UGA? How many penalties did you get? Who tried to punch someone in the pile?? WWJD afterall, you bunch of hypocrites that hold up your coach’s religious beliefs and mission trips…. what a bunch of whining losers … there is no defending any coach, Vandy or UGA, in this situation– but Grantham looked like a NUTJOB — add it to the oh so classy choke signal from last year…. Oh UGA– you are SO CLASSY! Makes me want to vomit..

GTBob, October 17th, 2011 4:40 pm:
If Franklin made any comment, Grantham should have dropped his butt on the 50. Do you really think this is how adults in the real world operate? If that happened then Grantham would have already been relieved of his duties, and half the UGA team would be suspended for the Florida game. Not to mention that Grantham would have probably got beat up pretty good by several Vandy players. That is what you wish would have happened?

TampaDawg, October 17th, 2011 4:42 pm:
Magic October 17th, 2011 4:34 pm
Obviously TampaDawg is not a UGA grad….maybe not a college graduate.
—————————–
Making a statement without the basis of proof or facts is a pure case of ignorance. Not to mention, there is no relevance here and makes no sense on a blog. Let me guess, it will come down to “well, my GPA is bigger than your GPA!” ..
which, I might add, is certainly the key to success in the real world after all (sarcasm is coming out in an overflowing fashion).

DAWG4LIFE, October 17th, 2011 4:42 pm:
All I am saying Tampa is that some people now will see CTG with $$$$ signs and try to blame anything they can on the guy. And the public has now seen him this way twice. Is that really hard for you to understand son?

TampaDawg, October 17th, 2011 4:43 pm:
Penelope B, “but Grantham looked like a NUTJOB — add it to the oh so classy choke signal from last year…. Oh UGA– you are SO CLASSY! Makes me want to vomit..” then go vomit then. No one asked you to come on here and run your trap or watch a UGA game. Free country also means you don’t HAVE to do these things.

Dr. Morpheus, October 17th, 2011 4:47 pm:
This minor incident will blow over by the next news cycle, but with UGA’s defense ranking #3 in the SEC and #7 in Division I-A, no surprise the un-Dawgs and trolls would like to see Coach Grantham go.

Nub, October 17th, 2011 4:48 pm:
Go to you tube and listen to franklins post game press conference … It will give you some insight as to what he was saying as he is POINTING his finger at a UGA player not a coach… This is being blown up way out of preportion….

Darrell Johnson, October 17th, 2011 4:48 pm:
He was sticking up for his player. He has nothing to appologize for. Go Dogs.

monty, October 17th, 2011 4:49 pm:
Let’s see, CTG gives the Florida kicker the choke sign but gets all touchy, feely when the opposing coach says something to one of our guys after the game about dirty play? HUMMM. Choke signs are so cool, and a coach calling a guy, a dirty player isn’t. LOL! Gotta love that CTG, he’s just so EMO.

alcoadawg, October 17th, 2011 4:51 pm:
Hell yeah! I love it. I hope coach G walks across the field and smacks the sh** out of Muschamp just for being a Gator now. This team has needed some real fire in it’s belly for along time. I say get off the bus in Jville wearing camo a la Miami 1985.
If Franklin doesn’t know how to take an ass whipin’ like a man then treat him like a punk. I wouldn’t want to play for Vandy next year when they come to Athens.

Fleming Island Dawg, October 17th, 2011 4:52 pm:
Yep, Watch the game the first personal foul was a retaliation punch after the Vandy player threw the first punch. The Vandy coach even ran out on the field and grabbed his player and chewed him out thinking the flag was on them. The second was a late hit after a player appeared to make a catch and looked worse in slow motion but was a bang bang play. The 3rd kwame attempt at a punch was due to the fact a player was trying to take his knees out by rolling up on them fropm behind and he lost his temper. This game was not 1 sided at all. Not sure why anyone is blaming the Vandy Coach/ I guess he just jogged out to shake hands right?

CTG is a major, October 17th, 2011 4:52 pm:
distraction who is about to cost us a win over uf two years in a row. Jerk!

gt4ever, October 17th, 2011 4:53 pm:
Minor incident sums it all up….. Here is why UGA continues to embarrass the state of Georgia… People that have NO clue…… Thanks Morpheus… You make our state proud!

monty, October 17th, 2011 4:54 pm:
I love my DAWGS but there were at least 2 other hits by our guys that could have easlily been called, even the announcer on TV kept talking about the “late hits” but he was really meaning the nature of the hits, picking a guy up and body slamming him. If it had been reversed, we would all be screaming bloody murder.

Yep, the GA way, October 17th, 2011 4:56 pm:
monty October 17th, 2011 4:54 pm
AMEN, monty. An HONEST ga fan. I remember seeing Brett Favre “body slammed” like that, and the player was flagged and fined. The ga player was twice his size, and it was totally unnecessary.
monty, October 17th, 2011 4:58 pm:
Not taking up for the guy but the Vandy coach said in his press conference (read between the lines) that Vandy wasn’t putting up with the kind of crap they had been taking over the years from other teams. Don’t know exactly what that means, other than, they were going to give it as good as they got it. Weren’t going to be everyones punching bag maybe.

BigDawg, October 17th, 2011 4:59 pm:
Franklin was on a radio show here in Nashville this afternoon and he basically laughed the whole thing off and kept playing up the whole, “we’re not gonna roll over for anyone anymore schtick.” Look, I get it. He’s trying to pump up his fan base and show it”s not the “same old Vandy.” But that doesn’t give him the right to confront players or allow his players to use illegal chop blocks or play dirty. On the radio show he even admitted they were throwing, “a few illegal blocks” and then said they got flagged for them. Oh really? When? I was at the game and it was chippy on both sides and I don’t condone that, but Grantham is being made out to be the bad guy here when it was Vandy’s coaching staff that was acting bush league all night. Grantham shouldn’t have responded the way he did, but Franklin and the Vandy staff need to stop acting like little brother getting picked on and maybe win a few games before they earn respect.

1953dawg, October 17th, 2011 5:00 pm:
McGarity is a joke. He is quick to jump on an asst. coach, yet he allows the worst head coach in the SEC get a free pass with yet another pitiful performance by his team. UGA is the worst coached team in the SEC, and Richt is the problem for this. Once again this team is totally undisciplined with 11 penalties, and 3 or 4 personal foul penalties. The worst special team in the SEC, which allows another team to fake a freaking punt, miss many field goals, and allows Vandy to return a kickoff for a T.D. Not to mention the worst O.C.
in the country, who has to settle for 6 field goals against Vandy for god sake. McGarity should have been in Richt’s office this morning asking this idiot head coach what he intends to do to improve the performance of his coaches. But we all know that didn’t happen. McGarity & Richt = less than mediocre leadership. UGA will never win with these 2 clowns in charge.

68Bulldogs, October 17th, 2011 5:03 pm:
Two game suspension? Unreal!!! This is ludicrous.

monty, October 17th, 2011 5:04 pm:
All the SEC commissioner has to do is interview that (chain official) standing by the coaches in the picture, I bet that guy has an unbiased answer, you reckon?

Flat Tire on I-95 in Jacksonville, October 17th, 2011 5:07 pm:
really? I saw a statistic that said Tech had more foreign students going to school at Tech than American born students

really?, October 17th, 2011 5:12 pm:
@Flat Tire on I-95 in Jacksonville – LOL I see every day more foreign born in my ga town than US born.

Mark PRicht, October 17th, 2011 5:25 pm:
Go Georgia Bulldogs….and oh yeah, take the Falcons with you

sicum', October 17th, 2011 5:26 pm:
No 68bulldogs that would be about right for us. Auburn walks on everything and our DC gets a two week vacation over nothing. Sounds just about right these days,

Hot Dawg, October 17th, 2011 5:27 pm:
With Bantley out, and Lattimore out, if you can’t win the East now then when??

dawgfan6, October 17th, 2011 5:28 pm:
Is this football or badmitten?? They are investigating two coaches arguing? Get over it already!

Mark PRicht, October 17th, 2011 5:29 pm:
do you have to block everything even if its the truth?

Kyle Shelnutt, October 17th, 2011 5:32 pm:
@4th and 57 You are obviously an ignorant college football saying the Dawgs have been irrelevant in the sec for “quite a while” and that Ga fans need to lower their expectations.
How about a Ga fan expects how ever much they want to with the approval of some idiot who just “hops on the bandwagon” with every team that is “relevant” every year (and yes I’m talking about you.)

Chaz Henry, October 17th, 2011 5:33 pm:
I hope Grantham is not suspended for the Florida game, I want my Gators to have a chance.

The Ole Ball Coach, October 17th, 2011 5:35 pm:
To all 15 of Vandys fans, I am sorry our coach yelled back at your coach…….. Franklin seems so sorry he yelled…. Good Grief …..

Mark PRicht, October 17th, 2011 5:35 pm:
Ga still wont win the East no matter who is missing from other teams. The dream team is just not good enough

hotdawg, October 17th, 2011 5:35 pm:
From what I know, which isn’t everything apparently, Franklin initiated the incident and Grantham responded with controlled anger to Franklin’s aggression. In retrospect, Grantham might just as well have defended his player with less animation. HOWEVER, Grantham’s competitive emotion is just the sort of think UGA needs at this time. I am a Richt supporter–he hired Grantham– but UGA might get to where they want to go faster if they’d make Grantham Head Coach. Historically, the UGA image has been one of class even when confronted by classless opponents, and I think the image is still intact (considering the wild anger shown by the hot headed Franklin).

ga gator, October 17th, 2011 5:36 pm:
Much ado about nothing.

BWG, October 17th, 2011 5:38 pm:
Why is this still news?????

Mark PRicht, October 17th, 2011 5:39 pm:
Is is ok to act that way, and if so is that how you want your school represented, there are two seperate issues here. As soon as he did it then half the UGA players go to the end zone and act the same way… Can the dawg fans not see how their program has changed?
I am a outsider looking in, but at least when Dooley was there I did have respect for him

96Gator, October 17th, 2011 5:42 pm:
If your DC is suspended over that it would be the worst thing that could happen for UF. All of your D players would be on fire. So I hope he does not for that reason. If we get Brantley and Demps back we may give you a game. If not, forget it cause we can’t score right now.

Kyle Shelnutt, October 17th, 2011 5:43 pm:
@Cynthia How are you going to go along and talk about cussing about our defensive coordinator. It is obvious that you are just some person that knows absolutely NOTHING about this incident and you are trying to justify James Franklin. If anyone should be cussed it is Franklin. How dare he come to a player and Yell at him. What a childish, immature idiot. He better just crawl his a** back to Maryland or wherever he came from because we don’t act like this down in the SEC.

LHarding Dawg, October 17th, 2011 5:43 pm:
With a name like “Mark PRicht”, well that says it all about how stupid you are. I would guess maybe, a waiter in downtown Atlanta!

Mark PRicht, October 17th, 2011 5:44 pm:
hotdog……class????? are you kidding? Like dancing on the star a few years back and rushing the field on UF….the AD along with a ton of players locked up? Where do you see class in that….only if you are a man with no morals, the UGA fans I know that do have class are embarrassed by what is happening in Athens

LHarding Dawg, October 17th, 2011 5:45 pm:
96 Gator – I agree with you on this one. If Coach Granthan were to be suspended for the Florida game, it would pump the hell out of the defense. Win one for the coach.

Kyle Shelnutt, October 17th, 2011 5:45 pm:
And everyone talking about CMR get over it. He is the best football coach in the league, and is a better human than you will ever be. So how about you just shut up about our Head Coach. He is a great one, and anyone talking about him is just as dumb as James franklin himself.
Mark PRicht, October 17th, 2011 5:48 pm:
as for my name I call em like I see em

Mark PRicht, October 17th, 2011 5:52 pm:
mutts cant win against Florida period not gonna happen

Jalex49, October 17th, 2011 5:52 pm:
To the Georgia defensive players. I hope you are proud of a coach that has your back! If you do, then play your heart out for this coach and don’t let anyone run over you. He has shown his love and affection to you guys and you need to play “lights out” for him.. Make a statement! Coach Grantham thanks for standing up for your player. The other coach had no right to engage the Georgia player and as a matter of protocol, should have spoken with Coach Richt or Coach Grantham directly about his concerns. The other coach started this whole thing by his actions. Coach Grantham has nothing to apologize for as he was trying to protect his players. and I for one, applaud him! GO Dawgs!!!!!!

Mark PRicht, October 17th, 2011 5:55 pm:
yep LIL you are right, come on down to the restaurant I have something special for your soup

Mark PRicht, October 17th, 2011 5:59 pm:
Lale…what about the choke signal from him last year, was that respect? It cant be both ways, if you say stuff to other coaches players then expect it back, act like a professional

interesting, October 17th, 2011 6:01 pm:
to all you people that compare TG to ERK–please stop– he aint no ERK. ERK earned and desrves more respect than that.

Pretty Funny, October 17th, 2011 6:01 pm:
Grantham looks like a stroke in the making.

Big D, October 17th, 2011 6:06 pm:
Just like the rest of UGA, class act.

Dawg48, October 17th, 2011 6:09 pm:
I’m ready to whip some booger eat n gators! Bring the fire and pain to Jacksonville!

Dawg48, October 17th, 2011 6:12 pm:
James Franklin is another lane kiffin in the making! If he really wanted to find mark richt he would have looked for him! I just wish their was a camera angle on franklins face! To see how his reaction was.
tenn.DAWG, October 17th, 2011 6:12 pm:
It’s all on tape, Vandy coach called out UGA player, CTG took up for his player, then Franklin balled faced lied about it in the press conference.

my old lady is from tenn go dawgs, October 17th, 2011 6:13 pm:
Who cares we won its time for Florida. It is what it is.

TG = BR, October 17th, 2011 6:15 pm:
Todd Grantham = Buddy Ryan
Those who play for him love him, but everyone else thinks otherwise…dress ‘em up, but you can’t take him to anyplace nice…LOL

David, October 17th, 2011 6:15 pm:
When we play Vanderbilt up there in Nashville, things are not normal. One Georgia plays Tennessee a week before that stadium is huge. Everyone knows that. It feels like college football at its best in Knoxville. Georgia beats Tennessee this year. Then, I said this we better watch out for Vanderbilt. Now, who get fire up for Vanderbilt in football. Again, I am walking the campus in Vanderbilt, it just does not feel like a college football feel. I see a car show next to the baseball stadium, and I see no one really care if Vanderbilt wins or loses there. Their stadium is very small almost high school stadium small it has not change like the rest of the SEC stadium has. I sorry Vanderbilt, but it is Vanderbilt. During the game, Georgia miss FG’s, had a kick block at the end, cost some turnover on defensive; however, it just seem we could not lock up the game and it had to take a saving tackle off the block kick with 17 seconds left and our defensive had to play extra hard at the end to win the game. About what I saw after the game, was the players were greeting each other and usually both teams together would have a pray in the middle of field. I saw the Vandy head coach charging after some Georgia players who I believe were celebating the win after the game. Then I saw coaches from both sides argueing and trying to get their players off the field. Like, I said before it is Vanderbilt, I knew this was not going to be a normal game.
For Georgia, let get ready for a very angry Florida Gators in a couple weeks!.

tripwire, October 17th, 2011 6:17 pm:
Wait until the UF game, the Gators are already pizzed off from the Auburn game. T G will probably have a coronary occlusion drop his drawers and moon every one. He needs to be punished regardless of what Franklin did or did not do. He is supposed to be the adult and represents UGA.

Cheryl, October 17th, 2011 6:19 pm:
Grantham is certifiably psychotic. Fire his ass now before he kills an innocent person.

Don Smith, October 17th, 2011 6:21 pm:
Can we just get a new special teams coach? How many blunders can you make in a season? Have we set a record? Richt…………..the fake punt happened right in front of you……………….do you know how to take a time out?

T., October 17th, 2011 6:22 pm:
Cheryl You got cookies in the oven.

Buclkead Bulldog, October 17th, 2011 6:25 pm:
Mark PRicht – Why do you care so much about this topic? Did you go to or are a fan of Vandy or UF?

I coach 2, October 17th, 2011 6:27 pm:
I’m a Gator fan, and a coach. An opposing coach once got into with one of my players…yells at one of my kids. I did the the same as your DC…gotta stand up for your players, no doubt.

rod, October 17th, 2011 6:30 pm:
maybe the conduct between both coaches / schools / players were not at their best by games end – HOWEVER – theres been a fire missing for years at Georgia, coach richt is great but shows so little emotions – maybe if he’d get fired up on the field like grantham did our players might show us more upbeat playing and confidence. i say get over it none of this is worth taking to the next level – get ready for the swamp lizards – maybe – JUST MAYBE —– GRANTHAM MIGHT CAN LIGHT A FIRE UNDER RICHTS BUTT FOR THIS GAME.

T., October 17th, 2011 6:31 pm:
He needs to be punished regardless of what Franklin did or did not do
So Franklin gets a pass? Umm…He’s not responsible for his actions? Sounds like a Nerd hating on Georgia.
gt4ever, October 17th, 2011 6:33 pm:
I coach 2, Well,,,, YOU shouldn’t Coach…. You have NO business coaching! Period!

angry mangy mutt, October 17th, 2011 6:34 pm:
@I coach 2 Any coach who doesn’t should not be in that position. Too bad too many of these people do not understand that.

I coach 2, October 17th, 2011 6:34 pm:
And I add, I’d take the punishment…gotta have your players back.

T., October 17th, 2011 6:35 pm:
I’m a Gator fan, and a coach. An opposing coach once got into with one of my players…yells at one of my kids. I did the the same as your DC…gotta stand up for your players, no doubt.
Hey Gator, good to hear the voice of reason coming from a rival school, these Nerds from Tech who claim to be so brilliant are eat up with stupidity on this issue.

mark, October 17th, 2011 6:35 pm:
we had a baseball coach here in Atlanta that would run out of that dugout like a Bulldog to defend his players and say a few choice words in the process and EVERYBODY wanted to play for him! I bet every player on that team will look at CTG alittle different and be glad he’s on their side, I know I would. That first day back at practice would be fun and I bet everybody has a little more bounce in their step

I coach 2, October 17th, 2011 6:37 pm:
Sorry gt4ever, obviously you have never coached, and it looks like your DC did not on Saturday as well.

the seat just got a little warmer, October 17th, 2011 6:39 pm:
uh huh…..saturday was just another piece of evidence to show what needs to happen…. anybody see what bama did to vandy? anybody see what bama did to ole miss? oh, i’m sorry. it’s not fair to compare a program that competes for national championships with UGA. nor is it fair to compare St. mark to the great Satan. funny how all the talk about “we’re better than ‘bama” got silent all of a sudden. seems to be a little dry powder in everyone’s koolaide……. the only team UGA bows up to is Vandy. That’s a sad commentary.

T.,
October 17th, 2011 6:39 pm:
CTG took up for his player, then Franklin balled faced lied about it in the press conference.
If all this pans out as truth Franklin should be fired, if for no other reason than lying.

Planet Claire, October 17th, 2011 6:42 pm:
Hello…these Tech goobers never played sports so their opionion is void!

Planet Claire, October 17th, 2011 6:42 pm:
(sp.) opinion

mark, October 17th, 2011 6:42 pm:
Cheryl, if thats true then we need to give him a raise, you ever heard of jack tatum? or utube dick butkas? learn the game darling

angry mangy mutt, October 17th, 2011 6:44 pm:
It is clear here who has played the game and who has not. I hate seeing the card played almost as most as I hate the questioning of fan loyalty and the eight year olds posting “first” As somebody who grew up a son of a coach, played behind enemy lines at a Florida football power plant, and developed knee problems I fully endorse our DC. Shooting for the knees is cheap and those who have played the game know sooner or later retaliation is coming. In other words, in baseball if a pitcher slings a pitch with bad intentions, the favor will be returned. In boxing, if a fighter plays dirty and lets his gloves go low, an experience fighter eventually returns the proves he can do the same. Grantham just landed a few recruits. Likely those recruits with a blue collar upbringing whose dads have an influence on where his kid is going. Those who do not agree, keep your kids on the soccer field. Football will be too tough for them and you.

Mulberry Dawg, October 17th, 2011 6:45 pm:
Franklin displayed a lot of class… All third. For a head coach to attack an opposing team’s player is inexcusable. I applaud TG for standing up for his player. Using the F Bomb was inappropriate and unfortunate and I think that McGarity has to deal with it and the SEC will fine TG for a doing the wrong thing for the right reason. Franklin did the wrong thing for the wrong reason. BIG difference.
angry mangy mutt, October 17th, 2011 6:46 pm:
one day I will read what I type before I submit

Tally Ho, October 17th, 2011 6:46 pm:
Bust em in the mouth CTG.

Tally Ho, October 17th, 2011 6:47 pm:
Do the same for Muschamp, bust ‘em in the mouth

LoganvilleDAWG, October 17th, 2011 6:48 pm:
All of you idiots that are sailing CTG down the river are the same ones that expect your child to have the 1st place trophy even though they came in last place. Alot of the people in this country are a bunch of lame, complaining whiners, it is really sad. All of you complaining probably suuport the protesters around the country. The said thing is that it happened in front of fans watching the game. Now all you want to do is complain about CTG being a bad coach/person. Our 1st DC under Coach Richt would have done the same thing, Coach Russell would have done the same thing. Why can’t people quit complaing about everyone else & start looking at themselves… oh I forgot all of you are perfect!!!!

gt4ever, October 17th, 2011 6:48 pm:
You sir are WRONG! I grew up with a dad who coached and have coached all my kids… He is turning over in his grave at some of the stuff going on…. I really hope you are kidding… Some of the posts on this and other blogs are PATHETIC, as yours is the epitome of what is wrong with some of the youth of today…. Good Grief it’s NO surprise that we have youth laughing in the face of authority with coaches like you leading….

gt4ever, October 17th, 2011 6:49 pm:
and then there was angry mangy mutt…. Your IQ must match your waist size!

Tally Ho, October 17th, 2011 6:51 pm:
Yeah, and Muschamp gets 10x worse on the Refs Saturday. I wish Grantham would have put that little A right on his back. That Vandy team went after the legs of Geathers throught the night. Cut block him on numerous occassions THAT is what the SEC should be reviewing.

Planet Claire, October 17th, 2011 6:51 pm:
gt4ever = so how do you defend CPJ’s BS?
angry mangy mutt, October 17th, 2011 6:52 pm:
@gt4ever Is that directed at me or I coach 2? Regardless a coach always has his player’s back, Always.

Tally Ho, October 17th, 2011 6:53 pm:
Loganville dawg has it right, bunch of friggin pansies, raised in a pansy society where wheeeee…Everyone is special, and winners too!

@sportsguru95, October 17th, 2011 6:53 pm:
In the end, the refs will get the blame! The officiating was horrible! They let some things go that shouldn’t have been allowed! They missed several chop blocking calls! That should not happen! Grantham was out of line, however I do understand! Did Franklin not shake CMR hand after the game? What happen 2 adressing it then? No1 is talking about that! Franklin should not go after a player aggressively either! Had he approached Grantham instead of Shawn Williams, things probably would have been a little better!

I coach 2, October 17th, 2011 6:54 pm:
gt4ever…. not gonna make a comment, you sir are entitled to your opinion….. have a good one.

Nub, October 17th, 2011 6:55 pm:
Long time season ticket holder…. I support CG! Vandy coach was wrong!…. Am feeeling better about FL…. Love the passion!!!!!

angry mangy mutt, October 17th, 2011 6:55 pm:
Enter your comments here

Tally Ho, October 17th, 2011 6:58 pm:
This Vandy bunch played Dirty all night. Just like CPJ line, they are good at destroying knees. THEY should be made to suspend Franklin for 3 games not for his ranting after the game, but him directing his players to cut block and go after Geathers knees. They did it all night long, and this stupid crew ignored it all night long.

I coach 2, October 17th, 2011 7:02 pm:
First time I’ve agreed with any Dawgs!

Toccoa Dawg, October 17th, 2011 7:05 pm:
What is becoming perfectly clear us Coach Grantham should be our head coach.
Toccoa Dawg, October 17th, 2011 7:05 pm:
“is” not “us” sorry for the typo

tell me again, October 17th, 2011 7:06 pm:
All of this is quite funny – every week you see Saban on the sideline using more cuss words than a drunk sailor aimed at everybody including his own players and no one is even slightly concerned. And now the SEC – great investigator of all things wrong with the conference – is investigating a midfield spat after a game. Talk about misplaced energy and focus. Stupid much?

angry mangy mutt, October 17th, 2011 7:06 pm:
@gt4ever How can you be so critical of another’s actions while you yourself lack the dignity to conduct yourself respectfully here. I know you are safe sitting there in front of your computer playing with your Star Wars action figures and insulting somebody you do not know. I assure you my IQ does not get questioned – not even by a couple of GT grads I hang around with. Heck, I even dated a Stanford grad who never questioned my intelligence. Perhaps they have all been too nice to say anything.

Toccoa Dawg, October 17th, 2011 7:07 pm:
what also is becoming clear is the AJC writers need to eat some cheese to go with their “whine”

Toccoa Dawg, October 17th, 2011 7:09 pm:
angry mutt, amen brotha preach on

Tom G, October 17th, 2011 7:14 pm:
Get rid of Grantham, some of you people are out of your mind. You do rememeber that he was a coach with the Dallas Cowboys and he left to come to UGA and be with Coach Richt. He must have really wanted to come to us. Of course the 750,000 salary did not hurt too much either. Cut him some slack, he stood up for his guys, right or wrong, which is what leaders do. This is big boy football not cricket!!!!

Sam, October 17th, 2011 7:21 pm:
SEC can’t do squat. They let Fairley body slam UGA players last year, so have to let Jenkins & Williams this year. http://www.youtube.com/watch?v=6xljcMJxJZg&feature=related

Todd Grantham, October 17th, 2011 7:24 pm:
I shoulda whooped him!
Intrepid, October 17th, 2011 7:27 pm:
I am glad to see some fire in that defense! Reminds me of that junkyard alittle. Vandy and the rest of the country needs to understand one thing- “This Dog don’t beg, this Dog don’t sit, its just a bad Dog!”

NOTSO FAST, October 17th, 2011 7:31 pm:
YOU HAVE TO JUST LOVE THE BULLDOG’S FANS. GRANTHAM IS A HERO. THE MISSING LINK FOR THE DREAM TEAM. IF HE COACH FOR ANY OTHER SEC SCHOOL, HE WOULD BE SOMEONE THAT SHOULD BE RELEASED. THERE IS NO PLACE IN COLLEGE SPORTS FOR THAT BEHAVIOR BY EITHER COACH.

gt4ever, October 17th, 2011 7:33 pm:
Yes that would seem possible, maybe you have a big waist….. In any case I feel sorry for both you and I coach 2…. You have a nice day….

Idiot Dawg, October 17th, 2011 7:33 pm:
Coach G should have his own Police to push that other corch around some. That would have been better to see the Ga Police fighting the Tn Police.

JoeFan, October 17th, 2011 7:44 pm:
If this is representative of the types of individuals that are playing and coaching at the college level and in the NFL, then lets just forget about developing young men and turn things over to the street gangs and market football as a substitute for professional wrestling. Time for UGA to focuis on education and turn players and coaches out that are an embarrassment to the insitution and its graduates.

Swampdawg, October 17th, 2011 7:57 pm:
This Dawg Team would be doing a lot better if Coach Richt had some “vinegar” like Coach Grantham. I am definitely very disappointed with Coach Richt’s laid back “reactive” style. Coach Richt said in his press conference after the game. “I probably should have called a time out on the fake punt when he saw how they lined up?. Well… that kind of “Reactive Coaching” without enough “Vinegar and Motivation” has been going on too long and has cost us lots of “L’s”. Coach Richt needs to get some vinegar in his blood like Coach Grantham and be more proactive and more “animated”.
He waits till it’s too late to get out there and motivate these boys with a finger in their face. They need a firm hand right. Too many penalties and poor execution. 200 yds on the ground for Vanderbilt. OMG! Are these boys saving their bodies for the NFL? Makes one wonder. Need some “Junkyard Dawgs!”

New Requirement, October 17th, 2011 8:05 pm:
I guess now the guys in the replay booth will have to man the cameras to record after game exchanges.

Dawg, October 17th, 2011 8:15 pm:
Just reported on espn. The NFL just issued a warning with no fine. Guys just don’t do it again.

Dave, October 17th, 2011 8:17 pm:
Folks – Again. UGA: the term is “loss of institutional control”………..really. Are you all really proud of your program???

Dave, October 17th, 2011 8:19 pm:
Fairley body slammed a UGA player, got a 15 yd penalty and kept a UGA drive alive in which you all scored, what else could you have wanted???

Idiot Dawg, October 17th, 2011 8:22 pm:
Hello Dave, I wish I had eyes like yours. I could be able to see really good then. I did not see the institution on the field. Maybe they were hiding and that’s why they were not able to control much. Where did you see them? Thanks Dave!

bamadawg, October 17th, 2011 8:22 pm:
Why was Rambo laying on the ground at the end of the game? Is that what this is about.

BigDawg, October 17th, 2011 8:23 pm:
I just love all this high and mighty crap from the preachers on this site. Please. Some of you act like there was an all out brawl with people hitting each other with helmets and knocking out teeth. A bunch of guys yelled some bad words at each other and that’s all that happened. Move on. Get over it. Some of these Tech and Fla fans bashing UGA and the coaching staff act like 1) their programs have never done anything wrong and 2) act like this was a major incident with blood and guts left on the field.
Tech folks, maybe you can hire Grantham to patrol your campus so maybe your students can get to class without being shot, since Grantham is so fierce and evil. You know, when I was a kid I had an older brother. He picked on me all the time and finally I figured out how to get him back and trick my parents into punishing him and not me. I might have won a victory that day but at the end of the day I was still the little brother. That’s what is happening here. Franklin picked a fight and went running to Daddy (Slive) to help him out. Big brother might get a time out and little brother might get all smug and think he won but again, he’s still little brother. Prozac Dawg October 17th, 2011 8:25 pm That is a very good question bamadawg! I haven’t read, seen or heard a word about his injury. Prozac Dawg October 17th, 2011 8:28 pm @BigDawg Go to the ESPN site. Mr Franklin (he ain’t no coach) syas he’s never had a rule book to help him out in this kind of a situation. Just turn to the page where it says “Grantham about to kick a$$”… funny!!!! Swampdawg October 17th, 2011 8:33 pm The Ref should have ejected Fairley for 3 games for those dirty hits. He could have broken Murray back or neck by spearing him in the back. I still don’t know why a couple of the big GA linemen didn’t take care of his a* *. They should have put him on his a* * every play with a fist inside the cage. Those hits he put on Murray were the dirtiest I have ever seen and I have been watching College Football 55 years. ddawg October 17th, 2011 8:34 pm BigDawg October 17th, 2011 8:36 pm Here in Nashville the spin is Franklin is a big hero who is finally standing up to the big boys. So when he does it it’s OK but when Grantham does it he is a loose cannon? Whatever. Franklin is trying to reverse years of Vandy being the doormat and that’s fine, I have no problem with that. But it’s more than a little contrived to go after a UGA player and essentially stage a scrap to get your team to buy in. 
Just wait, Vandy will pull the same crap in Knoxville. Franklin sees a hobbled UT team and Vandy will go in and play dirty and talk smack the whole game and then act all surprised when UT smacks them back. Bush league. doctor October 17th, 2011 8:37 pm Bout time somebody started moving on that sideline—-it looks like a grave yard half the time————————-we need somebody with some fire!!!!—————————–Go Dawgs!!!!! Oh yea and screw Vandy and this other team coming up too!!!!———–its time to kick ass and take names later——- funny... October 17th, 2011 8:37 pm people are still on the “Fire Richt” thing, LOL ddawg October 17th, 2011 8:38 pm I was at the game. Vandy was cut blocking and holding all night but Officials were blind. They let the game get out of hand. The Vandy coach is so desperate to win he has turned a class program into a dirty one. doctor October 17th, 2011 8:38 pm kill em all and let God sort em out!!!! Coffee Bluff DAWG October 17th, 2011 8:38 pm Here is a quote from Vandy coach, “I went to find Coach Richt and didn’t find him, so I found one of his assistant coaches and…” Really? Does anyone believe that. Maybe if Vandy coach hadn’t been so preoccupied with finding Shawn Williams to scream at him he could have found Richt. Hey Vandy coach, Next time try walking to midfield – that’s usually where the other HC is waiting – not out of the way to yell at a 20 yr old kid. funny... October 17th, 2011 8:38 pm Rambo cramped on the last play. doctor October 17th, 2011 8:38 pm Go Dawgs!!!!!!!!!! Dawgie Poo October 17th, 2011 8:43 pm and you wonder why Dawgie players act like they do….great example coach! BigDawg October 17th, 2011 8:44 pm Gettem Grantham !!!!!!!!!!!!!!!!!!!!!!!! Love the Fire!!!! Mobile Dawg October 17th, 2011 8:47 pm Doesn’t sound like Franklin is going to take any kind of responsibility. Look at him, doesn’t surprise me. I wish CTG had controlled himself a little more. Have you ever whispered FU in someone’s ear. It carries a lot of weight. 
Last, I’m afraid CTG will be thrown under the bus, there are a lot of unseen, unsaid, politically correct issues on the table. Ga Wins October 17th, 2011 8:48 pm Political Correctness is ruining our country Prozac Dawg October 17th, 2011 8:48 pm I think the word is out on peckerhead franklin. I think Army puts the hurt on them say 35-7. Let’s see. Swampdawg October 17th, 2011 8:49 pm Was that Franklin guy ever the Mayor of New Orleans? Sure does look and talk and act like him. Prozac Dawg October 17th, 2011 8:50 pm @Mobile Dawg I hope you are wrong, but I suspect that you might not be and that’s just plain BS! sports October 17th, 2011 8:52 pm Put Granthem and Franklin in a cage and let setle their differences. Oh, and by the the way chippie “techie”, anything to slight Georgia…right! Swampdawg October 17th, 2011 8:58 pm Google up Franklin’s resume. OMG! 11 different schools. Wow! He must be good! LOL Laughing at Georgia and retarded fans October 17th, 2011 8:59 pm Georgia shows once again that the program is out of control. …From classless players to classless coaches…And, being pushed to the edge by Vandy. LMFAO carlos October 17th, 2011 9:00 pm What’s next for CTG, another Woody Hayes. By the way he will never be in the same class as Erk CTG October 17th, 2011 9:02 pm I’M A MAN…I’M 40 Swampdawg October 17th, 2011 9:04 pm Carlos…. You got any papers buddy? LOL Mobile Dawg October 17th, 2011 9:04 pm When Jesse Jackson shows up it will be signed, sealed, and delivered. Old Dawg Fan October 17th, 2011 9:07 pm Cudos to Grantham! He watches after his players. He wasnt going to take any crap from anyone who was berrating his players. This all started with the “head football coach” from Vandy who admits he doesn’t apologize “too much”. He can kiss our a$$. Go Grantham! FLA DAWG October 17th, 2011 9:11 pm I guess if Grantham were Saban or one of his sideline coaches it wouldn’t make the news. 
But if it has anything to do with The Dawgs then it’s front sports page / blog site. I submit to you my opinion that because there appears to be some degree of anti – religion among news agencies that Richt (and thereby UGA) is always under the microscope. I don’t deny nor condone the illegal and / or immoral actions of players or others at any school – especially my beloved UGA. One must consider however why UGA – more than most other schools, portrayed in the negative…………………………… ………………….that’s just my opinion of course and I could be wrong. Swampdawg October 17th, 2011 9:14 pm Chuck and Chernoff need to watch what they put out there too. They have been complaining about Richt being too soft too, but now they are jumping all over Grantham’s case before they know all the facts about what and how Franklin was trash talking. This Georgia team needs some damn “Vinegar” and Richt needs to take a few pointers from Grantham on how to motivate. Doc Dawg October 17th, 2011 9:18 pm Tried not to comment, I know it is a coaches place to be an example of good sportsmanship, but could not keep quite any longer- I love a coach that will SHOW EMOTION , especilly for HIS PLAYERS ! Enough said. Terry October 17th, 2011 9:25 pm Laughing at Georgia and retarded fans, Well aren’t you the one to be calling someone classless, what a classless name you gave yourself. ole riche October 17th, 2011 9:30 pm An opposing head coach has no business talking too on of our players in a negative way. Franklin is in the wrong here. Joey October 17th, 2011 9:33 pm No need to apologize Coach. We need some fire in this program. I for one applaud you. Coffee Bluff DAWG October 17th, 2011 9:34 pm ole riche, That was exactly my point earlier. Does Vandy coach not even know how to find the opposing HC after the game. Why is he going after a 20 yr old kid? 
Pitbull October 17th, 2011 9:35 pm I sure am glad that UGA and Vandy gave you girls at the AJC something to write about so you can sell more newspapers and get more hits on your website. Money makes the world go around, especially if you are Cox Enterprises. I just wish you sports writer girls wouldn’t pimp out college football to make a buck. But you have never been one to let a nonstory lie if you can make some cash off of it. 47 DAWG October 17th, 2011 9:40 pm hey i thought grantham did his job….we have a real defence now he reminds me of…ERK it was kinda of like watching him again. i am proud of you for taking up for my DAWGS. GO DAWGS!!!! GA-Aubie October 17th, 2011 9:43 pm Total lack of discipline and class from top to bottom. More signs of a program on the decline… Ryan October 17th, 2011 9:48 pm You know your program has hit the crapper when you are in a p1ssing contest with Vandy. cadawg October 17th, 2011 10:03 pm At last! A Dawg coach wiht gonads! Loyal Dawg October 17th, 2011 10:06 pm Here is the bottom line folks. Both coaches were in the wrong. Franklin was wrong for yelling at Wiliams and starting the whole thing. He should have kept his composure and taken his complaints to Coach Richt and/or the SEC Commission after the game. Grantham was wrong for confronting Franklin. He still has to remember who he is and hold himself to a higher standard. The SEC should hand out punishment to both parties. The whole incident could have gotten much more out of hand and resulted in a brawal with a lot of players getting suspended. That’s why it’s not acceptable for either coach to act in such a manner. For the idiots that have criticized CMR for his “lack of emotion and fire”, this is why it is good that we have a HC that doesn’t lose his mind. I am not going to be surprised at all if Franklin has more problems down the road. You can bet Muschamp may run into problems too if he does not keep a check on his emotions. 
Someone has to keep their composure or bad things can happen. Joe October 17th, 2011 10:06 pm In looking at all these comments from UGA fans supporting Grantham, its no wonder Georgia is considered the least classy team in the SEC. Troll Power Ratings October 17th, 2011 10:14 pm @Joe…and your school is? Hmmmm Georgia’s players, you know the kids that actually play the game, are going to be behind Grantham like you can’t imagine. The H with the rest of you. Love it!!! If some opposing coach yelled at my kid and his coach didn’t not defend him, I’d be in my son’s coaches face so fast it would make his head spin. Screw all this PC, nice BS. bulldawgbill October 17th, 2011 10:15 pm I HAVE GRANTHAM’S BACK ON THIS ONE!!!!!!!!!!!!!! NOBODY GETS TO YELL AT A UGA PLAYER BUT A UGA COACH OR THAT PLAYER’S PARENTS. HEY RICHT AND MCGARRITY, DON’T BE SUCH CANDY ASSES. Dawg48 October 17th, 2011 10:19 pm Who here is ready to take the fire to Jacksonville and slap some booger eat n gators? Blind Homers All October 17th, 2011 10:24 pm The Nashville site someone linked has the video from an earlier point and in slow motion. You see the UGA players spread all over a 30 yard area with helmets in hand and #36 skipping away trying to get high fives from his buddies as if he has done something clever. Franklin and the cop are then tracking him and trying to point him out and you know the rest. This gives credence to what Tampa Dawg said above but three wrongs don’t make a right. You’ve just won the game by the skin of your teeth. A kid running his mouth in any way to the defeated team’s coach should be nothing more than “good game”, we all know this is a farce even in Pop Warner but it is the way it has always been done. That being said Franklin should have been MAN enough to take this up at a later time via email to Richt or something along those lines. He didn’t and tried to air his grievance with Grantham whose fuse appears to be shorter than the hair on a gnat’s hind quarters. 
Grantham comes off looking the worst since he is obviously on 10 with veins popping while facing the camera. He looks like Meat Loaf on an overdose of both amphetamines & rhoids trying to belt out a song full of F bombs. Not a pretty picture no matter how you slice it. Overall a kid doing a kid thing instigated something that drew a coach from both sides to demean themselves. Number 36 needs some bleacher time for making his team and coach look bad. Franklin should get a hand slap from the SEC as should Grantham. Incident closed with the exception that maybe Vandy games will have more than a little fire in the future, hopefully only on the field during the game. Loyal Dawg October 17th, 2011 10:29 pm Franklin should get more punishment because he is the Head Coach and should know better than air his grievences in such a manner. Bobo is Not The Problem October 17th, 2011 10:31 pm Grantham is an embarrassment and Richt is a bigger embarrassment for hiring him. For all you testosterone-laced morons who think you have to scream and make a fool of yourself to be manly, Richt or Grantham could have challenged Franklin without looking like such a fool. They are an embarrassment. DawgSTILEfromLEX October 17th, 2011 10:47 pm Hey DawginLex dude, down boy!! Man, get a few wins under your belt and you start acting like the average Georgia fan. Get a grip, Bama, Florida, even LSU – you ain’t even in the same legue with em. Want a real winner? Check out Manny Diaz and the mens tennis team, you want winners now they are winners. bitter--often banned October 17th, 2011 10:48 pm saint richt and girly mcgritty–can kiss my as—s ————————————–coach ‘G”–rules—————–go dawgs JRW7 October 17th, 2011 11:05 pm CTG, I support you!!!!! TOO bad CMR doesn’t have any? 
redandblackdoig October 17th, 2011 11:14 pm If we had Erk and his bloody, head butting today there would be weeping and knashing of teeth by the politically incorrect saying it was too gory and not setting a good example for the student body. Good to see someone like Grantham that has some passion for the game and his players. Me thinks some more of this is needed by the head coach and his other disciples. Thank goodness Grantham has some cocoanuts and to hades with what everyone else thinks!!! I coach 2 October 17th, 2011 11:24 pm gt4ever, don’t feel sorry for me because I sure don’t feel sorry for you, don’t want any of your pitty. Dirty Dawg October 17th, 2011 11:38 pm I gotta say again…particularly since folks are laying this off on our #36, Shawn Willilams…whatever he said that Franklin over-reacted to…and then Grantham to Franklin’s reaction…take a moment to realize that Williams’ emotional response was in response to a game-long series of cheap-shots designed to ‘bait’ Georgia’s players and, ultimately, was behind Grantham’s blow-up. And while you’re thinking about that, ask yourself who was responsible for the baiting strategy…then you can call Georgia names if you like, and I agree that there were better ways of handling this from the get-go, but we won’t stand for any of their crap either. Joe October 17th, 2011 11:49 pm classless uga. and go figure y’all want to fire the one high-character person associated with your team. http://www.wsmv.com/story/15714196/sec-looking-into-coaches-postgame-confrontation Scott October 17th, 2011 11:56 pm Franklin went after a player, therefore, the scrutiny should be on him more so than on Grantham. I don’t blame Grantham at all no matter what the player did that ultimately irked Franklin. Franklin is trying to cover his hide and place the blame on Grantham. As much as I like and respect Richt, he should not have accepted Franklin’s version of the events (when they spoke on the field) without first watching the footage. 
He should have his fellow coach’s back. Franklin is a punk. Shane October 18th, 2011 12:06 am I bet Joe (11:49 post) was thrilled when he heard about or saw the fracas on the field just so he could taunt Dawg fans about it on a blog in the days to follow. I bet he stayed awake last night hoping for just the right blog to post links and call UGA classless. What a life this guy has. Wonder what team he pulls for…could it be Tech? You know I bet it is…Saturday’s loss to Virginia must sting (pun intended), so he had to post a jab at Dawg fans so that it would ease the pain of his team’s loss to an awful team. Not to mention he has to get the jabs in before UGA smacks Tech around (again) in November. Poor little troll. BUCCADAWG October 18th, 2011 12:45 am Good job coach Grantham good to see the passion and loyality to the DAWGS. GATA !!!! GO DAWGS!!! Dawg48 October 18th, 2011 12:50 am James Franklin is a poor mans lane kiffin Thomas Brown October 18th, 2011 1:10 am The Lack of Discipline in The University of Georgia Football Program has been an on-going issue since 20 December 2000, and has continued unabated. I refuse to make Excuses for those who represent this Program which I have poured so much of my monies, energy and time into. Mark Richt hired Todd Grantham. Mark Richt recruited the 30 individuals and gave them Scholarships to my alma mater who were supposed to have played vs their coach and Vanderbilt on Saturday, who did not play vs them. Some of their stories of these 30 individuals, early on, were made public enough for us to know what happened to each. Others, more recently this Football Program now wishes to Try to Cover-Up. Our Star Running Back has been front-and-center with his unsportsmanlike actions after nearly every play; and, yet the Officials at each game to-date, have not; Our Leaders on the Field Have Not; and, Glaringly our Coaching Staff, have Not done or said anything in Public to the Millions who look up to these Role Models in our Society. 
For 11 years now, our Players have been given Preferential Treatment if they are a Star Player in this Program. If they are Not Star Players, they get completely different treatment. I was told that The School I hold my Pigskin from would always make it public when a player is arrested and suspended; This is Not Happening at This Time. We have had several incidents this Season, having Won the Fulmer Cup last season, that it appears this Athletic Department Administration has Covered-Up. Sure the Bad Press has been Overwhelming. When you take the # 11 all-time 1-A wins’ football program, and put up the won/lost results we have after the 2005 season, which was Seven (7) seasons ago, of outside the Top 25 including the win Saturday, it is obvious that the results on the field are not good. The head football coach here, has Lied to the Press about what we deserve and were promised we would be getting full disclosure on. Open Records States, such as Georgia, are supposed to insure that our Youth are not looking up to as Role Models those who do not in the least represent the ideals ingrained to me in my hard studies at The University of Georgia. You could get into specifics, such as what has our Leader on the Field, Quarterback who has been on campus since the bowl game practice 2008 done about the antics on TV by our Star Tailback. What have the Offensive Coordinator and his Staff of Running Backs’ Coaches and others on the offensive coaching staff done about these bravura actions, unsportsmanlike to the very core; what have the Defensive Coordinator and his defensive coaching staff done about these insidious plays we witness on a weekly basis ? Indeed is it worth noting while all this Storm of Controversy ensues on this off-week prior to playing a football team this staff is 2-8 against in our own Division of The SEC, that there are many who posted to blogs at this very site, who have Defended The UGA DC now this week for Standing Up for His Players. 
They are Not His Players; they are Mark Richt’s, and please note that Mark Richt – whom bloggers are really defending in this last season for him to Right this Ship on and off the field for all This Bad Press Daily for 11 years and Mounting – has publically called out His Own players here at UGA and called out His Own Coaching Staff, he alone hired and he alone is responsible for. Mark Richt is Responsible for the Actions of the Players He Gives Scholarships to to My University, while 2 weeks ago I received my call, as did you, from The University of Georgia to Donate further funds so that this University can be Represented in This Fashion. We are told that all the previous arrests are over; Are They ? We know for a Fact, they are Not and that This University has Seen Fit to Hide and Cover-Up the Arrests as best they can to contain at least that part of this Unholy Unruly Group of Scholarship Football Players representing what this coaching staff wants to be represented by. And, so we have allowed this coaching staff to run-off, to force the Scholarship Players they were heralded in the Press for Bringing Here in Recruiting Classes which for 11 consecutive years now have averaged the # 7 Recruiting Class, averaging both of the most high profile recruiting ranking services. But, whom have we chosen and who is it that is teaching and coaching these Scholarship Recruits, and what example are they giving to a World now-a-days which sorely needs and hungers for Good Leadership – winning on the Field and winning off the field, or after the game is over. If you are in year 11 and you can name 120 different arrests from the WebPages of this site’s articles, you have a problem. The solutions seems to be to cover it up. And, if bravura from the players is to be revered; then why can no one point out that it is the head football coach who does not have the backs of his players and does not have the backs of his coaching staff – saying both are way wrong. 
This do-nothing athletic director has certainly done nothing. And, this do-nothing president of The University of Georgia has certainly done nothing for the longest time of all of them. And, so here we are the 1st BCS Poll out for the season and despite 5 consecutive wins, we are not ranked in the Top 25 of the BCS. Instead, preparing for an opponent we have beat 2 times in the tenure of this president of UGA, we are now faced with an internal investigation and SEC investigation that sportsmanship in collegiate football at this University is under siege now; and, make no mistake about it, this – not high school and certainly not in the NFL – is where we shall have NCAA standards of Sportsmanship Upheld. This University certainly does not know what it is doing. Loyal Dawg October 18th, 2011 6:16 am The more that comes out about this incident the bigger whiner Franklin looks like. He looks like the biggest pinhead in the video clip poor mouthing to CMR. I am glad CMR keeps a cool head in these situations. He seems like the only adult on the field. My guess is that this will not be the last time we see Franklin in this situation. You can bet on it. I also get the feeling Muschamps head is going to explode before the end of the season! LOL! Thomas Brown October 18th, 2011 7:03 am In the meantime, Mark Richt said that our Players are WRONG and that our coaching staff are WRONG. Loyal Dawg October 18th, 2011 7:33 am The truth can hurt sometimes Thomas. Like I said, the only adult on the field. Somebody’s got to maintain a sense of order; otherwise, chaos. Bozo N. Richt October 18th, 2011 7:39 am UGA brain trust refuses to cut Richt loose when its clear to everyone the game has passed him by. The evidence is overwhelming, whether its managing a game, managing his assistants, or team discipline. UGA football is second rate and the buck stops, not with Richt, but with McGarity and Adams. Second rate is as second rate does. 
I’ll say this, If Brantley is back, we will lose to Florida, and then to Auburn and then to Tech. Wait and see. BuzzBeat October 18th, 2011 8:22 am The UGA-lovers at the SEC office will not do anything to hurt the beloved university of football in athens. They will come down hard on vanderbilt. chase October 18th, 2011 8:38 am gotta love that it was at Vandy and somehow the cameras only pointed towards Grantham after Franklin went after Williams… If there is a camera on on the other side of the field, we could read Frankilin’s lips and actions too Jimmy Crack October 18th, 2011 8:44 am No matter what happens with UGA and the SEC office…. HEADLINE: Lady Gaga Tech lost to inept VIRGINIA. Al Ciraldo is spinning in his grave. LOST TO VIRGINIA? YES pastor bob hunnicutt October 18th, 2011 8:45 am what is the big deal its football, hard nose football is what uga was known for not lets not hurt anyones feeling. come on 20 years ago this story wouldnt even made the paper. TampaDawg October 18th, 2011 8:59 am BuzzBeat, the bitterness of a probation laden GT fan base is pretty funny. Because UGA doesn’t get hit with infractions by the NCAA or penalties by the SEC must mean that they are the beloved school. Nashville Cat October 18th, 2011 9:05 am Fat ass grantham needs to have a helmet hit on his fat ass mouth. He will shut up then. TampaDawg October 18th, 2011 9:13 am Bobo is not the problem dude SAYS “Richt or Grantham could have challenged Franklin without looking like such a fool. They are an embarrassment.” And Franklin could have actually gone straight to one of the coaches without chasing down a kid which will NEVER be the acceptable route for a coach. If he goes straight to Grantham and avoids Williams altogether, THIS MOST LIKELY NEVER HAPPENS. But he didn’t and now idiots like you think Franklin should be given a pass. Hope you never have to lead people because that is a disaster waiting to happen. 
TampaDawg October 18th, 2011 9:14 am Nashville Cat October 18th, 2011 9:05 am Fat ass grantham needs to have a helmet hit on his fat ass mouth. He will shut up then. ——————– None of the girls on Vandy’s coaching staff have the guts to do it though so might want to have another plan. TampaDawg October 18th, 2011 9:18 am Thomas Brown, your alma mater is not UGA, just freaking admit it. And I don’t have a problem with that, I have a problem with you lying about it. I have a problem with you posing as a UGA fan yet having NOTHING positive to say about the program as if we can’t see right through you. Your constant rambling posts probably never get entirely read by anyone anymore since it’s the same old crap that has been sent through the wash for weeks. Just go on to another blog where people might want to here the rants of someone sounding like an 8 year old. jenden@mindspring.com October 18th, 2011 9:20 am TidePride, ill-bred is better than in-bred. You must be one the 33% percent of Alabama that is on food stamps. Say hello to Spanky$$$$for me. War Eagle! just sayin' October 18th, 2011 10:17 am Typical UGA behavior. Richt has no control over his players or coaching staff. What an embarrassment. And no, that type of behavior is NOT part of football. It’s a part of totally immature and moronic behavior. Childish and deserving of suspensions and terminations. Betel October 18th, 2011 10:26 am I just think it’s hilarious that there was so much chippiness and anger over mighty Vanderbilt. What’s UGA going to do when they play a real game, instead of this stream of unranked cupcakes? Bob October 18th, 2011 11:05 am I don’t disapprove of Grantham sticking up for his players. I disapprove of his childish, unmanly way of going about it. Good sentiment. Right to do something perhaps. What he chose to do was a bad judgment. bob October 18th, 2011 11:39 am I thoght all games ended with SEC,SEC,SEC! 
TampaDawg October 18th, 2011 11:43 am just sayin’ October 18th, 2011 10:17 am Typical UGA behavior. Richt has no control over his players or coaching staff. What an embarrassment. —————– Typical troll behavior .. Come on here making baseless statements, no facts and no guts to admit what qualified for saint hood team you’re a fan of. just sayin' October 18th, 2011 11:52 am the 10:17 comment was not me…name hijacked…probably nerd…i’ve said before…franklin went after a ga player and grantham defended them…i support grantham….franklin a black kiffin…won’t go far… BigdaddyC October 18th, 2011 11:52 am How about Grantham uses that fire to get in the face of his own player who was disrespecting another NCAA coach and players after a WIN.. Hey man, as a former linebacker, I have no problem with fire and smash mouth football, but keep it between the whistles.. Too many personal fouls and a lack of discipline. I’ve got a good feeling Vandy’s coach didn’t just pick a random player and go off on him. Yea.. have your teams back, but sometimes having their back means teaching and discipline… Go Dawgs! ugaDAWG October 18th, 2011 1:00 pm Nobody ever heard for James Franklin until now. He’s a loser. ugaDAWG October 18th, 2011 1:03 pm Coach Richt: you really shouldn’t be apologizing to this Franklin a$$hole based upon what his lying a tells you. That’s the lowest in leadership any head coach can display. Throwing your coaches and players under the bus without hearing their side means you should go to the house. Your program is gone. Grantham Rules October 18th, 2011 1:18 pm Vanderbilt needs to stop complaining and get over it or Richt will put a bigger beatdown on them next year and run up the score! darnell October 18th, 2011 1:48 pm Grantham’s pathology is much more severe than has been seen. He was bullied as a child, and it remains to be seen when he will resort to violence. Fortunately for him, he has an outlet, and that outlet is on a public stage. 
Otherwise, he would surely resort to criminal behaviour.

cdog October 18th, 2011 1:52 pm
If Vanderbilt had won, Franklin would have been overjoyed and would have done a Jim Harbaugh. Franklin wanted to win so badly, and when he didn't, he took offense when he should have exhibited good sportsmanship, gotten off the field, and gotten ready for his next game. Grantham should have just gotten off the field. He got what he came for, a win, so why fuss afterward? Let Franklin fuss and leave the field.

Nashville Cat October 18th, 2011 2:21 pm
Tampa dog, the girl is fat ass Grantham.

hornblowermg October 18th, 2011 2:51 pm
The video agrees with Coach Grantham's account of what happened. Franklin was never headed toward CMR; he was looking for, and yelling at, Williams. Franklin lied after the game, and he is lying now. He is a liar.

Snoop October 18th, 2011 3:20 pm
Fire Richt. Fire McGarity. Fire Bobo. Give Grantham a raise. Up with manly men. Down with PC wimps.

Snoop October 18th, 2011 3:23 pm
Richt should apologize to the Bulldog Nation for ripping off the State of Georgia for over THIRTY MILLION DOLLARS in unearned income. Isn't that considered grand theft? The Georgia State Patrol should arrest da Preacha Man Mark Marjoe Richt and his accomplice, da Water Girrrrulll.

Rampdawg October 18th, 2011 3:43 pm
Hey! Thomas Brown, you're a vagina. You talk out of your vagina. Close your legs, your breath stinks. You sound just like a woman that hasn't had any. All you do is bit*h, bit*h, bit*h. Poured so much of my monies?

MY 2 cents October 18th, 2011 3:51 pm
Every dawg fan who takes a shot at Grantham needs to apologize. Some of you on here are never satisfied. You want Georgia coaches to be more emotional; well, you just got a huge dose. Maybe an overdose for some of you. Personally, I'm proud that ONE of our coaches has the BRASS to stand up for his players. That is just what this team needed, a nice shot of "you played your asses off for me all season and I've got your back"!! This is football, not a nursery school.

October 18th, 2011 4:16 pm
My 2 cents…. Your comments don't agree with mine, but I still like how you said what you said, and I agree with some of it.

Gatormeat October 18th, 2011 4:54 pm
The latest ugly incident concerning Georgia is just more proof the SEC needs to move toward some more conference realignment. Between threatening to choke opposing players, starting fights after games like last week, and cheap shots, teams like Georgia hurt the integrity of the SEC. We will be taking care of the choking incident ourselves next week; a few dogs will be carried off the field instead of walking. But Georgia does not belong with the honorable programs like Florida, Alabama, LSU and the others. The SEC needs to take out its trash; Georgia needs to go.

Loyal Dawg October 18th, 2011 5:38 pm
Honorable programs like Florida, Alabama, & LSU?!? LOL!!! You've got to be kidding me?? People sure do have short memories on this board (on purpose). To the moron who wants to use this as another opportunity to try to bash CMR: you'd really have something to shout about if CMR had acted like Franklin and allowed the situation to turn into a major brawl. How would you feel if half the team had gotten suspended because of fights at the end of the game?? You are a jerk and an idiot. Go back under your rock.

2dawgs9 October 18th, 2011 5:41 pm
Grantham for head coach. Love seeing a coach who has passion for what he does and for his players. Man enough, unlike Vandy's coach, to say maybe he took it too far. Richt apologizing to their coach for the actions of players and coaches before he learned what it was all about totally ruined whatever hope I had of him being the man for the job as our coach.

MY 2 cents October 18th, 2011 5:50 pm
You're calling Fla. an honorable program? What have you been watching lately? If any coach deserves to be called out, it's Will M. I've never seen a coach attack officials the way he has. Never! Even the Florida fans and alums agree he is out of control. This isn't a once-a-year rant either. This happens week in and week out. He is a hothead who will go too far soon. He is a ticking time bomb, and when we beat Fla., that just might set that bomb off. Florida needs to go!

Loyal Dawg October 19th, 2011 7:01 am
Wouldn't you love to see Vandy and Florida have a slugfest? Can you imagine Franklin whining about FL players to Muschamp? I'd love to see that postgame fight.

[...] UGA, SEC reviewing coaches' postgame confrontation [...]

jake October 19th, 2011 7:49 pm
Bottom line is UGA coaches came from Special Olympics.

ChiliDawg October 19th, 2011 10:17 pm
Coach Franklin deserves more of the blame for this than he will receive. This is not the last controversy that will involve Coach Franklin. You can take that to the bank.

Thomas Brown October 19th, 2011 11:14 pm
Without these actions by these two (2) coaches, their players would not have been suspended today. So where are the leaders', the role models', the adults', the coaches' suspensions? Hello, SEC, tell me: would you really have suspended any of the 3 players had not James Franklin, smartly as it turns out, confronted Todd Grantham, both of whom wear their hearts on their sleeves? Go ahead, it's ok, you can tell us.

October 20th, 2011 8:05 pm
[...] postgame confrontation between Vanderbilt coach James Franklin and UGA defensive coordinator Todd Grantham appears to be a positive for Vanderbilt's recruiting [...]
http://www.askphysics.com/problem-from-laws-of-motion/
# Problem from laws of motion

Both the block and the wedge are of mass m. The question is to find the acceleration of the wedge and of the block. I can understand how to solve it taking the wedge as the frame of reference, but when I try to solve it taking the ground as the frame of reference, I get different accelerations. The answer should be:

acceleration of wedge = [ g sin(theta) cos(theta) ] / ( 1 + sin^2(theta) )
acceleration of block = big one

My approach to solve it wrt the ground frame of reference:

For the block: g sin(theta) is the acceleration, and Normal Force = mg cos(theta)
For the wedge: g cos(theta) sin(theta) = acceleration

But the answer should come out as I said earlier. Where am I wrong?
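A quick numerical sketch of the ground-frame solution (pure Python; the masses, angle, and g below are illustrative values, not from the question). Newton's second law for the block (x and y) and the wedge (x) gives three equations that are linear in the unknowns N, A (wedge acceleration) and a_r (block acceleration relative to the wedge), and solving them reproduces the quoted answer:

```python
import math

def wedge_block_accels(m, M, g, theta):
    """Frictionless block (mass m) on a frictionless wedge (mass M), ground frame.

    Unknowns: N   - normal force between block and incline,
              A   - wedge acceleration (recoil direction taken positive),
              a_r - block acceleration relative to the wedge, down the slope.
    Newton's second law gives three equations linear in (N, A, a_r):
      block, x:  m(-A + a_r*cos t) = N*sin t
      block, y:  -m*a_r*sin t      = N*cos t - m*g
      wedge, x:  M*A               = N*sin t
    """
    s, c = math.sin(theta), math.cos(theta)
    rows = [[-s, -m, m * c],    # coefficients of (N, A, a_r)
            [-c, 0.0, -m * s],
            [-s, M, 0.0]]
    rhs = [0.0, -m * g, 0.0]

    def det3(X):
        return (X[0][0] * (X[1][1] * X[2][2] - X[1][2] * X[2][1])
                - X[0][1] * (X[1][0] * X[2][2] - X[1][2] * X[2][0])
                + X[0][2] * (X[1][0] * X[2][1] - X[1][1] * X[2][0]))

    D = det3(rows)

    def with_col(j):  # Cramer's rule: replace column j with the right-hand side
        return [[rhs[i] if k == j else rows[i][k] for k in range(3)]
                for i in range(3)]

    N, A, a_r = (det3(with_col(j)) / D for j in range(3))
    return N, A, a_r

theta = math.pi / 6
N, A, a_r = wedge_block_accels(1.0, 1.0, 9.8, theta)
s, c = math.sin(theta), math.cos(theta)
assert abs(A - 9.8 * s * c / (1 + s * s)) < 1e-9  # matches the quoted answer
assert N < 9.8 * c  # N is LESS than m*g*cos(theta): the wedge accelerates away
```

The last assertion points at the error in the ground-frame attempt: N = mg cos(theta) and a_block = g sin(theta) hold only when the incline is fixed. Once the wedge is free to recoil, the normal force drops below mg cos(theta), and both accelerations change accordingly.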
http://mathhelpforum.com/advanced-algebra/196790-similarity-matrices.html
# Math Help - Similarity matrices

## Similarity matrices

Let A, B be matrices such that $A^2=A,\quad B^2=B$. Prove that A and B are similar if and only if $rank A= rank B$, i.e. that $A = UBU^{-1}$ for some invertible matrix $U$.
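The key fact behind the proof is that an idempotent matrix of rank $r$ is similar to the block-diagonal matrix $\mathrm{diag}(I_r, 0)$, so two idempotents of equal rank are both similar to the same canonical form, hence to each other. A small pure-Python illustration (not the general proof) with two concrete rank-1 idempotent $2\times 2$ matrices:

```python
# 2x2 matrix helpers and a concrete instance of the claim:
# A = diag(1, 0) and B = [[1,1],[0,0]] are both idempotent of rank 1,
# and conjugating A by U (whose columns are eigenvectors of B) gives B.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 0]]       # idempotent, rank 1: the canonical form diag(I_1, 0)
B = [[1, 1], [0, 0]]       # idempotent, rank 1
U = [[1, 1], [0, -1]]      # columns: eigenvectors of B for eigenvalues 1 and 0
U_inv = [[1, 1], [0, -1]]  # this particular U is its own inverse

assert matmul(A, A) == A and matmul(B, B) == B  # both idempotent
assert matmul(matmul(U, A), U_inv) == B         # B = U A U^{-1}
```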
http://www.ancestry.com/wiki/index.php?title=Help:Images&diff=5527&oldid=5526
# Help:Images

## Revision as of 19:36, 6 May 2010

Important note: When you edit this page, you agree to release your contribution into the public domain. If you don't want this or can't do this because of license restrictions, please don't edit. This page is one of the Public Domain Help Pages, which can be freely copied into fresh wiki installations and/or distributed with MediaWiki software; see Help:Contents for an overview of all pages. See Project:PD help/Copying for instructions.

This page explains the image syntax when editing the wiki. You or another user must usually upload an image before you can use it on a page.

## Syntax

The full syntax for displaying an image is:

[[File:{file_name}|{options}]]

Where options can be zero or more of the following, separated by pipes:

• border, frame, thumb, or frameless: Controls how the image is formatted
• left, right, center, none: Controls the alignment of the image within a text
• baseline, sub, super, top, text-top, middle, bottom, text-bottom: Controls the vertical alignment of the image within a text
• {width}px: Resizes the image to the given width in pixels
• {width}x{height}px: Resizes the image to fit within the given width and height in pixels; it is possible to specify only the height by writing x{height}px
  • Note that the image will always retain its aspect ratio.
• link={destination}: Allows to link to an arbitrary title, URL or just nowhere (1.14+)
  • Note that link cannot be used in conjunction with thumb, as thumb is always meant to link to the larger version of the image. In addition, link cannot be used with frame.
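For example, combining several of these options (an illustrative combination using the placeholder file referred to throughout this page):

[[File:Example.jpg|thumb|left|100px|alt=A sunflower|Caption text]]

This renders a 100-pixel-wide, left-aligned thumbnail whose alternative text is "A sunflower" and whose caption is "Caption text".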
• alt={alternative text}: For changing the alternative text (alt="") of an image (1.14+)
• Special cases:
  • page=1: Displays the specified page when showing a djvu or pdf file

The options can be given in any order. If the given options conflict with each other, the latter is applied, except for the format options, where the options take priority in the order of frame, thumb, and frameless and/or border. If a given option does not match any of the other possibilities, it is assumed to be the caption text. Caption text can contain wiki links or other formatting. If no caption text is supplied, a caption is automatically created showing the file name; to completely remove the caption, set it to <span title=""></span>. For example, [[File:Example.jpg|20px|<span title=""></span>]].

## Format

The following table shows the effect of all available formats. When the height of an image in thumbnail is bigger than its width and you find it too outstanding, you may try the option upright, which will try to adjust its size to a more desirable size. Note that by writing thumb=(unknown), you can use a different image for the thumbnail.

## Alignment

Note that when using frame or thumb, the default alignment will be right. The original table embeds the image in a run of sample text for each alignment, in the pattern

... text text text [[File:example.jpg|caption]] text text text ...

with one row for "no alignment specified, or default alignment", one row for "specify alignment as none" ([[File:example.jpg|none|caption]]), and further rows demonstrating the remaining alignment keywords (center, left, right) in the same pattern.

### Vertical alignment

text text [[File:Example.jpg|20px]]
text text [[File:Example.jpg|20px|baseline]]
text text [[File:Example.jpg|20px|sub]]
text text [[File:Example.jpg|20px|super]]
text text [[File:Example.jpg|20px|text-top]]
text text [[File:Example.jpg|20px|middle]]
text text [[File:Example.jpg|20px|bottom]]
text text [[File:Example.jpg|20px|text-bottom]]

results in the image being placed at the corresponding vertical position relative to the surrounding text.

## Size and Frame

Among different formats, the effect of the size parameter may be different, as shown below.

• When the format is not specified, or only bordered, the size can be both reduced and enlarged.
• An image with frame always ignores the size specification.
• The size of an image with thumb and frameless can be reduced, but can not be enlarged beyond the original size of the image.

For how it appears when its size is not specified, see the Format section above.
The original table shows each of the five formats rendered Reduced ([[File:Example.jpg|50px]]) and Enlarged ([[File:Example.jpg|500px]]), with the corresponding format keyword (border, frame, thumb, or frameless) added to the wikitext in each row.

## Stopping text flow

On occasion it is desirable to stop text from flowing around an image. Depending on the web browser's screen resolution and such, text flow on the right side of an image may cause a section header (for instance, == My Header ==) to appear to the right of the image, instead of below it, as a user may expect. The text flow can be stopped by placing <br style="clear: both" /> before the text that should start after the image.

## Gallery of images

It's easy to make a gallery of thumbnails only, not other images, with the <gallery> tag. The syntax is:

<gallery>
File:Image Name.ext|caption
File:Image Name.ext|caption
{...}
</gallery>

Note that the image code is not enclosed in brackets when enclosed in gallery tags. Captions are optional, and may contain wiki links or other formatting. For example:

<gallery>
File:Example.jpg|Item 1
File:Example.jpg
File:Example.jpg
File:Example.jpg| ''italic caption''
File:Example.jpg|on page "{{PAGENAME}}"
</gallery>

### Parameters

The gallery tag itself takes several additional parameters:

<gallery {parameters}> {images} </gallery>

• caption={caption}: sets a caption on the gallery.
• widths={width}px: sets the widths of the images. Note the plural, widths
• heights={heights}px: sets the (max) heights of the images.
• perrow={integer}: sets the number of images per row.
Example:

<gallery widths=60px heights=60px perrow=7 caption="sunflowers are groovy">
File:Example.jpg
File:Example.jpg
File:Example.jpg
File:Example.jpg
File:Example.jpg
File:Example.jpg
File:Example.jpg
File:Example.jpg
File:Example.jpg
File:Example.jpg
</gallery>

If you put a colon (:) before File:, the image will not be embedded and the link will lead to the description page of the file.

[[:File:Example.jpg]]
[[:File:Example.jpg|Sunflowers]]

This will make a 50px width picture with a link to the page MediaWiki:

[[File:Example.jpg|50px|link=MediaWiki]]

You can use the pseudo-namespace "Media" to link directly to a file, bypassing the description page.

[[Media:Example.jpg]]
[[Media:Example.jpg|Sunflowers]]

You can also use:

[[Special:FilePath/Example.jpg]]

which can be used to link to a potential file, even if it doesn't exist. You can also use:

{{FILEPATH:Example.jpg}}

which generates an external URL to the file inline: http://www.ancestry.com/wiki/images/a/a9/Example.jpg

## Requisites

Before using images, the wiki must have enabled file uploads and a user has to upload the file. System administrators may also set the wiki to accept files from foreign repositories, such as the Wikimedia Commons. For server side image resizing it is necessary to have a scaler configured (such as GD2, ImageMagick, etc.).

## Files at other websites

You can link to an external file available online using the same syntax used for linking to an external web page.

[http://url.for/some/image.png]

Or with different text:
https://www.semanticscholar.org/paper/On-Implicit-ODEs-with-Hexagonal-Web-of-Solutions-Agafonov/e7cbe5179a9bef1b2051cabef3e56d3c07965408
# On Implicit ODEs with Hexagonal Web of Solutions

@article{Agafonov2008OnIO,
title={On Implicit ODEs with Hexagonal Web of Solutions},
author={Sergey I. Agafonov},
journal={Journal of Geometric Analysis},
year={2008},
volume={19},
pages={481-508}
}

• S. Agafonov
• Published 4 August 2008
• Mathematics
• Journal of Geometric Analysis

Solutions of an implicit ODE form a web. Already for a cubic ODE the 3-web of solutions has a nontrivial local invariant, namely the curvature form. Thus, any local classification of implicit ODEs necessarily has functional moduli if no restriction on the class of ODEs is imposed. In this paper the most symmetric case of hexagonal 3-web of solutions is discussed, i.e. the curvature is supposed to vanish identically. A finite list of normal forms is established under some natural regularity…
https://ja.overleaf.com/learn/latex/Questions/How_do_I_insert_an_image_into_my_document%3F
Then you can include them in your document using the \includegraphics command (provided by the graphicx package), such as in this example:

\documentclass{article}
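The example was cut off during extraction; a minimal complete sketch might look like the following (this assumes the placeholder name example-image, which ships with the mwe package; substitute the name of your own uploaded file):

```latex
\documentclass{article}
\usepackage{graphicx} % provides \includegraphics

\begin{document}
% The width option is not required; here the image is scaled
% to half the text width.
\includegraphics[width=0.5\textwidth]{example-image}
\end{document}
```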
https://socratic.org/questions/what-is-the-limit-of-2x-2-4x-x-1-as-x-goes-to-infinity
# What is the limit of (2x^2-4x) / (x+1) as x goes to infinity?

Oct 21, 2015

$\infty$

#### Explanation:

Dividing the numerator and denominator by $x$:

${\lim}_{x \to \infty} \frac{2 {x}^{2} - 4 x}{x + 1} = {\lim}_{x \to \infty} \frac{2 x - 4}{1 + \frac{1}{x}} = \infty$

When $x \to \infty$, $\frac{1}{x} \to 0$, so the denominator tends to $1$ while the numerator $2x - 4$ grows without bound.
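A numeric sanity check (not a proof) that the quotient grows without bound, in pure Python:

```python
# f(x) = (2x^2 - 4x)/(x + 1) should increase without bound as x grows,
# consistent with the limit being +infinity.
def f(x):
    return (2 * x**2 - 4 * x) / (x + 1)

values = [f(10.0**k) for k in range(1, 7)]           # x = 10, 100, ..., 1e6
assert all(b > a for a, b in zip(values, values[1:]))  # strictly increasing
assert f(1e6) > 1e6  # already exceeds a large fixed bound
```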
https://zbmath.org/?q=an:0836.49013
# zbMATH — the first resource for mathematics

Convexity properties of the minimum time function. (English) Zbl 0836.49013

The paper deals with the minimum time optimal control problem governed by the system \begin{aligned} y'(t) & = f \bigl( y(t), u(t) \bigr) \tag{*} \\ y(0) & = x \in \mathbb{R}^n, \quad u : [0, + \infty) \to U, \end{aligned} where $$U \subset \mathbb{R}^m$$ is a compact set. Given a target $$K$$ and denoting by $$\Omega \subset \mathbb{R}^n$$ the set of points which can be driven in a finite time on $$K$$, the minimum time function $$\tau : \Omega \to [0, + \infty)$$ is defined as $\tau (x) : = \inf \biggl\{ T : \bigl( y(t), u(t) \bigr) \text{ solution of } (*),\;y(T) \in K \biggr\}.$ Under a Petrov type controllability assumption, it is proved that the semiconcavity of the distance function from the target (which can be considered as a mild regularity assumption on $$K)$$ implies the semiconcavity of $$\tau$$. For linear control systems a semiconvexity property can also be established. These conditions can be used to study the structure (Hausdorff dimension estimates, propagation of singularities) of the set of nondifferentiability points of $$\tau$$.

Reviewer: L. Ambrosio (Pavia)

##### MSC:

49L20 Dynamic programming in optimal control and differential games
49L25 Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games
93C10 Nonlinear systems in control theory
https://www.toppr.com/ask/question/a-particle-is-oscillating-on-the-xaxis-with-an-amplitude/
# A particle is oscillating on the X-axis with an amplitude about the point with a frequency . A concave mirror of focal length is placed at the origin (see figure). Identify the correct statements:

(A) The image executes periodic motion
(B) The image executes non-periodic motion
(C) The turning points of the image are asymmetric w.r.t. the image of the point at
(D) The distance between the turning points of the oscillation of the image is

## Answer: (A), (D)
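The numeric values of this problem were lost in extraction, but the asymmetry behind statements (C) and (D) follows directly from the mirror formula $\frac{1}{v} + \frac{1}{u} = \frac{1}{f}$: equal object-side displacements about the oscillation centre map to unequal image-side displacements, because $v$ is a nonlinear function of $u$. A minimal sketch with purely illustrative numbers (`f`, `x0`, `A` are stand-ins, not the original data):

```python
# Illustrative values only -- the original problem's numbers were lost.
f = 10.0           # concave mirror focal length (cm)
x0, A = 25.0, 3.0  # oscillation centre and amplitude (cm), as object distances

def image(u, f):
    # Mirror formula 1/v + 1/u = 1/f  =>  v = u*f / (u - f)
    return u * f / (u - f)

v_center = image(x0, f)
v1 = image(x0 - A, f)  # image of the near turning point
v2 = image(x0 + A, f)  # image of the far turning point

# The two image-side displacements differ, so the image's turning points
# are asymmetric about the image of the oscillation centre.
print(v1 - v_center, v_center - v2)
print(abs(v1 - v2))   # distance between the image's turning points
```

The image still oscillates with the object's period (the mirror formula is applied instant by instant), which is why (A) holds even though the image motion is not simple harmonic.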
http://cstheory.stackexchange.com/questions/22250/are-two-qubit-unitaries-necessary-for-universal-quantum-computation
# Are two-qubit unitaries necessary for universal quantum computation?

I was going through Fast Universal Quantum Computation with Railroad-switch Local Hamiltonians by Daniel Nagaj. In the first sentence of the fifth paragraph on the fourth page, he said,

Two-qubit unitaries $U_t$ are necessary for universal quantum computation

I understand that there are universal sets of two-qubit quantum gates, but does it imply that the set must always consist of two-qubit gates?

- How else would you get something like CNot? – Logan Mayfield Apr 24 '14 at 15:34

No, it is not necessary for a set of gates universal for quantum computation to contain a two-qubit gate. A common example of a set of gates universal for quantum computation is $\{H, R_{\pi / 4}, \operatorname{CNOT}\}$, where $H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ is the (one-qubit) Hadamard gate, $R_{\pi / 4} = \begin{bmatrix} 1 & 0 \\ 0 & e^{\pi i / 4} \end{bmatrix}$ is the (one-qubit) $\frac{\pi}{8}$-gate, and $\operatorname{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$ is the (two-qubit) controlled NOT gate. To obtain a set of gates universal for quantum computation but without any two-qubit gates, we can simply replace the two-qubit gate $\operatorname{CNOT}$ with the three-qubit gate $\operatorname{CNOT} \otimes \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. This three-qubit gate applies $\operatorname{CNOT}$ to the first two qubits and ignores the third qubit. The correct statement is: one-qubit unitaries are not sufficient for universal quantum computation.

- A much better example is Hadamard + Toffoli.
While still in violation of the likely spirit of the question, it is a smaller violation than a gate basis whose sole multi-qubit operation is a two-qubit-and-also-a-one-qubit operation; and also the result is in itself interesting (that quantum computation is not "about" complex amplitudes per se, and can be simulated with bounded error by two such simple operations). – Niel de Beaudrap Apr 24 '14 at 22:43

I think this question is probably better suited to cs.stackexchange.com, and I hesitate to answer it. That single-qubit gates are not universal was stated by Deutsch, Barenco, and Ekert in 1995. They point out that you cannot entangle un-entangled qubits with only single-qubit operators. You can also prove this without any appeal to entanglement or states in general, by showing that at least one two-qubit operator, namely $CNot$, cannot be constructed from single-qubit operators.

Assume single-qubit operators are universal. Then there must exist some $A$ and $B$ such that $A \otimes B = CNot = (P_0 \otimes I) + (P_1 \otimes X)$. It then follows that $A_{00}B = I$, $A_{11}B = X$, and $A_{01}B = A_{10}B = 0$. For $A_{00}B$ to be $I$, $B$ must be a scalar multiple of $I$; similarly, for $A_{11}B$ to be $X$, $B$ must be a scalar multiple of $X$. Since $I$ and $X$ are not proportional, this is a contradiction. So single-qubit operators are not universal, as they cannot carry out at least one two-qubit unitary operation, namely $CNot$.
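The block argument above can also be checked mechanically: a $4 \times 4$ matrix equals $A \otimes B$ for one-qubit $A$, $B$ exactly when its Van Loan–Pitsianis rearrangement (each $2 \times 2$ block vectorized into a row) has rank 1. A quick numerical sketch (using numpy; not part of the original answer):

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def kron_rank(M):
    # Stack each vectorized 2x2 block of M as a row; M = A (x) B for
    # one-qubit A, B exactly when this rearranged matrix has rank 1.
    R = np.array([M[2*i:2*i+2, 2*j:2*j+2].ravel()
                  for i in range(2) for j in range(2)])
    return np.linalg.matrix_rank(R)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(kron_rank(np.kron(H, H)))  # 1: genuinely a product of one-qubit gates
print(kron_rank(CNOT))           # 2: CNOT is not A (x) B for any A, B
```

For CNOT the rearranged matrix has the two independent rows vec($I$) and vec($X$), which is exactly the contradiction in the proof above.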
https://stats.stackexchange.com/questions/391537/hierarchical-clustering-based-on-relative-error
# Hierarchical clustering based on relative error

How can I use Weka to do hierarchical clustering, but based on the % difference between two elements rather than absolute elements? Let's say I want to draw many circles with specific radii. I have a vector of numbers where each number represents the radius of one circle. For each number, I set my compass to that radius using a ruler, draw, and repeat. I notice some radii are pretty close. Because it is tedious to set the compass, I decide to just use one approximation for all of them. For example:

• If I have values $(51.3, 51.7, 51.8)$ they are all pretty similar. I could just set my compass to their mean 51.6, and draw 3 times, which is close enough (the error is only 0.6%, 0.1% and 0.4%) but saves me some work.
• On the other hand, if the values were $(12, 34, 119)$ I definitely want to set the compass each time. If I just use the mean 55, then they would be off by 358%, 61% and 53%. Not really worth it.

So it seems like I want to group similar values, hence I should cluster. But because I'm interested in percent error, I want the distances to be scaled by each element. I couldn't figure out how to do that. If I just hierarchically cluster the values by Euclidean distance, the high values do fine, but lower values get grouped close together because the Euclidean distance between them is small. I think if instead of calculating distance as $|i-j|$, it was calculated as $2 \frac{|i-j|}{i+j}$, and the hierarchical clustering ran on that, it would give me the result I want. However, I couldn't find a way of implementing this in Weka. I am given options of Euclidean, Manhattan, Chebyshev and Minkowski distances and none of these seems to scale by mean. I couldn't come up with a filter that would do what I want either. Do I really have to write my own distance function (how?) or am I overlooking something simple?
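Weka aside, the proposed metric $2\,|i-j|/(i+j)$ is easy to plug into any hierarchical-clustering routine that accepts a custom or precomputed distance. A sketch in Python with scipy (scipy assumed; the radii and the 5% cut-off are illustrative, not from the question's data):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

radii = np.array([12.0, 34.0, 51.3, 51.7, 51.8, 119.0]).reshape(-1, 1)

# Relative difference 2|i - j| / (i + j) as the pairwise distance.
d = pdist(radii, lambda a, b: 2 * abs(a[0] - b[0]) / (a[0] + b[0]))

Z = linkage(d, method="average")
# Cut the dendrogram where merging would exceed ~5% relative error.
labels = fcluster(Z, t=0.05, criterion="distance")
print(labels)  # the three 51.x values share one cluster; 12, 34, 119 stay apart
```

A related shortcut: for nearby values $|\ln i - \ln j| \approx 2|i-j|/(i+j)$, so clustering the logarithms of the radii with plain Euclidean distance gives nearly the same grouping, and a log-transform filter followed by ordinary Euclidean clustering may be easier to express in Weka than a custom distance function.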
http://mb.math.cas.cz/mb122-3/8.html
MATHEMATICA BOHEMICA, Vol. 122, No. 3, pp. 295-299, 1997

# Normal spaces and the Lusin-Menchoff property

## Pavel Pyrih

Pavel Pyrih, Department of Mathematical Analysis, Charles University, Sokolovska 83, 186 00 Prague 8, Czech Republic, e-mail: pyrih@karlin.mff.cuni.cz

Abstract: We study the relation between the Lusin-Menchoff property and the $F_\sigma$-"semiseparation" property of a fine topology in normal spaces. Three examples of normal topological spaces having the $F_\sigma$-"semiseparation" property without the Lusin-Menchoff property are given. A positive result is obtained in the countable compact space.

Keywords: fine topology, finely separated sets, Lusin-Menchoff property, normal space

Classification (MSC 1991): 54A10, 26A03, 31C40

Full text available as PDF (smallest), as compressed PostScript (.ps.gz) or as raw PostScript (.ps).
https://aimsciences.org/article/doi/10.3934/nhm.2011.6.61
# American Institute of Mathematical Sciences

March 2011, 6(1): 61-75. doi: 10.3934/nhm.2011.6.61

## On the convergence rate in multiscale homogenization of fully nonlinear elliptic problems

1 Dip. di Matematica Pura e Applicata, Univ. dell'Aquila, loc. Monteluco di Roio, 67040 l'Aquila, Italy
2 Dipartimento di Matematica Pura ed Applicata, Università di Padova, via Trieste 63, 35121 Padova

Received October 2009. Revised May 2010. Published March 2011.

This paper concerns periodic multiscale homogenization for fully nonlinear equations of the form $u^\epsilon+H^\epsilon (x,\frac{x}{\epsilon},\ldots,\frac{x}{\epsilon^k},Du^\epsilon,D^2u^\epsilon)=0$. The operators $H^\epsilon$ are regular perturbations of some uniformly elliptic, convex operator $H$. As $\epsilon\to 0^+$, the solutions $u^\epsilon$ converge locally uniformly to the solution $u$ of a suitably defined effective problem. The purpose of this paper is to obtain an estimate of the corresponding rate of convergence. Finally, some examples are discussed.

Citation: Fabio Camilli, Claudio Marchi. On the convergence rate in multiscale homogenization of fully nonlinear elliptic problems. Networks & Heterogeneous Media, 2011, 6 (1) : 61-75. doi: 10.3934/nhm.2011.6.61
https://en.wikipedia.org/wiki/Emiliania_huxleyi
# Emiliania huxleyi

[Figure: a scanning electron micrograph of a single Emiliania huxleyi cell.]

Binomial name: Emiliania huxleyi (Lohm.) Hay and Mohler

[Figure: bloom of E. huxleyi in Hardangerfjord, Norway, May 2020.]

Emiliania huxleyi is a species of coccolithophore found in almost all ocean ecosystems from the equator to sub-polar regions, and from nutrient rich upwelling zones to nutrient poor oligotrophic waters.[1][2][3][4] It is one of thousands of different photosynthetic plankton that freely drift in the euphotic zone of the ocean, forming the basis of virtually all marine food webs. It is studied for the extensive blooms it forms in nutrient-depleted waters after the reformation of the summer thermocline. Like other coccolithophores, E. huxleyi is a single-celled phytoplankton covered with uniquely ornamented calcite disks called coccoliths. Individual coccoliths are abundant in marine sediments although complete coccospheres are more unusual. In the case of E. huxleyi, not only the shell, but also the soft part of the organism may be recorded in sediments. It produces a group of chemical compounds that are very resistant to decomposition. These chemical compounds, known as alkenones, can be found in marine sediments long after other soft parts of the organisms have decomposed. Alkenones are most commonly used by earth scientists as a means to estimate past sea surface temperatures.

## Basic facts

Emiliania huxleyi was named after Thomas Huxley and Cesare Emiliani, who were the first to examine sea-bottom sediment and discover the coccoliths within it. It is believed to have evolved approximately 270,000 years ago from the older genus Gephyrocapsa Kampter[5][6] and became dominant in planktonic assemblages, and thus in the fossil record, approximately 70,000 years ago.[5][7] It is the most numerically abundant and widespread coccolithophore species.
The species is divided into seven morphological forms called morphotypes based on differences in coccolith structure [8][9][10] (See Nannotax for more detail on these forms). Its coccoliths are transparent and commonly colourless, but are formed of calcite which refracts light very efficiently in the water column. This, and the high concentrations caused by continual shedding of their coccoliths makes E. huxleyi blooms easily visible from space. Satellite images show that blooms can cover areas of more than 10,000 km$^2$, with complementary shipboard measurements indicating that E. huxleyi is by far the dominant phytoplankton species under these conditions.[11] This species has been an inspiration for James Lovelock's Gaia hypothesis which claims that living organisms collectively self-regulate biogeochemistry and climate at nonrandom metastable states.

## Abundance and distribution

Emiliania huxleyi is considered a ubiquitous species. It exhibits one of the largest temperature ranges (1-30 °C) of any coccolithophore species.[3] It has been observed under a range of nutrient levels from oligotrophic (subtropical gyres) to eutrophic waters (upwelling zones/ Norwegian fjords).[12][13][14] Its presence in plankton communities from the surface to 200m depth indicates a high tolerance for both fluctuating and low light conditions.[4][12][15] This extremely wide tolerance of environmental conditions is believed to be explained by the existence of a range of environmentally adapted ecotypes within the species.[6] As a result of these tolerances its distribution ranges from the sub-Arctic to the sub-Antarctic and from coastal to oceanic habitats.[3][16] Within this range it is present in nearly all euphotic zone water samples and accounts for 20-50% or more of the total coccolithophore community.[3][12][17][18] During massive blooms (which can cover over 100,000 square kilometers), E.
huxleyi cell concentrations can outnumber those of all other species in the region combined, accounting for 75% or more of the total number of photosynthetic plankton in the area.[11] E. huxleyi blooms regionally act as an important source of calcium carbonate and dimethyl sulfide, the massive production of which can have a significant impact not only on the properties of the surface mixed layer, but also on global climate.[19] The blooms can be identified through satellite imagery because of the large amount of light back-scattered from the water column, which provides a method to assess their biogeochemical importance on both basin and global scales. These blooms are prevalent in the Norwegian fjords, where satellites pick up "white waters", the high reflectance of the blooms. This is due to the mass of coccoliths reflecting the incoming sunlight back out of the water, allowing the extent of E. huxleyi blooms to be distinguished in fine detail. Extensive E. huxleyi blooms can have a visible impact on sea albedo. While multiple scattering can increase light path per unit depth, increasing absorption and solar heating of the water column, E. huxleyi has inspired proposals for geomimesis,[20] because micron-sized air bubbles are specular reflectors, and so in contrast to E. huxleyi, tend to lower the temperature of the upper water column. As with self-shading within water-whitening coccolithophore plankton blooms, this may reduce photosynthetic productivity by altering the geometry of the euphotic zone. Both experiments and modeling are needed to quantify the potential biological impact of such effects, and the corollary potential of reflective blooms of other organisms to increase or reduce evaporation and methane evolution by altering fresh water temperatures.

## Biogeochemical impacts

### Climate change

As with all phytoplankton, primary production of E. huxleyi through photosynthesis is a sink of carbon dioxide.
However, the production of coccoliths through calcification is a source of CO2. This means that coccolithophores, including E. huxleyi, have the potential to act as a net source of CO2 out of the ocean. Whether they are a net source or sink, and how they will react to ocean acidification, is not yet well understood.

### Ocean heat retention

Scattering stimulated by E. huxleyi blooms not only causes more heat and light to be pushed back up into the atmosphere than usual, but also causes more of the remaining heat to be trapped closer to the ocean surface. This is problematic because it is the surface water that exchanges heat with the atmosphere, and E. huxleyi blooms may tend to make the overall temperature of the water column dramatically cooler over longer time periods. However, the importance of this effect, whether positive or negative, is currently being researched and has not yet been established.

## Notes

1. ^ Okada, Hisatake (1973). "The distribution of oceanic coccolithophorids in the Pacific". Deep Sea Research and Oceanographic Abstracts. 20 (4): 355–374. Bibcode:1973DSRA...20..355O. doi:10.1016/0011-7471(73)90059-4.
2. ^ Charalampopoulou, Anastasia (2011) Coccolithophores in high latitude and Polar regions: Relationships between community composition, calcification and environmental factors. University of Southampton, School of Ocean and Earth Science, Doctoral Thesis, 139pp.
3. ^ a b c d McIntyre, Andrew (1967). "Modern coccolithophoridae of the atlantic ocean—I. Placoliths and cyrtoliths". Deep Sea Research and Oceanographic Abstracts. 14 (5): 561–597. Bibcode:1967DSRA...14..561M. doi:10.1016/0011-7471(67)90065-4.
4. ^ a b Boeckel, Babette; Baumann, Karl-Heinz (2008-05-01). "Vertical and lateral variations in coccolithophore community structure across the subtropical frontal zone in the South Atlantic Ocean". Marine Micropaleontology. 67 (3–4): 255–273. Bibcode:2008MarMP..67..255B. doi:10.1016/j.marmicro.2008.01.014.
5. ^ a b Thierstein, H.
R.; Geitzenauer, K. R.; Molfino, B.; Shackleton, N. J. (1977-07-01). "Global synchroneity of late Quaternary coccolith datum levels Validation by oxygen isotopes". Geology. 5 (7): 400–404. Bibcode:1977Geo.....5..400T. doi:10.1130/0091-7613(1977)5<400:gsolqc>2.0.co;2. ISSN 0091-7613. 6. ^ a b Paasche, E. (2001). "A review of the coccolithophorid Emiliania huxleyi (Prymnesiophyceae), with particular reference to growth, coccolith formation, and calcification-photosynthesis interactions". Phycologia. 40 (6): 503–529. doi:10.2216/i0031-8884-40-6-503.1. S2CID 84921998. 7. ^ Bijma, J.; et al. (2001). "Primary signal: Ecological and environmental factors—Report from Working Group 2" (PDF). Geochemistry, Geophysics, Geosystems. 2 (1): n/a. Bibcode:2001GGG.....2.1003B. doi:10.1029/2000gc000051. 8. ^ Findlay, C. S; Giraudeau, J (2000-12-01). "Extant calcareous nannoplankton in the Australian Sector of the Southern Ocean (austral summers 1994 and 1995)". Marine Micropaleontology. 40 (4): 417–439. Bibcode:2000MarMP..40..417F. doi:10.1016/S0377-8398(00)00046-3. 9. ^ Cook, S.S.; et al. (2011). "Photosynthetic pigment and genetic differences between two Southern Ocean morphotypes of Emiliania Huxleyi (Haptophyta)". Journal of Phycology. 47 (3): 615–626. doi:10.1111/j.1529-8817.2011.00992.x. PMID 27021991. S2CID 25399383. 10. ^ Hagino, K.; et al. (2011). "New evidence for morphological and genetic variation in the cosmopolitan coccolithophore Emiliania huxleyi (Prymnesiophyceae) from the cox1b-atp4 genes". Journal of Phycology. 47 (5): 1164–1176. doi:10.1111/j.1529-8817.2011.01053.x. PMID 27020197. S2CID 24499896. 11. ^ a b Holligan, P. M.; et al. (1993). "A biogeochemical study of the coccolithophore, Emiliania huxleyi, in the North Atlantic". Global Biogeochem. Cycles. 7 (4): 879–900. Bibcode:1993GBioC...7..879H. doi:10.1029/93GB01731. 12. ^ a b c Winter, A., Jordan, R.W. & Roth, P.H., 1994. Biogeography of living coccolithophores in ocean waters. In Coccolithophores. 
Cambridge, United Kingdom: Cambridge University Press, pp. 161–177. 13. ^ Hagino, Kyoko; Okada, Hisatake (2006-01-30). "Intra- and infra-specific morphological variation in selected coccolithophore species in the equatorial and subequatorial Pacific Ocean" (PDF). Marine Micropaleontology. 58 (3): 184–206. Bibcode:2006MarMP..58..184H. doi:10.1016/j.marmicro.2005.11.001. hdl:2115/5820. 14. ^ Henderiks, J; Winter, A; Elbrächter, M; Feistel, R; Plas, Av der; Nausch, G; Barlow, R (2012-02-23). "Environmental controls on Emiliania huxleyi morphotypes in the Benguela coastal upwelling system (SE Atlantic)". Marine Ecology Progress Series. 448: 51–66. Bibcode:2012MEPS..448...51H. doi:10.3354/meps09535. ISSN 0171-8630. 15. ^ Mohan, Rahul; Mergulhao, Lina P.; Guptha, M. V. S.; Rajakumar, A.; Thamban, M.; AnilKumar, N.; Sudhakar, M.; Ravindra, Rasik (2008-04-01). "Ecology of coccolithophores in the Indian sector of the Southern Ocean". Marine Micropaleontology. 67 (1–2): 30–45. Bibcode:2008MarMP..67...30M. doi:10.1016/j.marmicro.2007.08.005. 16. ^ Hasle, G.R., 1969. An analysis of the phytoplankton of the Pacific Southern Ocean: Abundance, composition, and distribution during the Brategg Expedition, 1947-1948, Universitetsforlaget. 17. ^ Beaufort, L.; Couapel, M.; Buchet, N.; Claustre, H.; Goyet, C. (2008-08-04). "Calcite production by coccolithophores in the south east Pacific Ocean". Biogeosciences. 5 (4): 1101–1117. Bibcode:2008BGeo....5.1101B. doi:10.5194/bg-5-1101-2008. ISSN 1726-4189. 18. ^ Poulton, A.J.; et al. (2010). "Coccolithophore dynamics in non-bloom conditions during late summer in the central Iceland Basin (July–August 2007)" (PDF). Limnology and Oceanography. 55 (4): 1601–1613. Bibcode:2010LimOc..55.1601P. doi:10.4319/lo.2010.55.4.1601. 19. ^ Westbroek, Peter (1993). "A model system approach to biological climate forcing. The example of Emiliania huxleyi". Global and Planetary Change. 8 (1–2): 27–46. Bibcode:1993GPC.....8...27W. 
doi:10.1016/0921-8181(93)90061-R. 20. ^ Seitz, R (2011). "Bright water: Hydrosols, water conservation, and climate change". Climatic Change. 105 (3–4): 365–381. arXiv:1010.5823. Bibcode:2011ClCh..105..365S. doi:10.1007/s10584-010-9965-8. S2CID 16243560.
https://ask.libreoffice.org/en/question/161474/writer-formulas-missing-in-output/
# Writer: Formulas missing in output

Hi, my current problem: in Build ID: 5.0.6.2-14.el7, I have a document with several math formulas. These look perfect in the editor, but are completely erased from any form of output (export as jpeg, pdf, png, print preview, print to file/pdf). Switching "Embed fonts in the document" on or off has no impact. I attach a small example document formula_missing_in_print_and_export.odt and the png formula_missing_in_print_and_export.png it was exported as. The pdf export is not permitted as an attachment here, but looks similar to the png. Would updating help, or is this another problem? If the latter, it would be very nice if LO threw an error message instead of silently dropping the formulae. ufo

Thank you for that advice, which indeed fixed the issue. However, I had not thought that I would need to explicitly activate printing for something which is obviously part of the document content. Now, I have some more formulas in the actual document: is there a way to activate printing for all of them, or do I have to set this for all the formulas individually? Is there a preset that makes (new) formulas print without explicit activation? ufo ( 2018-07-24 21:39:03 +0200 )

Newly inserted formulas should not need explicit activation with default settings. And they don't in my testing (at least with Version: 6.1.0.2, but I used it for a very long time and had no issue in previous versions): both using Insert→Object→Formula, and using the "FN"→F3 autotext. But if you have modified the Formula frame style to disable printing, then the formulas will indeed not print, which is a user error. Check your document or template, maybe. ( 2018-07-24 23:38:25 +0200 )

And yes, that's the style that had been modified (just checked in your test document), so changing it to print will make all objects using that style follow the change, unless they explicitly override the setting.
F11 → Frame Styles button → right-click Formula → Modify → Options tab. ( 2018-07-24 23:40:24 +0200 )

Thanks again. So yes, it seems to be my fault, although I can't remember switching off that default ... Learned something. If anyone dealing with the LO help is reading this: it would be nice to see a small hint that this "to print or not to print" option exists and where it can be set / overridden. I have looked and found nothing :-(. ( 2018-07-25 09:50:12 +0200 )

If anyone dealing with the LO help is reading this: ... Well - I also couldn't find a way to get to the relevant help page unless already there in the UI. So yes - your idea is valid, but... this site is not for bug reports/enhancement requests - so if you want something to happen, please file a relevant RFE. Helping with fixing that would be even greater! :-) ( 2018-07-25 11:03:12 +0200 )

## Stats

Asked: 2018-07-24 18:18:52 +0200 Seen: 120 times Last updated: Jul 24 '18
https://simple.wikipedia.org/wiki/Iwasawa_theory
# Iwasawa theory

In number theory, Iwasawa theory is a Galois module theory of ideal class groups, started by Kenkichi Iwasawa in the 1950s as part of the theory of cyclotomic fields. In the early 1970s, Barry Mazur considered generalizations of Iwasawa theory to abelian varieties. Later, in the early 1990s, Ralph Greenberg suggested an Iwasawa theory for motives.

## Formulation

The first thing Iwasawa noticed was that there are towers of fields in algebraic number theory having Galois group isomorphic to the additive group of p-adic integers. That group, usually written Γ in the theory and with multiplicative notation, can be found as a subgroup of Galois groups of infinite field extensions (which are by their nature profinite groups). The group ${\displaystyle \Gamma }$ is the inverse limit of the additive groups ${\displaystyle \mathbf {Z} /p^{n}\mathbf {Z} }$, where p is the fixed prime number and ${\displaystyle n=1,2,\dots }$. We can express this by Pontryagin duality in another way: Γ is dual to the discrete group of all ${\displaystyle p}$-power roots of unity in the complex numbers.

## Example

Let ${\displaystyle \zeta }$ be a primitive ${\displaystyle p}$-th root of unity and look at the following tower of number fields: ${\displaystyle K=\mathbf {Q} (\zeta )\subset K_{1}\subset K_{2}\subset \cdots \subset \mathbf {C} ,}$ where ${\displaystyle K_{n}}$ is the field generated by a primitive ${\displaystyle p^{n+1}}$-th root of unity. This tower of fields has a union ${\displaystyle L}$. Then the Galois group of ${\displaystyle L}$ over ${\displaystyle K}$ is isomorphic to ${\displaystyle \Gamma }$, because the Galois group of ${\displaystyle K_{n}}$ over ${\displaystyle K}$ is ${\displaystyle \mathbf {Z} /p^{n}\mathbf {Z} }$. In order to get an interesting Galois module here, Iwasawa took the ideal class group of ${\displaystyle K_{n}}$, and let ${\displaystyle I_{n}}$ be its ${\displaystyle p}$-torsion part.
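The claim that the Galois group of K_n over K is cyclic of order p^n can be checked concretely: Gal(Q(ζ_{p^{n+1}})/Q) is the unit group (Z/p^{n+1}Z)^*, and the subgroup fixing ζ_p consists of the units congruent to 1 mod p, which for odd p is cyclic of order p^n generated by 1 + p. The sketch below is an illustration of this standard fact, not part of the original article:

```python
# Sketch: for K = Q(zeta_p) and K_n = Q(zeta_{p^{n+1}}), p an odd prime,
# Gal(Q(zeta_{p^{n+1}})/Q) = (Z/p^{n+1}Z)^*, and the subgroup fixing zeta_p
# is the set of units congruent to 1 mod p: a cyclic group of order p^n
# generated by 1 + p.  So Gal(K_n/K) = Z/p^nZ, as stated above.

def principal_units(p, n):
    """Units of Z/p^{n+1}Z that are congruent to 1 mod p."""
    m = p ** (n + 1)
    return {a for a in range(1, m) if a % p == 1}

def cyclic_subgroup(g, m):
    """Subgroup of (Z/mZ)^* generated by g."""
    elems, x = set(), 1
    while True:
        x = x * g % m
        if x in elems:
            return elems
        elems.add(x)

p, n = 3, 2
G = principal_units(p, n)                           # order p^n = 9
assert len(G) == p ** n
assert cyclic_subgroup(1 + p, p ** (n + 1)) == G    # 1 + p generates G
```

Varying p and n shows the same cyclic structure at every level of the tower, which is what allows the groups to fit together into the inverse limit Γ.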
There are norm mappings ${\displaystyle I_{m}\rightarrow I_{n}}$ when ${\displaystyle m>n}$, and so an inverse system. Letting ${\displaystyle I}$ be the inverse limit, we can say that ${\displaystyle \Gamma }$ acts on ${\displaystyle I}$, and it is good to have a description of this action. The motivation here was undoubtedly that the ${\displaystyle p}$-torsion in the ideal class group of ${\displaystyle K}$ had already been identified by Kummer as the main obstacle to the direct proof of Fermat's last theorem. What Iwasawa did that was new was to go 'off to infinity' in a new direction. In fact, ${\displaystyle I}$ is a module over the group ring ${\displaystyle \mathbf {Z} _{p}[[\Gamma ]]}$. This is a well-behaved ring (regular and two-dimensional), meaning that it is quite possible to classify modules over it, in a way that is not too coarse.

## History

From this beginning in the 1950s, a good-sized theory has been built up. A basic connection was noticed between the module theory and the p-adic L-functions that were defined in the 1960s by Kubota and Leopoldt. The latter begin from the Bernoulli numbers and use interpolation to define p-adic analogues of the Dirichlet L-functions. It became clear that the theory might be able to move ahead finally from Kummer's century-old results on regular primes. The main conjecture of Iwasawa theory was formulated as an assertion that two ways of defining p-adic L-functions (by module theory, by interpolation) should coincide, as far as that was well-defined. This was eventually proved by Barry Mazur and Andrew Wiles for Q, and for all totally real number fields by Andrew Wiles. These proofs were modeled upon Ken Ribet's proof of the converse to Herbrand's theorem (the so-called Herbrand-Ribet theorem). More recently, also modeled upon Ribet's method, Chris Skinner and Eric Urban have announced a proof of a main conjecture for GL(2).
An easier proof of the Mazur-Wiles theorem can be found by using Euler systems as developed by Kolyvagin (see Washington's book). Other generalizations of the main conjecture proved using the Euler system method have been found by Karl Rubin, amongst others.

## References

• Greenberg, Ralph, Iwasawa Theory - Past & Present, Advanced Studies in Pure Math. 30 (2001), 335-385. Available at [1].
• Coates, J. and Sujatha, R., Cyclotomic Fields and Zeta Values, Springer-Verlag, 2006
• Lang, S., Cyclotomic Fields, Springer-Verlag, 1978
• Washington, L., Introduction to Cyclotomic Fields, 2nd edition, Springer-Verlag, 1997
• Barry Mazur and Andrew Wiles (1984). "Class Fields of Abelian Extensions of Q". Inventiones Mathematicae 76 (2): 179-330.
• Andrew Wiles (1990). "The Iwasawa Conjecture for Totally Real Fields". Annals of Mathematics 131 (3): 493-540.
• Chris Skinner and Eric Urban (2002). "Sur les deformations p-adiques des formes de Saito-Kurokawa". C. R. Math. Acad. Sci. Paris 335 (7): 581-586.
https://en.m.wikipedia.org/wiki/Chemical_energy
# Chemical energy

Chemical energy is the energy of chemical substances that is released when they undergo a chemical reaction and transform into other substances. Some examples of storage media of chemical energy include batteries, food, and gasoline. Breaking and making chemical bonds involves energy, which may be either absorbed by or evolved from a chemical system. Energy that can be released or absorbed because of a reaction between chemical substances is equal to the difference between the energy content of the products and the reactants, if the initial and final temperature is the same. This change in energy can be estimated from the bond energies of the reactants and products. It can also be calculated from ${\displaystyle \Delta {U_{f}^{\circ }}_{\mathrm {reactants} }}$, the internal energy of formation of the reactant molecules, and ${\displaystyle \Delta {U_{f}^{\circ }}_{\mathrm {products} }}$, the internal energy of formation of the product molecules. The internal energy change of a chemical process is equal to the heat exchanged if it is measured under conditions of constant volume and equal initial and final temperature, as in a closed container such as a bomb calorimeter. However, under conditions of constant pressure, as in reactions in vessels open to the atmosphere, the measured heat change is not always equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is equal to the enthalpy change, in this case the enthalpy of reaction, if initial and final temperatures are equal). A related term is the heat of combustion, which is the energy released due to a combustion reaction and often applied in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized to carbon dioxide and water, the energy released is analogous to the heat of combustion (though assessed differently than for a hydrocarbon fuel; see food energy).
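The bond-energy estimate described above can be made concrete. The sketch below applies it to methane combustion using rough textbook mean bond enthalpies (the values are not from this article and vary by source), so the result is only the bond-counting approximation the text describes:

```python
# Estimate the energy change of CH4 + 2 O2 -> CO2 + 2 H2O from mean bond
# enthalpies.  The values below are rough textbook averages in kJ/mol
# (they vary by source), so the result is an estimate, not a measurement.
BOND_ENTHALPY = {"C-H": 413, "O=O": 498, "C=O": 799, "O-H": 467}

def reaction_energy(bonds_broken, bonds_formed):
    """Energy absorbed breaking bonds minus energy released forming them.
    A negative result means net energy released (exothermic)."""
    absorbed = sum(n * BOND_ENTHALPY[b] for b, n in bonds_broken.items())
    released = sum(n * BOND_ENTHALPY[b] for b, n in bonds_formed.items())
    return absorbed - released

# Break 4 C-H and 2 O=O; form 2 C=O (in CO2) and 4 O-H (in 2 H2O).
dH = reaction_energy({"C-H": 4, "O=O": 2}, {"C=O": 2, "O-H": 4})
print(dH)  # -818 (kJ/mol): exothermic, near the measured ~ -890 kJ/mol
```

The gap between the estimate and the measured enthalpy of combustion reflects exactly the caveat in the text: mean bond enthalpies are averages over many molecules, so this method only estimates the true reaction energy.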
Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds. Chemical energy of chemical substances can be transformed to other forms of energy by a chemical reaction. For example, when a fuel is burned, the chemical energy is converted to heat. Green plants transform solar energy to chemical energy through the process of photosynthesis, and electrical energy can be converted to chemical energy and vice versa through electrochemical reactions. The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc. It is not a form of potential energy itself, but is more closely related to free energy. The confusion in terminology arises from the fact that in other areas of physics not dominated by entropy, all potential energy is available to do useful work and drives the system to spontaneously undergo changes of configuration, and thus there is no distinction between "free" and "non-free" potential energy (hence the one word "potential"). However, in systems of large entropy such as chemical systems, the total amount of energy present (and conserved according to the first law of thermodynamics) of which this chemical potential energy is a part, is separated from the amount of that energy — thermodynamic free energy (from which chemical potential is derived) — which (appears to) drive the system forward spontaneously as the global entropy increases (in accordance with the second law).
http://mathoverflow.net/tags/motivation/hot
# Tag Info 166 I would have preferred not to comment seriously on Mochizuki's work before much more thought had gone into the very basics, but judging from the internet activity, there appears to be much interest in this subject, especially from young people. It would obviously be very nice if they were to engage with this circle of ideas, regardless of the eventual ... 136 I'll take a stab at answering this controversial question in a way that might satisfy the OP and benefit the mathematical community. I also want to give some opinions that contrast with or at least complement grp. Like others, I must give the caveats: I do not understand Mochizuki's claimed proof, his other work, and I make no claims about the veracity of ... 71 Last revision: 10/20. (Probably the last for at least some time to come: until Mochizuki uploads his revisions of IUTT-III and IUTT-IV. My apology for the multiple revisions. ) Completely rewritten. (9/26) It seems indeed that nothing like Theorem 1.10 from Mochizuki's IUTT-IV could hold. Here is an infinite set of counterexamples, assuming for ... 46 Algebraic K-theory originated in classical materials that connected class groups, unit groups and determinants, Brauer groups, and related things for rings of integers, fields, etc, and includes a lot of local-to-global principles. But that's the original motivation and not the way the work in the field is currently going - from your question it seems like ... 42 [The answer below is a response to an earlier version of the question that was rather different in certain respects. Minhyong Kim's answer gives excellent insight into ideas that Mochizuki had back in 2000 and that provide essential building blocks for the more recent work. But I still believe that it is too premature for a non-expert to seek insight into ... 33 Let me also try to give, in a modest complement to Minhyong Kim's great post, some additional remarks on Mochizuki's strategy. 
The idea that has led to the development of "Inter-universal Teichmuller theory for number fields" is certainly very beautiful, and was known to Mochizuki, along with the nature of the final estimate, already in 2000. (But let us ... 33 Let's say you have a resolution $0\to A\to J^0\to J^1\to\dots$ (of a module, a sheaf, etc.) If $J^n$ are acyclic (meaning, have trivial higher cohomology, resp. derived functors $R^nF$), you can use this resolution to compute the cohomologies of $A$ (resp. $R^nF(A)$). If $J^n$ are not acyclic, you get a spectral sequence instead, and that's the best you can ... 30 I want to point out a bibliographical information that perhaps is not very well-known and can be taken as "evidence" for the possibility of applying anabelian geometry to the ABC conjecture successfully. However, I am not claiming that this is related in any sort of way to Mochizuki's work. Here is the fact: There is a $\pi_1$ proof of the function field ... 29 Qiaochu links to a really nice article by Timothy Chow that says a lot about the mechanics of how to go from a filtered complex to its spectral sequence. Two questions that remain are, (1) why do filtered complexes show up so much, and (2) is there anything that you could do with a filtered complex other than compute its spectral sequence? VA gives a very ... 29 I'm going to interpret your question in the language of Gowers's "two cultures" essay as follows: How does one get good at theory-building? The process of developing a good theory can seem deceptively simple. One takes some definitions, perhaps by generalizing some known definitions, and deduces simple consequences of them. In comparison with the ... 
22 My personal story with this question is that, sometime in 2007, I wanted to find a project for a student I was mentoring at RSI (a program for high school students which produces real research) and thought some variant of the question "how can you visualize all the different geometric structures on a topological torus (elliptic curve/$\mathbb{C}$)?" would be ... 22 A combinatorial motivation is the n! conjecture, whose proof by Haiman uses Hilbert schemes. An account of this work written by Haiman for the Current Developments in Mathematics conference in 2002 is at math.berkeley.edu/~mhaiman/ftp/cdm/cdm.pdf. Haiman emphasizes at the start of the paper that the main geometric results which had to be proved were ... 20 It may be helpful to say how I got into groupoids. In the 1960s, I was writing a topology text and wanted to do the fundamental group of a cell complex, which required the van Kampen Theorem (I have now been persuaded to call this the Seifert-van Kampen theorem, as on wikipedia, so I call it SvKT). I was kind of irritated that this did not as then ... 19 Charles, a couple of reasons why a complex algebraic geometer (certainly someone who is interested in moduli spaces of vector bundles, as your profile tells me) might at least keep an open verdict on the stuff NC-algebraic geometers (NCAGers from now on) are trying to do. in recent years ,a lot of progress has been made towards understanding moduli ... 19 I think a key point is that algebraic K-theory is defined not only for rings, but also for schemes (and other kinds of "generalized spaces" in algebraic geometry). If you believe that generalized (Eilenberg-Steenrod) cohomology theories are useful/interesting in algebraic topology, then it is also reasonable to think that they might be interesting in ... 18 Positivity of Kazhdan--Lusztig polynomials (and all the other positivity results in Kazhdan--Lusztig theory in general). Consider the Hecke algebra $H_n(q)$. 
It is a particular deformation of the group algebra of the symmetric group (or some other Coxeter group). As such, it has a basis $T_w$ indexed by permutations, and multiplication is given by ... 17 In addition to being a nice example for abelian, $A_{\infty}$ and Calabi-Yau categories, and being a prototypical example for Generalized Donaldson - Thomas Invariants and the Wall Crossing Phenomenon, the quivers have a lot of applications in variours different fields. Since the question is applications in addition to representation theory, I'm listing a ... 17 Let me first try to answer a simpler question: Why are long exact sequences so ubiquitous? Almost anything that is written as a capital letter, followed by a subscript i or superscript -i, i an integer, and finally some stuff in parentheses, can be interpreted as πi of some spectrum (or sometimes space, as in nonabelian group cohomology, or maybe a ... 17 Alexander realized they were useful, then Conway. However, Jones clearly was the one who really made a big bang with a skein relation. This allowed him to see a connection between the Jones polynomial and state sums in statistical mechanics. This was followed by HOMFLYPT, which might be the first time a skein relation was used to define an invariant rather ... 16 "How much would you subscribe to the statement that studying questions one finds interesting is something established mathematicians do, while younger ones are better off studying questions that the rest of the community finds interesting?" Not at all. I don't think anyone, young or old, will find success by working on questions other than those they find ... 16 Dear Alex, It seems to me that the general question in the background of your query on algebra really is the better one to focus on, in that we can forget about irrelevant details. That is, as you've mentioned, one could be asking the question about motivation and decision in any kind of mathematics, or maybe even life in general. 
In that form, I can't see ... 16 For a striking example of a classical result in algebraic geometry given a tropical proof, see A tropical proof of the Brill-Noether Theorem by Cools, Draisma, Payne, and Robeva. The original proof of this theorem (by Griffiths and Harris) involves subtle transversality arguments, which they are able to circumvent in this "combinatorial" proof. The new ... 15 I think it's important to take a historical perspective. There was a time not so long ago when computers as we know them now did not exist. At that stage, coming up with a precise definition of an algorithm or of a Turing machine was a major advance, allowing one to build the earliest modern computers and begin the revolution that we take for granted ... 15 A student of mine asking for a motivation unmotivated by applications? Haven't I taught you anything, Mike? (Joking of course.) However, perhaps one way to avoid talking about future applications is to reflect on implicit past applications and the explicitly stated original motivations. This is not to take away from your answer, Craig, you know I agree ... 14 As Pace Nielsen already posted, the strength of quiver theory is to provide easy examples and counterexamples. The first applications are of course inside representation theory and ring theory, because Gabriel's Theorem states, that if you have a property of a finite dimensional algebra over an algebraically closed field that can be detected in the module ... 13 If you want to understand some collection of objects, naturally you should also want to understand all maps between them. Since factors have no two-sided ideals, every map between factors is an inclusion. So to understand maps between factors is the same thing as understand subfactors. (This is a way in which factors are noncommutative analogues of fields, ... 13 NEW !! 
(2013-02-21) A Panoramic Overview of Inter-universal Teichmüller Theory By Shinichi Mochizuki http://www.kurims.kyoto-u.ac.jp/~motizuki/Panoramic%20Overview%20of%20Inter-universal%20Teichmuller%20Theory.pdf 13 I suggest looking at the introduction to Waldhausen's original paper on algebraic K-theory (Algebraic K-theory of generalized free products, Part I, Ann. Math., 108 (1978) 135-204). Waldhausen started out as a 3-manifold theorist, and he realized that certain phenomena in the topology of 3-manifolds would be explained if the Whitehead groups of classical ... 12 First, recall the slogan: Small constructions are good for making calculations, but large constructions are good for proving theorems. K-theory is certainly a large construction. In general, K-theory seems to turn up in topology when the following slogan holds: Chain complex good; homology bad. You can often construct exactly the same invariant using ... 12 In general I don't think there's anything easy about nearby and vanishing cycles. However, I tend to find it enlightening to just consider their topology. Namely, if $f:X \to C$ is a function on a complex algebraic (or analytic) variety, then the stalk cohomology of the nearby cycles functor applied to some complex of sheaves $F$ at a point $x \in f^{-1}(0)$ is ...
https://eprint.iacr.org/2020/1410
### Mac'n'Cheese: Zero-Knowledge Proofs for Boolean and Arithmetic Circuits with Nested Disjunctions

Carsten Baum, Alex J. Malozemoff, Marc B. Rosen, and Peter Scholl

##### Abstract

Zero-knowledge proofs are an important building block in many cryptographic applications. Unfortunately, when the proof statements become very large, existing zero-knowledge proof systems easily reach their limits: either the computational overhead, the memory footprint, or the required bandwidth exceeds levels that would be tolerable in practice. We present an interactive zero-knowledge proof system for boolean and arithmetic circuits, called Mac'n'Cheese, with a focus on supporting large circuits. Our work follows the commit-and-prove paradigm, instantiated using information-theoretic MACs based on vector oblivious linear evaluation to achieve high efficiency. We additionally show how to optimize disjunctions, with a general OR transformation for proving the disjunction of $m$ statements that has communication complexity proportional to the longest statement (plus an additive term logarithmic in $m$). These disjunctions can further be nested, allowing efficient proofs about complex statements with many levels of disjunctions. We also show how to make Mac'n'Cheese non-interactive (after a preprocessing phase) using the Fiat-Shamir transform, with only a small degradation in soundness. We have implemented the online phase of Mac'n'Cheese and achieve a runtime of 144 ns per AND gate and 1.5 $\mu$s per multiplication gate in $\mathbb{F}_{2^{61}-1}$ when run over a network with 95 ms latency and 31.5 Mbps bandwidth. In addition, we show that the disjunction optimization improves communication as expected: when proving a boolean circuit with eight branches, each branch containing roughly 1 billion multiplications, Mac'n'Cheese requires only 75 more bytes of communication than in the single-branch case.
Category: Cryptographic protocols
Publication info: A minor revision of an IACR publication in CRYPTO 2021
Keywords: zero knowledge
Contact author(s): cbaum @ cs au dk, amaloz @ galois com, marc @ galois com, peter scholl @ cs au dk
History: 2021-07-26: revised
Short URL: https://ia.cr/2020/1410
License: CC BY

BibTeX:

@misc{cryptoeprint:2020/1410,
  author = {Carsten Baum and Alex J. Malozemoff and Marc B. Rosen and Peter Scholl},
  title = {Mac'n'Cheese: Zero-Knowledge Proofs for Boolean and Arithmetic Circuits with Nested Disjunctions},
  howpublished = {Cryptology ePrint Archive, Paper 2020/1410},
  year = {2020},
  note = {\url{https://eprint.iacr.org/2020/1410}},
  url = {https://eprint.iacr.org/2020/1410}
}
http://user.math.kyushu-u.ac.jp/index.php?ochiai%2Fpv_abstract
ochiai/pv_abstract

Hideyuki Ishi (Nagoya): Analysis on regular convex cones associated to decomposable graphs

Regular convex cones of positive definite real symmetric matrices with prescribed zero entries have been studied intensively in multivariate statistics. It turned out that analysis on the cone is quite feasible if the zero pattern corresponds to a decomposable graph. Indeed, an explicit formula is known for the Fourier-Laplace transform of a product of powers of minors over the cone. Inspired by these statistical works, we develop analysis on the cone in a way similar to the theory of homogeneous cones. In particular, we consider Riesz distributions on the cone and associated b-functions.

Yumiko Hironaka (Waseda): Spherical functions on certain $p$-adic homogeneous spaces, and some relation to PV-theory

First I want to introduce a typical spherical function on a certain homogeneous space $X$, and give its expression formula by using functional equations of spherical functions and data of the group. In this talk everything is assumed to be defined over a $\mathfrak{p}$-adic field $k$. Typical spherical functions are obtained by Poisson transform from a relative $P$-invariant on $X$, where $P$ is a minimal parabolic that has a Zariski open orbit in $X$ over $\overline{k}$, and their functional equations are often reduced to those for a certain limited type of prehomogeneous vector spaces. Then I want to discuss some spaces of sesquilinear forms, and give explicit formulas of spherical functions by using specialized Hall-Littlewood polynomials associated to the root system, a parametrization of all the spherical functions, and a Plancherel formula. (The latter half is joint work with Y. Komori.)
Koichi Takase (Miyagi): On spherical functions of supercuspidal representations of $GL_n(F)$ and prehomogeneous vector spaces over finite fields

We will consider the spherical function of a square-integrable irreducible representation of $G$ with respect to a $K$-type for a compact subgroup $K$ of $G$, and want to determine the non-zero set of the Fourier transform of the spherical function. In the case of the holomorphic discrete series, there exists a close relationship between the non-zero set and a prehomogeneous vector space of parabolic type associated with a boundary component of $G/K$. We can consider a parallel problem for a $p$-adic reductive group. In this talk, I will report on the result of an experiment for the simplest case of an irreducible level-zero supercuspidal representation of $GL_n(F)$.

Jean-Louis Clerc (Lorraine): Conformally invariant trilinear forms on the sphere

Given three scalar principal series representations of the conformal group of the sphere, I will first recall the construction of the (essentially unique) invariant trilinear form in the generic case. I will then discuss the singular cases.

Salah Mehdi (Lorraine): Representation theoretic differential operators

We will discuss several results on representations of Lie groups related to invariant differential operators on homogeneous spaces, with an emphasis on Dirac operators. If time allows, we will also present some connections between Dirac operators and coherent families of modules.

Pascale Harinck (CNRS): Fourier transform of the Schwartz space of a $p$-adic reductive symmetric space

Let $X = H\backslash G$ be a $p$-adic reductive symmetric space over a non-archimedean local field $\mathbb{F}$ of characteristic different from $2$.
An explicit Plancherel formula for $L^2(X)$ (spectral decomposition) was recently described when $G$ is split and $\mathbb{F}$ is of characteristic zero by Y. Sakellaridis and A. Venkatesh, and for general $G$ and $\mathbb{F}$ of characteristic different from $2$ by P. Delorme. In this talk, I will explain a joint work with Y. Sakellaridis and P. Delorme in which we describe the Fourier transform on the Harish-Chandra Schwartz space of $X$. We obtain a spectral decomposition of this space. Our proof uses the strong version of the Plancherel formula and properties of Eisenstein integrals and their weak constant term.

Robert J. Stanton (Ohio): Extensions on real bounded symmetric domains

The real bounded symmetric domains were classified by H. Jaffee as the fixed point sets of anti-holomorphic involutions of bounded symmetric domains in $\mathbb{C}^n$ (there are now several alternative descriptions). B. Krötz and I showed how the harmonic analysis on the real domain has a holomorphic continuation, and determined the extent of this. In this talk we will revisit this setting from the point of view of split complex structures. We will show the existence of a split holomorphic domain containing the real domain, and we will show the existence of split holomorphic extensions of the harmonic analysis to this domain. This is joint work with G. Ólafsson.

Marcus J. Slupinski (Strasbourg): Symplectic geometry of spinors in $12$ dimensions

The spinor representations of the double cover of the orthogonal group in twelve dimensions are regular prehomogeneous vector spaces, and in 1970 J.-I. Igusa gave a normal form for each orbit and determined the corresponding isotropy groups. These representations each carry an invariant symplectic form, and their direct sum is a Clifford module.
In this talk we give a new classification of the orbits and describe the geometry of each orbit (over a field of characteristic not two or three) in terms of symplectic covariants and properties with respect to Clifford multiplication. This is joint work with R. J. Stanton.

Fumihiro Sato (Rikkyo): Automorphic pairs of distributions on prehomogeneous vector spaces and zeta functions

Let $(G,\rho,V)$ be a regular prehomogeneous vector space defined over $\mathbb{Q}$ and $(G,\rho^*,V^*)$ its dual. Denote by $\Omega$ and $\Omega^*$ the open orbits of $(G,\rho,V)$ and $(G,\rho^*,V^*)$, respectively. A pair of periodic distributions $T$ on $V_{\mathbb{R}}$ and $T^*$ on $V^*_{\mathbb{R}}$ is called automorphic if $T$ and $T^*$ satisfy $T(f) = T^*(f_\infty)$ for any $f \in C^\infty_0(\Omega_{\mathbb{R}})$, where $f_\infty$ is defined by $f_\infty(\mathrm{grad}\,\log P(v)) := f(v)$ for a fixed nondegenerate relative invariant $P$. For an automorphic pair $(T,T^*)$ on a PV of commutative parabolic type, Dirichlet series with functional equations can be associated. As an application, functional equations of zeta functions of certain (non-prehomogeneous) forms of degree 4 will be proved. The simplest cases, $G = GL(1)$ and $\dim V = 1$, will be discussed in some detail. (This is a joint work with K. Tamura, K. Sugiyama, T. Miyazaki and T. Ueno.)

Kyo Nishiyama (AGU): Robinson-Schensted-type correspondence over a mirabolic double flag variety

We consider the conormal variety (or the Steinberg variety) over a mirabolic double flag variety for a symmetric pair $(G, K)$. If the double flag variety has finitely many $K$-orbits, the irreducible components of the conormal variety $Y$ encode the parametrization of orbits. On the other hand, if we consider the image of a moment map called an exotic nilpotent cone, nilpotent $K$-orbits together with their fibers (an analogue of the Springer fiber) also classify the irreducible components of $Y$.
Thus we get a correspondence between the geometric parametrization of orbits and the parametrization given by nilpotent orbits and their fibers. This is what we call a "Robinson-Schensted-type correspondence". It turns out this whole picture is strongly related to the exotic (or enhanced) nilpotent cone, which has been studied by many people including Travkin, Syu Kato, Achar-Henderson, Henderson-Trapa and Shoji-Sorlin, among others. We will discuss the RS-type correspondence as well as the structure of the exotic nilpotent cone for the symmetric pair of type AIII. This is an ongoing joint work with Lucas Fresse.

Takashi Taniguchi (Kobe): Second order terms in some arithmetic functions

Using the zeta functions of PV's (prehomogeneous vector spaces), Shintani proved the second order terms in the counting functions for the weighted class numbers of binary quadratic forms and for the class numbers of binary cubic forms. We discuss some further places where we can derive the second order terms, using the related PV zeta functions.

Kohji Matsumoto (Nagoya): Zeta-functions of root systems and Poincaré polynomials

A useful way of evaluating special values of zeta-functions of root systems is to consider certain linear combinations of those zeta-functions, and to express such combinations in terms of Bernoulli polynomials. By this method we can show explicit evaluation formulas for even integer points. However, if we consider odd integer points, some "signature part" appears, and we have to determine when this part does not vanish. In this talk we express this part in terms of Poincaré polynomials of Weyl groups, discuss when it does not vanish, and show examples of explicit formulas for odd integer values.

Toshiyuki Kobayashi (Tokyo): Symmetry breaking operators for rank one orthogonal groups

I give a classification of all symmetry breaking operators that intertwine two spherical principal series representations of the two groups $O(n+1,1)$ and $O(n,1)$. This is a joint work with B. Speh.
Akihiko Yukie (Kyoto): On orbits of prehomogeneous vector spaces

We consider orbits of prehomogeneous vector spaces in various situations. We first consider prehomogeneous vector spaces where the group is not necessarily split over a perfect field, and show that the set of unstable points can be stratified using the convexity of GIT. Then we consider the question of orbits of prehomogeneous vector spaces over the $p$-adic integer ring, and show that in some cases the orbits can be classified.

Tamotsu Ikeda (Kyoto): PV and Siegel series

We review the theory of Siegel series, and show that the functional equation of the Siegel series can be obtained from the local functional equation of a PV.

Sofiane Souaifi (Strasbourg): Paley-Wiener theorem(s) for real reductive Lie groups

In the early '80s, J. Arthur proved the Paley-Wiener theorem for real reductive Lie groups. To describe the Fourier transform of the space of compactly supported smooth functions, he uses the so-called Arthur-Campoli relations. More recently, P. Delorme, using other techniques, gave another proof of the Paley-Wiener theorem. His description of the Paley-Wiener space is now in terms of intertwining conditions. In a joint work with E. P. van den Ban, we make a detailed comparison between the two spaces, without using the proof or the validity of either of the associated Paley-Wiener theorems. This is done by use of the Hecke algebra, our techniques involving derivatives of holomorphic families of continuous representations and Harish-Chandra modules.

Gautam Chinta: Whittaker functions and Shintani zeta functions

I will discuss some examples of coincidences between Shintani zeta functions and Whittaker functions of Eisenstein series on metaplectic double covers of linear groups. I will also describe some applications to number theory and suggest prospects for further study.

Last-modified: 2014-08-29 16:36:02
http://mathhelpforum.com/pre-calculus/123145-solving-equations-matrix-help.html
# Thread: Solving equations - Matrix Help!

1. ## Solving equations - Matrix Help!

I just started learning about matrices in class, so this question is supposed to be fairly simple, but I've never been any good at math, so I'm having a bit of trouble with it.

Mark had $24,500 to invest. He divided the money into three different accounts. At the end of the year, he had made $1,300 in interest. The annual yield on each of the three accounts was 4%, 5.5%, and 6%. If the amount of money in the 4% account was four times the amount of money in the 5.5% account, how much had he placed in each account?

I started with

0.06  0.055  0.04 | 1300
1     1      1    | 24500
0     422    -104 | 0

then I stopped at

0  1  4   | 34000
1  0  -4  | -9500
0  0  792 | 7174000

I'm stuck!!

2. Originally Posted by Jiyongie (quoting the problem above)

It would make more sense if you would explain what you are doing and what equations/matrices you are using rather than just giving a list of numbers! Since he had three accounts, let the amount of money he invested in each account be A, B, and C. He had $24,500, so A + B + C = 24500. He made 4%, 5.5%, and 6%, receiving $1,300 in interest, so 0.04A + 0.055B + 0.06C = 1300. Finally, "if the amount of money in the 4% account was four times the amount of money in the 5.5% account", A = 4B. Your three equations for A, B, and C are A + B + C = 24500, 0.04A + 0.055B + 0.06C = 1300, and A - 4B = 0.
But what you have done is fine, and you are almost finished. You can simplify by dividing that third row by 792 to get

0  0  1 | 9058.08

Add four times that new third row to the second row to get rid of the "-4":

1  0  (-4) + 4(1) | -9500 + 4(9058.08)
1  0  0           | 26732.32

And subtract four times that new third row from the first row to get rid of the "4":

0  1  4 - 4(1) | 34000 - 4(9058.08)
0  1  0        | -2232.32

That comes up negative, but that is what I get too. It might not be possible to meet the conditions of the problem.
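The three equations set up in the reply form a small linear system, so the hand reduction can be sanity-checked with a numerical solver. A minimal sketch (assuming NumPy is available):

```python
import numpy as np

# A + B + C = 24500                  (total invested)
# 0.04A + 0.055B + 0.06C = 1300     (total interest)
# A - 4B = 0                         (4% account holds four times the 5.5% account)
M = np.array([
    [1.0,  1.0,   1.0],
    [0.04, 0.055, 0.06],
    [1.0, -4.0,   0.0],
])
rhs = np.array([24500.0, 1300.0, 0.0])

A, B, C = np.linalg.solve(M, rhs)
print(A, B, C)  # approximately 8000, 2000, 14500
```

The solver returns positive amounts for all three accounts (A = 8000, B = 2000, C = 14500, which indeed pay 320 + 110 + 870 = 1300 in interest), so the negative value in the row reduction above points to an arithmetic slip earlier in the elimination rather than an infeasible problem.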
https://cs.stackexchange.com/questions/101832/reducing-subset-sum-to-even-subset-sum
# Reducing subset sum to even subset sum

I'm trying to learn reduction. I have this problem called "even subset sum" that's very similar to subset sum. It's the same problem as subset sum, except that the only numbers allowed are even positive integers. I'm trying to show this problem is NP-hard by reducing subset sum to it, but I'm not sure where to start.

• Please state your problem, "even subset sum", in its full detail, assuming there is no prior knowledge of subset sum at all. – Apass.Jack Dec 20 '18 at 11:04
• Given a set S of even positive integers and an integer k, is there a subset of S whose sum equals k? – AphexTwin Dec 20 '18 at 11:49
• It should be noted that the sets do not contain duplicate integers in both problems. – AphexTwin Dec 20 '18 at 12:03

Hint: suppose that we are able to solve your problem efficiently. Then what if we take an arbitrary subset sum instance and multiply all elements by $$2$$, as well as the target?
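The hint can be made concrete: doubling every element and the target maps yes-instances to yes-instances and no-instances to no-instances, and every doubled element is an even positive integer. A small sketch (a brute-force decision oracle stands in for an actual even-subset-sum solver; names are illustrative, not from the question):

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Brute-force decision oracle: is there a subset of nums summing to target?"""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def reduce_to_even(nums, target):
    """Map a subset-sum instance (nums, target) to an even-subset-sum instance."""
    return [2 * x for x in nums], 2 * target

# A subset of S sums to k iff the corresponding doubled subset sums to 2k.
S, k = [3, 7, 1, 8], 11          # yes-instance: 3 + 8 = 11
S2, k2 = reduce_to_even(S, k)
assert all(x % 2 == 0 for x in S2)                 # instance is legal for "even subset sum"
assert subset_sum_bruteforce(S, k) == subset_sum_bruteforce(S2, k2)
```

The equivalence in both directions (divide any even-instance solution by 2 to recover a subset-sum solution) is what makes this a correct many-one reduction; the brute-force oracle is only there to check small instances.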
https://www.coursehero.com/file/p7dcl0v/Exercise-64102-Solve-find-the-impulse-response-x-0-ax-%CE%B4-t-x-0-0-x-0-0-0/
# Exercise 6.4.102: Solve (find the impulse response)

Exercise 6.4.102: Solve (find the impulse response) $x'' + ax = \delta(t)$, $x(0) = 0$, $x'(0) = 0$.

Exercise 6.4.103: Suppose that $Lx = \delta(t)$, $x(0) = 0$, $x'(0) = 0$, has the solution $x(t) = \cos(t)$ for $t > 0$. Find (in closed form) the solution to $Lx = \sin(t)$, $x(0) = 0$, $x'(0) = 0$ for $t > 0$.

Exercise 6.4.104: Compute $\mathcal{L}^{-1}\left\{ \frac{s^2}{s^2+1} \right\}$.

Exercise 6.4.105: Compute $\mathcal{L}^{-1}\left\{ \frac{3 s^2 e^{-s} + 2}{s^2} \right\}$.

Chapter 7: Power series methods

7.1 Power series

Note: 1 or 1.5 lecture, §8.1 in [EP], §5.1 in [BD]

Many functions can be written in terms of a power series
$$\sum_{k=0}^{\infty} a_k (x - x_0)^k.$$
If we assume that a solution of a differential equation is written as a power series, then perhaps we can use a method reminiscent of undetermined coefficients. That is, we will try to solve for the numbers $a_k$. Before we can carry out this process, let us review some results and concepts about power series.

7.1.1 Definition

As we said, a power series is an expression such as
$$\sum_{k=0}^{\infty} a_k (x - x_0)^k = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + a_3 (x - x_0)^3 + \cdots, \tag{7.1}$$
where $a_0, a_1, a_2, \ldots, a_k, \ldots$ and $x_0$ are constants. Let
$$S_n(x) = \sum_{k=0}^{n} a_k (x - x_0)^k = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + a_3 (x - x_0)^3 + \cdots + a_n (x - x_0)^n$$
denote the so-called partial sum. If for some $x$ the limit
$$\lim_{n \to \infty} S_n(x) = \lim_{n \to \infty} \sum_{k=0}^{n} a_k (x - x_0)^k$$
exists, then we say that the series (7.1) converges at $x$. Note that for $x = x_0$ the series always converges to $a_0$. When (7.1) converges at any other point $x \ne x_0$, we say that (7.1) is a convergent power series. In this case we write
$$\sum_{k=0}^{\infty} a_k (x - x_0)^k = \lim_{n \to \infty} \sum_{k=0}^{n} a_k (x - x_0)^k.$$
If the series does not converge for any point $x \ne x_0$, we say that the series is divergent.

Example 7.1.1: The series
$$\sum_{k=0}^{\infty} \frac{1}{k!} x^k = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$$
is convergent for any $x$. Recall that $k! = 1 \cdot 2 \cdot 3 \cdots k$ is the factorial. By convention we define $0! = 1$. In fact, you may recall that this series converges to $e^x$.

We say that (7.1) converges absolutely at $x$ whenever the limit
$$\lim_{n \to \infty} \sum_{k=0}^{n} |a_k| \, |x - x_0|^k$$
exists. That is, the series $\sum_{k=0}^{\infty} |a_k| \, |x - x_0|^k$ is convergent. If (7.1) converges absolutely at $x$, then it converges at $x$. However, the opposite implication is not true.

Example 7.1.2: The series
$$\sum_{k=1}^{\infty} \frac{1}{k} x^k$$
converges absolutely for all $x$ in the interval $(-1, 1)$. It converges at $x = -1$, as $\sum_{k=1}^{\infty} \frac{(-1)^k}{k}$ converges (conditionally) by the alternating series test. But the power series does not converge absolutely at $x = -1$, because $\sum_{k=1}^{\infty} \frac{1}{k}$ does not converge. The series diverges at $x = 1$.

7.1.2 Radius of convergence

If a power series converges absolutely at some $x_1$, then for all $x$ such that $|x - x_0| \le |x_1 - x_0|$ (that is, $x$ is closer than $x_1$ to $x_0$) we have $\left| a_k (x - x_0)^k \right| \le \left| a_k (x_1 - x_0)^k \right|$ for all $k$. As the numbers $\left| a_k (x_1 - x_0)^k \right|$ sum to some finite limit, summing the smaller positive numbers $\left| a_k (x - x_0)^k \right|$ must also yield a finite limit. Therefore, the series must converge absolutely at $x$. We have the following result.

Theorem 7.1.1. For a power series (7.1), there exists a number $\rho$ (we allow $\rho = \infty$) called the radius of convergence such that the series converges absolutely on the interval $(x_0 - \rho, x_0 + \rho)$ and diverges for $x < x_0 - \rho$ and $x > x_0 + \rho$. We write $\rho = \infty$ if the series converges for all $x$.

[Figure: the interval of convergence $(x_0 - \rho,\, x_0 + \rho)$; the series diverges outside it.]
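The claim in Example 7.1.1 is easy to check numerically: since the radius of convergence is infinite, partial sums of $\sum_{k \ge 0} x^k / k!$ approach $e^x$ for any $x$. A short sketch using only the standard library:

```python
import math

def partial_sum_exp(x, n):
    """S_n(x) = sum of x^k / k! for k = 0..n, the n-th partial sum of the series for e^x."""
    total, term = 0.0, 1.0          # term starts at x^0 / 0! = 1
    for k in range(n + 1):
        total += term
        term *= x / (k + 1)         # next term: x^(k+1) / (k+1)!
    return total

# Partial sums converge to e^x everywhere, including for negative x,
# where the series alternates in sign.
for x in (-3.0, 0.5, 2.0):
    assert abs(partial_sum_exp(x, 40) - math.exp(x)) < 1e-9
```

Running the term recurrence instead of recomputing $x^k/k!$ from scratch avoids overflow in the factorial and mirrors how one actually sums such a series.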
https://koreascience.or.kr/article/JAKO200501436739515.page
GENERALIZED DIFFERENCE METHODS FOR ONE-DIMENSIONAL VISCOELASTIC PROBLEMS

• Li, Huanrong (Department of Mathematics, Capital Normal University)
• Published: 2005.12.25

Abstract

In this paper, generalized difference methods (GDM) for one-dimensional viscoelastic problems are proposed and analyzed. The new initial values are given in the generalized difference scheme, so we obtain optimal error estimates in $L^p$ and $W^{1,p}$ ($2 \le p \le \infty$), as well as some superconvergence estimates in $W^{1,p}$ ($2 \le p \le \infty$), between the GDM solution and the generalized Ritz-Volterra projection of the exact solution.
https://www.studyadda.com/solved-papers/jee-main-advanced/physics/electrostatics-capacitance/jee-main-solved-paper-2016/52
# Solved papers for JEE Main & Advanced

## JEE Main Solved Paper-2016

Question 1. A combination of capacitors is set up as shown in the figure. The magnitude of the electric field, due to a point charge Q (having a charge equal to the sum of the charges on the $4\mu F$ and $9\mu F$ capacitors), at a point 30 m from it, would equal:

A) 480 N/C  B) 240 N/C  C) 360 N/C  D) 420 N/C

Question 2. The region between two concentric spheres of radii $a$ and $b$, respectively (see figure), has volume charge density $\rho = \frac{A}{r}$, where $A$ is a constant and $r$ is the distance from the centre. At the centre of the spheres is a point charge Q. The value of A such that the electric field in the region between the spheres will be constant is:

A) $\frac{2Q}{\pi a^2}$  B) $\frac{Q}{2\pi a^2}$  C) $\frac{Q}{2\pi (b^2 - a^2)}$  D) $\frac{2Q}{\pi (a^2 - b^2)}$
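For Question 2, Gauss's law gives the field between the spheres: with $\rho = A/r$, the charge between radius $a$ and $r$ is $\int_a^r \frac{A}{r'} \, 4\pi r'^2 \, dr' = 2\pi A (r^2 - a^2)$, so $E(r) \propto \frac{Q + 2\pi A (r^2 - a^2)}{r^2}$, which is constant exactly when $Q = 2\pi A a^2$, i.e. $A = \frac{Q}{2\pi a^2}$ (option B). A quick numerical check of this reasoning (a sketch with $Q = a = 1$ and the $1/4\pi\varepsilon_0$ prefactor dropped, since it does not affect constancy):

```python
import math

def field_profile(A, Q=1.0, a=1.0, rs=(1.2, 1.5, 2.0, 3.0)):
    """E(r) between the spheres, up to the 1/(4*pi*eps0) prefactor."""
    return [(Q + 2 * math.pi * A * (r**2 - a**2)) / r**2 for r in rs]

A_constant = 1.0 / (2 * math.pi)     # option B with Q = a = 1: A = Q / (2*pi*a^2)
values = field_profile(A_constant)
assert max(values) - min(values) < 1e-12   # field is (numerically) constant

# Any other A gives a genuinely r-dependent field:
other = field_profile(0.5)
assert max(other) - min(other) > 1e-3
```

Writing $E(r) = \frac{Q - 2\pi A a^2}{r^2} + 2\pi A$ (up to the prefactor) makes the conclusion transparent: the $1/r^2$ piece vanishes only for option B.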
http://physics.stackexchange.com/tags/estimation/hot
Tag Info

[+20] In the LHC, we are talking about mini black holes of mass around $10^{-24}kg$, so when you talk about $10^{15}-10^{20}kg$ you talk about something in the range from the mass of Deimos (the smallest moon of Mars) up to $1/100$ the mass of the Moon. So we are talking about something really big. The Schwarzschild radius of such a black hole (using the ...

[+17] This is a really rough calculation that doesn't take into account the realistic direction of the bow shock, or calculation of the drag force. I just take the net momentum flow in the solar wind and direct it so as to produce the maximum deceleration and see what happens. Apparently the solar wind pressure is of the order of a nanoPascal. As I write this ...

[+14] Although I don't know anything about this, using some rough estimates I think I can get the right order of magnitude: Volume of graphite in a pencil: a $10 cm$ cylinder $1 mm$ thick = $0.314 mm^3$ (error: ~factor 2). Maximum surface a pencil can write: $50 km$ $\times$ $1$ mm = $10 m^2$ (error: ~factor 5). Thickness of the graphite layer: Volume / Surf. ...

[+11] The horizontal component of running is believed to be fairly negligible for humans. Some research suggests that the limit isn't strength related at all, but design --- in particular, based solely on power, humans could theoretically run up to almost 40 mph. The issue is twofold: first, our limbs are actually too heavy, for big strength (e.g. climbing in ...

[+7] Dr. Phil Plait has written about this extensively. He has a book (Death from the Skies) with a chapter that deals with this. He has a blog entry about this very subject as well (in addition to a link to one just talking about getting hit by a meteorite). Here is an excerpt: what are the odds of getting killed by one? Turns out, they're a lot ...

[+7] Life on Earth is protected from the solar wind by the earth's magnetic field. Charged particles from the sun (mostly) penetrate the earth's atmosphere with great velocity. These particles can be trapped by a magnetic field to follow circular paths around the magnetic field lines, thereby losing their energy due to collisions or bremsstrahlung. From first ...

[+6] There are a few ways I might approach this experimentally: (1) Strip a pencil down to the cylindrical graphite core (or simply use a mechanical pencil), weigh this core to obtain a value for $m_{core}$, and then, counting as you go, draw fixed-length lines on paper using a straight-edge. After some time, weigh the remaining section of the core to determine ...

[+6] You should always find an answer that is a formula, and then only apply significant figures once you get to the one final step of substituting your numbers back into the problem in place of variables. Avoid multiple intermediate steps of substituting numbers at all costs. Not only will this save your pencil a lot of work, but it will also cause your ...

[+5] We run an experiment on my A Level Physics course to answer this question. Expose the graphite in the pencil you wish to use at either end. Measure the length of the graphite, its diameter (then calculate its cross-sectional area) and the electrical resistance along its length (either by direct measurement using a multimeter or by passing a current through ...

[+5] Here's my quantitative attempt at $4.$ and $1.$: The Coandă effect here is the tendency of the airflow to adhere to the surface of the ball. This means that near the surface of the ball, the streamlines are curved with a radius of curvature approximately equal to the radius of the ball $R$; this curvature results in a pressure gradient just as it does in ...

[+5] If the black hole simply swallowed matter, and didn't lose any energy, it probably isn't too hard a calculation: just assume the earth is unsupported mass that falls into the BH, which grows in mass as it adds more stuff. The problem is, we know this isn't how it would happen, and some significant fraction of swallowed mass will be released as energy, maybe one ...

[+5] Since I have a much better answer from Vagelford -- I'll write my own version. When matter falls on the black hole it gets fractioned and radiates. As far as I know (correct me if I'm wrong) one can estimate the radiated energy as $\simeq 0.05mc^2$, where $m$ is the mass of the falling matter. The Earth's matter is pulled by the black hole gravitation ...

[+5] Is it possible to estimate? Yes. I'll give it a quick try. But the details of whether the planet will be incinerated and so on will make the reality much more complicated. As a ballpark, I think supernovae release about $10^{53}$ erg of energy. Spread over a sphere of, say, 1 AU gives $3.55\times10^{22}$ J.m$^{-2}$. This energy isn't all released in one go ...

[+4] Diehl et al. (2006) used gamma ray observations to map $^{26}$Al in the galaxy. Because $^{26}$Al has a half-life long compared to the expected rate of supernovae, but not so long that we expect the SN rate in the galaxy to have changed dramatically over that time, it might be an indicator of the recent SN rate. Actually carrying through this calculation relies on ...

[+4] Let me give the naivest possible estimate, so that people have something to criticize. Assuming that most of the jet interacts with the ball and is deflected at a substantial angle, the force on the ball is roughly the momentum flow through the pen. In your units this is $\rho_{air} Q^2/(\pi d^2)$. Saying the force to levitate a ball is $1\times ...

[+4] I'll take a go at it - as with the piano tuners in Chicago, I take the approach as if I have "no facts to go on". Your head has a surface area of $4\pi r^2$; the fraction of it which is covered with hair is $\gamma$. The density of hairs per unit area is $\sigma$, and the number of hairs is then $N=4\pi r^2 \gamma \sigma$. Hairs per unit area is obviously ...

[+3] Supernovae can release several times 10^44 J of energy. This has resulted in the adoption of the foe (10^44 J) as the standard unit of energy in the study of supernovae: a unit occasionally used to measure the vast amounts of energy that a supernova produces ...

[+3] I'll take a slightly different approach to the others. I just got a close haircut (not for science, but why waste a good opportunity, right?) and managed to keep something like 90% of the hair. So I can use the fact that $N$ hairs of diameter $d$, length $\ell$ and density $\rho$ have a mass $$M = N \frac{\pi}{4} d^2 \ell \rho.$$ Accounting for the fact ...

[+3] I just went to a mirror to count the linear hair density of my head. I found that in about $1 cm$ there are $15$ hairs, thus the linear hair density is about $\lambda = 15$ hairs/cm. So the density of hair per unit area is $\sigma = \lambda^2 = 225$ hairs/${cm}^2$, and assume that this hair density is roughly constant. I found that it takes about 6 times the area of my ...

[+3] This question is different from, but related to, another question: How is it that the Earth's atmosphere is not "blown away"? In answering that question with respect to solar wind, I remarked that the orbital speed of Earth is 30 km/s while the speed of the solar wind varies between 300 km/s and 800 km/s in nearly orthogonal directions (fully ...

[+2] The chances are of the order of 1 in 10,000. You can derive this number by assuming the most probable impact as the chance order of magnitude (there's a 1 in 10,000 chance in 2019). All that you wanted to know about near Earth objects, dangerousness and probability of impact is here: http://neo.jpl.nasa.gov/risk/

[+2] I'm a first year physics student, so my answer might not be satisfactory - but I hope it will give some insight into the problem. 1) From what I know we need to consider: Drag, which I will address; Turbulence, which I know next to nothing about, and therefore I will ignore with the hope someone will be able to expand. We need the drag force to be equal to ...

[+2] Firstly, I assume that we have 300 hairs per square cm on our head. This can be tested by waxing an area of 1 cm^2 on your scalp and counting the number of hairs that are removed. Step 2, we must calculate the area of the scalp, and we assume this density applies to the whole area of the scalp. I assume my head is a sphere. I ...

[+1] The odds of a very-high-casualty rate impact in the next decade are even lower than previous answers have stated, since there are ongoing surveys looking for just such dangerous "near Earth objects" and "Earth approachers" and there are no known near-term threats: JPL's Current Impact Risks. New survey instruments are coming online now. For instance, the ...

[+1] Given your tag of 'estimation' I would just use $$a=\frac{v-v_0}{t}$$ plugging in some numbers for terminal velocity $v$ and a 'safe' parachute deployed velocity $v_0$. Then assume ~$1\rm{s}$ for $t$ and we have an $a$; those wiki links should confirm whether your numbers are good! For fun can I suggest calculating $a$ for $v=0$ and $t\approx0.5\rm{s}$ ;-)

[+1] To a good approximation the deceleration felt by the tourist will be the same as that felt by the parachutist. There may be some elasticity in the link joining the two, but I'd be surprised if this made much difference. As always (is there anything not on Wikipedia) there are a couple of useful articles on Wikipedia. ...

[+1] I have not done the math but would expect that the radiation from the asphalt as T^4 will favor larger gradients for higher temperatures. I have the impression that air goes something like T^6, so even the conduction energy transferred will have larger gradients the hotter it is. Your g is temperature dependent, I guess. Edit in response to edit of question. ...

[+1] I just want to add that oscillations about the center of the Earth are dampened due to the momentum of the entering mass. Figures for the volume of the mass continually eaten by the black hole differ by orders of magnitude going by previous posters. But the consumed material as it falls will depend on the cross section of this volume times the radius of the ...
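Several of the hair-count answers above reduce to the same formula, N = (head surface area) × (covered fraction γ) × (areal density σ). A throwaway script (Python; the numbers are the rough guesses quoted in the answers, not measurements) shows how the pieces combine:

```python
from math import pi

# Rough inputs taken from the answers above (all assumptions, not data):
sigma = 225      # hairs per cm^2 (15 hairs per linear cm, squared)
r = 9.0          # head "radius" in cm, treating the head as a sphere
gamma = 0.5      # fraction of the head's surface that is haired

area = 4 * pi * r**2            # full sphere surface, cm^2
n_hairs = area * gamma * sigma  # N = 4*pi*r^2 * gamma * sigma

# Lands on the order of 10^5, consistent with the commonly quoted ~100,000.
assert 1e5 < n_hairs < 1.5e5
```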
http://demon-software.com/public_html/support/htmlug/ug-node69.html
### Keyword ORBITALS

This keyword controls the atomic orbital choice.

Options: SPHERICAL / CARTESIAN

SPHERICAL Spherical atomic orbitals (l, m) are used. This is the default.

CARTESIAN Cartesian atomic orbitals (lx, ly, lz) are used.

Description: In deMon2k the spherical (see 4.4.7 for the definition of real spherical harmonic Gaussians) and Cartesian atomic orbitals have the general form (without normalization):

(16) spherical: r^l Y_lm(θ, φ) exp(−ζr²)

(17) Cartesian: x^lx y^ly z^lz exp(−ζr²)

Because spherical orbitals have no linear dependencies, they are the recommended choice for most applications. See 4.4.7 regarding CARTESIAN orbitals in the case of an atom.
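The linear-dependence remark is easy to quantify by counting functions per shell: the Cartesian monomials of total degree l number (l+1)(l+2)/2, while there are only 2l+1 real spherical harmonics, so from d shells onward the Cartesian basis carries extra combinations (e.g. x²+y²+z² times a Gaussian, which is s-like). A short counting sketch (Python; standard Gaussian-basis bookkeeping, not deMon2k-specific):

```python
def n_cartesian(l):
    # Number of monomials x^a * y^b * z^c with a + b + c == l.
    return (l + 1) * (l + 2) // 2

def n_spherical(l):
    # Number of real spherical harmonics Y_lm for fixed l.
    return 2 * l + 1

for l, shell in enumerate("spdfg"):
    print(shell, n_cartesian(l), n_spherical(l))

# s and p shells coincide (1 and 3 functions); from d onward the Cartesian
# set is larger: 6 vs 5 (d), 10 vs 7 (f), 15 vs 9 (g).
assert [n_cartesian(l) - n_spherical(l) for l in range(5)] == [0, 0, 1, 3, 6]
```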
https://3dprinting.stackexchange.com/questions/6013/openscad-library-for-empty-space-holes
# OpenSCAD library for empty space/holes

I am working on a design in OpenSCAD, and need (would want) two things that seem tedious to hand-code:

1. A pattern of holes in a wall, for less weight/material. Could be round holes or some geometric pattern.
2. Empty enclosed space, with some grid-structure for stability. Again, for saving weight/material (and time during the printing).

Are there any libraries for these things?

I'm not aware of libraries that do that for you (but you can create your own, see the end of the answer), but creating a relief hole pattern is not that difficult or tedious using iterator functions (i.e. the for loop). Note that it may not be a good idea to make enclosed holes inside your object, see the edit section at the bottom of the answer.

```
tol = 0.2;
difference() {
    cube([100, 100, 10]);
    for (x = [10:20:90]) {
        for (y = [10:20:90]) {
            translate([x, y, -tol/2]) {
                cylinder(r = 9, h = 10 + tol, $fn = 180);
            }
        }
    }
}
```

This can be used for both parts of your question, but in case the pattern needs to be inside an object you need to lower the value of h in the subtracting cylinder and raise/translate it (you could use center=true in the cylinder function as a parameter and raise the center to the middle of your object: translate([x, y, objectThickness/2])). You could make a module of the recurring pattern yourself to create your library.

Note: Replace cylinder with cube or any other geometrical solid or (2D) pattern (use linear_extrude) to subtract from your part.

EDIT: Note that it is not always wise to create your own spacing/grid structure enclosed in the model. Please read the accepted answer of this question. That answer explains that slicer applications work best with true solids!
https://discuss.codechef.com/questions/3659/cielquak-editorial
HARD

# EXPLANATION

At first, we consider the cases where C is small; for example, assume C ≤ 50. Then the problem can be solved using dynamic programming (DP). In the DP, we calculate the probability that each state occurs. When we focus on the intersection (r, c), the states are the connection relationships among the intersections (r-1, c), (r-2, c), ..., (1, c), (R, c-1), (R-1, c-1), ..., (r, c-1) and (1, 1). We ignore the states in which (1, 1) doesn't connect with any other intersection. Next, our focus moves to the intersection (r+1, c); if r = R then we move to (1, c+1). We can calculate the probabilities of the new states using the probabilities that the roads from (r, c) to (r+1, c), and from (r+1, c-1) to (r+1, c), are destroyed. It takes O(S * R^2) time to construct the relationships among the states, and O(S * R * C) time for the DP, where S is the number of states. If R = 8, then the number of states is S = 6435.

Next we consider the cases with larger C. Let the answer be X(R, C, p). Then X(r, c+1, p) / X(r, c, p) converges to some constant as c goes to infinity. Let Cmax be large enough, and let L(r, p) = X(r, Cmax, p) / X(r, Cmax-1, p). Then we can calculate the answer as X(R, Cmax, p) * L(R, p)^(C - Cmax) for large C. If p is large, X(r, c+1, p) / X(r, c, p) converges very fast; otherwise the answer converges to 0 very fast. So we can check that around 40-50 for Cmax is enough for this problem. Therefore we obtain an O(S * R^2 + S * R * Cmax) time solution.

Of course, one question remains: why should X(r, c+1, p) / X(r, c, p) converge? Let P(s, c) be the probability that state s occurs when we focus on the intersection (1, c), and let the normalized version be N(s, c) = P(s, c) / (P(1, c) + P(2, c) + ... + P(S, c)). If it is shown that N(s, c) converges as c goes to infinity for all 1 ≤ s ≤ S, then it is clear that X(r, c+1, p) / X(r, c, p) converges. It is a well-known fact that an ergodic Markov chain has a unique equilibrium distribution.
(See Wikipedia: Markov chain for details.) This problem has a very similar structure; therefore X(r, c+1, p) / X(r, c, p) converges. I omit a rigorous proof, but some clues for a proof are noted here. We can use the Perron-Frobenius theorem. Let A be the transition matrix between the sets of states of two consecutive columns. Note that the number of states is different from the above algorithm: because we focus only on the intersection (1, c) here, the number of reachable states is smaller. Let the eigenvalues of A be E1, E2, ... (|E1| ≥ |E2| ≥ ...). Since A is non-negative, irreducible, and aperiodic, |E1| > |E2| by the Perron-Frobenius theorem. Moreover, the Perron-Frobenius theorem says the only eigenvectors whose components are all positive are those associated with the eigenvalue E1. Therefore N(s, c) converges to some element of that eigenvector.

SETTER'S SOLUTION Can be found here.

TESTER'S SOLUTION Can be found here.

question asked: 09 Nov '12, 19:39 · last updated: 12 Nov '12, 13:25
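The convergence argument can be seen numerically on any small positive matrix: repeatedly applying it to a probability vector makes the ratio of successive total masses settle geometrically to the dominant eigenvalue E1, just as X(r, c+1, p) / X(r, c, p) settles to L(r, p). A toy illustration (Python; the 3x3 matrix is made up and only stands in for the real column-to-column transition matrix A):

```python
# Toy stand-in for the column-to-column transition matrix A:
# all entries are positive, so the Perron-Frobenius theorem applies.
A = [[0.5, 0.2, 0.1],
     [0.3, 0.4, 0.2],
     [0.1, 0.3, 0.5]]

def step(v):
    # One "column" of the DP: v' = A v.
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

v = [1.0, 0.0, 0.0]          # initial state distribution
ratios = []
prev_total = sum(v)
for _ in range(60):
    v = step(v)
    total = sum(v)
    ratios.append(total / prev_total)   # analogue of X(c+1)/X(c)
    prev_total = total

# Successive ratios converge (geometrically, at rate |E2|/|E1|)
# to the dominant eigenvalue E1 of A.
assert abs(ratios[-1] - ratios[-2]) < 1e-12
assert 0.8 < ratios[-1] < 0.9
```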
https://zbmath.org/?q=an:0701.62087
# zbMATH — the first resource for mathematics

AR(1) processes with given moments of marginal distribution. (English) Zbl 0701.62087

Summary: Let $X_t$ be an AR(1) process given by $X_t = bX_{t-1} + e_t$ where $b \in (-1,1)$ and $e_t$ is a strict white noise. Sometimes $X_t$ must also satisfy some additional conditions, e.g. $X_t \geq 0$ or $C \leq X_t \leq D$. The problem solved in the paper is how to find a distribution of $e_t$ such that the moments $E X_t^k$ $(k=1,\dots,n)$ have given values.

##### MSC:

62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)

Full Text:

##### References:

[1] N. I. Achiezer: Klassičeskaja problema momentov i nekotoryje voprosy analiza svjazannyje s neju. Gos. izd. fiz.-mat. lit., Moskva 1961.
[2] J. Anděl: Marginal distributions of autoregressive processes. Trans. Ninth Prague Conf. on Inform. Theory, Statist. Dec. Functions, Random Processes, Academia, Prague 1983, pp. 127-135.
[3] J. Anděl: On linear processes with given moments. J. Time Ser. Anal. 8 (1987), 373-378. · Zbl 0633.60056 · doi:10.1111/j.1467-9892.1987.tb00001.x
[4] J. Anděl, M. Garrido: On stationary distributions of some time series models. Trans. Tenth Prague Conf. on Inform. Theory, Statist. Dec. Functions, Random Processes, Academia, Prague 1988, pp. 193-202.
[5] J. Anděl, V. Dupač: An extension of the Borel lemma. Comment. Math. Univ. Carolin. 32 (1989), 405-407. · Zbl 0678.60030
[6] J. Anděl, K. Zvára: Simulation methods in time series. Proc. 2nd Internat. Symp. on Numerical Analysis, Prague 1987 (I. Marek, ed.). Teubner-Texte zur Mathematik 107, Teubner, Leipzig 1988, pp. 99-113.
[7] G. P. Chamitov: Imitacija slučajnych processov. Izd. Irkutskogo instituta narodnogo chozjajstva, Irkutsk 1983.
[8] D. P. Gaver, P. A. W. Lewis: First-order autoregressive gamma sequences and point processes. Adv. Appl. Probab. 12 (1980), 727-745. · Zbl 0453.60048 · doi:10.2307/1426429
[9] M. T. Krejn, A. A. Nudelman: Problema momentov Markova i ekstremalnyje zadači. Izd. Nauka, Moskva 1973.
[10] M. T. Krein, A. A. Nudelman: The Markov Moment Problem and Extremal Problems. (Transl. of Math. Monographs, Vol. 50.) American Mathematical Society, Providence 1977.
[11] A. J. Lawrance, P. A. W. Lewis: Modelling and residual analysis of nonlinear autoregressive time series in exponential variables. J. Roy. Statist. Soc. Ser. B 47 (1985), 165-202. · Zbl 0579.62075
[12] J. S. Ramberg, E. J. Dudewicz, P. R. Tadikamalla, E. F. Mykytka: A probability distribution and its uses in fitting data. Technometrics 21 (1979), 201-214. · Zbl 0403.62004 · doi:10.2307/1268517
[13] M. M. Sondhi: Random processes with specified spectral density and first-order probability density. Bell System Technical J. 62 (1983), 679-701.
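For a stationary AR(1) with |b| < 1, the first two marginal moments follow directly from the recursion: E X = E e / (1 − b) and Var X = Var e / (1 − b²). A short simulation (Python, with an arbitrary b and Gaussian noise; the paper's actual concern is choosing a non-Gaussian distribution for e_t, which this sketch does not attempt) illustrates the moment relations the problem builds on:

```python
import random

random.seed(1)
b = 0.6          # AR(1) coefficient, arbitrary test value in (-1, 1)
n = 200_000

x, xs = 0.0, []
for _ in range(n):
    x = b * x + random.gauss(0.0, 1.0)   # X_t = b*X_{t-1} + e_t
    xs.append(x)

mean = sum(xs) / n
var = sum((v - mean) ** 2 for v in xs) / n

# Stationary theory: E X = 0, Var X = sigma_e^2 / (1 - b^2) = 1.5625 here.
assert abs(mean) < 0.05
assert abs(var - 1 / (1 - b**2)) < 0.05
```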
http://librestats.com/
I have a lot of projects that will take ages to finish (some are in such poor shape that I tuck them away in private repositories, so no one can see my shame). So sometimes it's nice to just take a weekend and crank out something start to finish, even if it's dumb and no one cares about it and fewer people want it. Which brings us to the matter at hand. There's a relatively famous XKCD comic ...

## 1 + 2 + 3 + ... is not equal to -1/12, you gullible rubes

For some reason, a relatively uninteresting observation of Ramanujan has become a bit of an internet celebrity among factoids lately. Someone who's out to generate link bait makes a post stating that $1+2+3+\dots = \frac{-1}{12}$. Now look. I appreciate trolling as much as the next jerk, but people who really ought to know better are getting tricked by this. In the usual sense of convergence, ...

## Rules for Naming Objects in R

How are objects allowed to be named in R? As it turns out, this is a very different question from how objects should be named. This isn't about style conventions, camelCase, dots.versus_underscores, or anything like that; this is about what is strictly possible. I do a lot of outreach to HPC people who are starting to get an interest in R, often because their users are starting ...

## Intentionally Writing Obtuse Code

Sometimes intentionally writing bad code can be a lot of fun. Now here, when I say "bad", I mean something that's functional but completely incoherent to anything but the machine. There are even competitions for this kind of thing, but I only consider myself a dabbler in this dark art. Thankfully, it's often pretty easy to make obtuse code in R. Given that I have a weird admiration for bad code ...

## Update

I've been too quiet around here for too long. I plan to rectify that soon. As for why I've been absent for so long, the very short answer is that I lost the energy.
I'm working on a lot of really new and exciting software projects (~6 of them side projects I work on in my free time, mostly by myself). Combine this with an aggressive work schedule, lots of traveling for the summer conference season, ...

## The fizzbuzz that Fortran Deserves

I've talked about the toy programming problem fizzbuzz on this blog a few times. In R, you might do something like this: [code snippet not preserved] Or to be a bit more R-ish, you might do something like: [code snippet not preserved] But there's a problem. That's all too readable! When I look at it, I know exactly what it's doing. Enter Fortran. I spend most of my life programming in Fortran these days --- which I then hook ...

## pbdR Updates - Distributed lm.fit() and More

Over the weekend, we updated all of the pbdR packages currently available on the CRAN. The updates include tons of internal housecleaning as well as many new features. Notably, pbdBASE_0.1-1 and pbdDMAT_0.1-1 were released, which contain lm.fit() methods. This function in particular has been available at my github for over a month, but didn't make its way to the CRAN until recently because of ...

## R at 12,000 Cores

I am very happy to introduce a new set of packages that has just hit the CRAN. We are calling it the Programming with Big Data in R Project, or pbdR for short (or as I like to jokingly refer to it, 'pretty bad for dyslexics'). You can find out more about the pbdR project at http://r-pbd.org/ The packages are a natural programming framework that are, from the user's point of view, a very simple ...
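Coming back to the 1 + 2 + 3 + ... post above: in the ordinary sense of convergence the series plainly diverges, since the partial sums are the triangular numbers n(n+1)/2, and the famous -1/12 only appears through analytic continuation of the zeta function, never as a limit of partial sums. A two-line check (Python):

```python
# Partial sums of 1 + 2 + 3 + ... are the triangular numbers n(n+1)/2;
# they grow without bound, so the series never approaches -1/12.
partial = 0
sums = []
for n in range(1, 11):
    partial += n
    sums.append(partial)

assert sums == [n * (n + 1) // 2 for n in range(1, 11)]
assert sums[-1] == 55 and all(s > 0 for s in sums)
```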
https://email.esm.psu.edu/pipermail/macosx-tex/2009-November/041853.html
# [OS X TeX] amsmath and txfonts in MacTeX 2009

Raphael Attie raphael.attie at mac.com
Wed Nov 11 19:06:37 EST 2009

It seems there's a conflict between these two packages: {txfonts} and {amsmath}

\documentclass{article}
\usepackage{txfonts}
\usepackage{amsmath}
\begin{document}
blablaa
\end{document}

It was working fine in version 2007. Don't know about 2008. So, what would be the package that would be at least the intersection (algebraically speaking) of both packages? Or does simply one of the two contain the other?

Thanks

Raphael Attie
Ph.D Student
Max Planck Institute for Solar System Research
Max Planck Str. 2
37191 Katlenburg-Lindau
Germany
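A commonly suggested workaround for this kind of clash (untested against MacTeX 2009 specifically) is simply to reverse the loading order: txfonts redefines a number of math symbols and is generally meant to be loaded after amsmath, so that its redefinitions land on top of amsmath's setup rather than the other way around:

```latex
\documentclass{article}
% Load amsmath first; txfonts then overrides the math setup cleanly.
\usepackage{amsmath}
\usepackage{txfonts}
\begin{document}
blablaa
\end{document}
```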
https://www.groundai.com/project/measurement-of-planet-masses-with-transit-timing-variations-due-to-synodic-chopping-effects/
Planet masses with TTV

# Measurement of planet masses with transit timing variations due to synodic “chopping” effects

## Abstract

Gravitational interactions between planets in transiting exoplanetary systems lead to variations in the times of transit that are diagnostic of the planetary masses and the dynamical state of the system. Here we show that synodic “chopping” contributions to these transit timing variations (TTVs) can be used to uniquely measure the masses of planets without full dynamical analyses involving direct integration of the equations of motion. We present simple analytic formulae for the chopping signal, which are valid (with generally small error) for modest eccentricities. Importantly, these formulae primarily depend on the mass of the perturbing planet, and therefore the chopping signal can be used to break the mass/free-eccentricity degeneracy which can appear for systems near first order mean motion resonances. Using a harmonic analysis, we apply these TTV formulae to a number of Kepler systems which had been previously analyzed with full dynamical analyses. We show that when chopping is measured, the masses of both planets can be determined uniquely, in agreement with previous results, but without the need for numerical orbit integrations. This demonstrates how mass measurements from TTVs may primarily arise from an observable chopping signal. The formula for chopping can also be used to predict the number of transits and timing precision required for future observations, such as those made by TESS or PLATO, in order to infer planetary masses through analysis of TTVs.

planetary systems

## 1. Introduction

In a multi-planet system, mutual gravitational interactions between planets lead to deviations from Keplerian orbits. In particular, the instantaneous orbital periods are no longer constant, which in turn implies that transiting planets in multi-planet systems will not transit at a fixed, constant rate.
The detection of these changes in the transit rate, or ‘transit-timing variations’ (TTVs), was initially recognized as a way to infer the presence of non-transiting planets in systems with at least one other transiting planet (Schneider, 2003; Agol et al., 2005; Holman & Murray, 2005; Miralda-Escudé, 2002). TTVs have since been used to confirm that a transit light curve signal is due to a planetary transit (e.g. Holman et al., 2010), to constrain planetary orbital elements and measure planetary masses using photometry alone (e.g. Carter et al., 2012), and to detect and characterize non-transiting planets (e.g. Ballard et al., 2011; Nesvorný et al., 2012). TTV data are most commonly analyzed through inversion, a process through which observed transit times are fit using a model of gravitationally interacting planets in order to determine the system parameters, including planetary masses relative to the mass of the star, as well as orbital elements. Transiting exoplanets generally have well constrained radii, so measurements of their masses yield information regarding densities, bulk compositions and gravities. This in turn can be used to identify promising targets for atmospheric characterization and to constrain planetary formation and dynamical evolution (e.g. Hansen & Murray, 2013). Of the orbital elements, constraints on the planetary eccentricities in particular are necessary to understand the importance of interaction with the protoplanetary disk, interactions with remnant planetesimals, and tidal dissipation (e.g. Lithwick & Wu, 2012; Hansen & Murray, 2013; Batygin & Morbidelli, 2013; Hansen & Murray, 2014; Mahajan & Wu, 2014). However, the TTV inversion problem is often complicated by strong nonlinear correlations between parameters in a large dimensional space, and as a result precise planetary mass and orbit measurements can be difficult to make.
Many of the Kepler multi-planet systems with partially characterized planetary orbits and masses are those near first order mean motion resonances, a configuration in which the period ratio P2/P1 of two planets is close to jR/(jR − 1), where jR is an integer greater than unity, P1 is the period of the inner planet, and P2 is the period of the outer planet. Indeed, for nearly circular orbits and given planet-to-star mass ratios, TTVs are largest in amplitude near first-order mean motion resonances (e.g. Agol et al., 2005; Holman & Murray, 2005). If the planets are near to, but not in resonance, then the planets show sinusoidal variations with a period equal to the ‘super-period’,

$$P_j = \frac{1}{\left|\,j_R/P_2 - (j_R-1)/P_1\,\right|} \quad (1)$$

(Agol et al., 2005; Lithwick et al., 2012). However, the amplitude of this TTV signal depends on both the mass of the perturbing planet and the eccentricity vectors of both planets (Lithwick et al., 2012). This degeneracy can be broken statistically with analyses of a large number of planetary systems (Hadden & Lithwick, 2013) or for systems with very precisely measured transit times; however in practice it inhibits the measurement of the masses and limits our knowledge of the eccentricities of individual planetary systems. In spite of this mass-eccentricity degeneracy (and others), it has been possible in some cases to precisely measure the masses of planets using TTVs (e.g. Carter et al., 2012; Nesvorný et al., 2013; Masuda, 2014; Dreizler & Ofir, 2014a; Nesvorny et al., 2014). The successful mass measurements in these systems are due to the fact that an additional, independent periodic component of the TTVs, with a timescale other than the super-period, was resolved. Other components of TTVs have amplitudes that depend on the orbital parameters and masses in different ways, and so the measurement of secondary components leads to additional, independent constraints on orbital parameters.
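As a concrete illustration, the super-period of Equation 1 is straightforward to evaluate; the periods below are hypothetical, chosen only to show how long the TTV timescale becomes for a pair just wide of the 2:1 resonance:

```python
def super_period(p1, p2, j_r):
    """TTV 'super-period' (Eq. 1) for a pair near the j_r : j_r - 1 resonance."""
    return 1.0 / abs(j_r / p2 - (j_r - 1) / p1)

# Hypothetical pair just wide of the 2:1 commensurability: P2/P1 = 2.02
p1, p2 = 10.0, 20.2  # orbital periods in days (illustrative values)
print(super_period(p1, p2, 2))  # about 1010 days, two orders of magnitude > P1
```

A period ratio only 1% from exact commensurability thus yields a sinusoidal TTV with a timescale roughly one hundred times the orbital periods themselves.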
In particular, the so-called short-timescale “chopping” TTV associated with the planetary synodic timescale has been identified as an important feature for unique characterization of systems (Holman et al., 2010; Nesvorný et al., 2013). More recently, Nesvorný & Vokrouhlický (2014) studied this chopping TTV to clarify how, despite degeneracies between parameters, the TTV method can be used to measure planetary masses in the case of low-eccentricity orbits. In Section 2, we begin by describing a harmonic approach to analyzing TTVs. We review the work of Lithwick et al. (2012) in Section 3, and discuss the TTV signal for a system near a first order mean motion resonance (referred to hereafter as the “Lithwick et al. formula”). We then introduce the conjunction effect and give analytic formulae for the chopping signal in Section 4; these were derived first in Agol et al. (2005) and more recently, using a similar approach, in Nesvorný & Vokrouhlický (2014). We show how the chopping formula encompasses near-resonant effects, and more generally consider the range of validity of the synodic TTV formulae. We then place the synodic TTV expression in context with that of Lithwick et al. and discuss the regimes in which each should be used. In Section 4.4, we address the more general problem of using these formulae to predict, given the timing precision on the transits, how many observations are required to infer the masses of a particular system. This will be important in the planning of follow-up observations of partially characterized systems and for estimating the timing precision required for future surveys to obtain a measurement of planetary masses through chopping. In Section 5, we apply the synodic TTV formula to Kepler data, and use it to infer planetary masses for systems both near and far from mean motion resonance. The synodic formulae can be used alone to determine planetary masses, or in combination with the Lithwick et al.
formula for systems near first order mean motion resonances, in which case the mass-free eccentricity degeneracy can be broken and a constraint on the free eccentricities can be determined as well. We present our conclusions in Section 6. In the Appendix we give an alternate derivation of the synodic TTV formulae based on Hamiltonian perturbation theory, and we discuss the convergence of the series for the synodic chopping signal.

## 2. Transit aliasing and harmonic analysis of TTVs

The TTVs of a planet can be written as a combination of periodic components with frequencies that are integer combinations of the two interacting planets’ orbital frequencies, pn1 − qn2, where p and q are integers, and n1 and n2 are the mean motions of the two planets (Nesvorný & Morbidelli, 2008; Nesvorný, 2009; Nesvorný & Beaugé, 2010). More explicitly, transit-timing variations for a two planet system can be expanded as:

$$\delta t_{1,k} = \frac{P_1 m_2}{M_\star}\sum_{p,q}\left[a_{1,p,q}\cos[(pn_1-qn_2)t_{1,k}]+b_{1,p,q}\sin[(pn_1-qn_2)t_{1,k}]\right], \quad (2)$$
$$\delta t_{2,k} = \frac{P_2 m_1}{M_\star}\sum_{p,q}\left[a_{2,p,q}\cos[(pn_1-qn_2)t_{2,k}]+b_{2,p,q}\sin[(pn_1-qn_2)t_{2,k}]\right], \quad (3)$$

where k denotes the transit number, a_{i,p,q} (b_{i,p,q}) are coefficients which are functions of the orbital elements of the planets (except the mean longitudes λi), and therefore vary on timescales long compared to the orbital period; m1 and m2 are the masses of the two planets; M⋆ is the mass of the star; and t_{i,k} is the kth transit time of the ith planet. We assume that the observation baseline is short compared to the secular timescales (which are typically “long” since they are proportional to the orbital period divided by the planet-star mass ratio) so that treating the coefficients as constant is justified. We will discuss this further in Section 4.2. Note that the transit timing variations scale in proportion to the orbital period of each transiting planet and in proportion to the mass ratio of the perturbing planet. (These equations and scaling relations do not apply in mean-motion resonance (Agol et al., 2005).)
The transit times are converted to transit timing variations after removing a mean ephemeris, t_{i,k} ≈ t_{i,0} + P_i k, where k is an integer (Agol et al., 2005). Since transit timing variations are typically much smaller than the planetary orbital periods, δt ≪ P, the planets’ transits are (nearly) sampled on their orbital frequencies, so

$$\delta t_{1,k} \approx \frac{P_1 m_2}{M_\star}\sum_{p,q}\left[a_{1,p,q}\cos[(pn_1-qn_2)(t_{1,0}+P_1 k)]+b_{1,p,q}\sin[(pn_1-qn_2)(t_{1,0}+P_1 k)]\right], \quad (4)$$
$$\delta t_{2,k} \approx \frac{P_2 m_1}{M_\star}\sum_{p,q}\left[a_{2,p,q}\cos[(pn_1-qn_2)(t_{2,0}+P_2 k)]+b_{2,p,q}\sin[(pn_1-qn_2)(t_{2,0}+P_2 k)]\right], \quad (5)$$

where we have dropped terms of order δt/P. However, since n_i P_i = 2π, these equations can be rewritten as:

$$\delta t_{1,k} \approx \frac{P_1 m_2}{M_\star}\sum_{q}\left[a'_{1,q}\cos\!\left(2\pi q \frac{P_1}{P_2} k\right)+b'_{1,q}\sin\!\left(2\pi q \frac{P_1}{P_2} k\right)\right], \quad (6)$$
$$\delta t_{2,k} \approx \frac{P_2 m_1}{M_\star}\sum_{p}\left[a'_{2,p}\cos\!\left(2\pi p \frac{P_2}{P_1} k\right)+b'_{2,p}\sin\!\left(2\pi p \frac{P_2}{P_1} k\right)\right], \quad (7)$$

where, e.g. for the inner planet, the sum over p is now implicit: for a specific value of q, the coefficient for each possible value of p has been absorbed into the new coefficients (for the outer planet, the sum over q is now implicit). Therefore, each coefficient now contains variations due to multiple linearly independent frequencies of the form pn1 − qn2. But because of the sampling of the TTV on the transiting planet’s orbital period, frequencies that differ by integer multiples of the orbital frequency of the transiting planet are indistinguishable in transit timing variations. Therefore short-period TTVs (arising from short-timescale periodic terms in the variation of the orbital elements) can contribute to the same harmonic of the perturbing planet as resonant TTVs, those arising from any near-resonant configuration. For example, in a system near the 2:1 resonance, the fast synodic harmonic at frequency 2(n1 − n2) will, due to the discrete sampling at every transit of the inner planet, be aliased such that it appears as a sinusoid with the super-period, i.e. the same timescale as the resonant frequency (akin to the stroboscopic effect in which a spinning car wheel appears to be rotating at a slower rate due to the sampling rate).
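This stroboscopic aliasing can be verified directly: near the 2:1 resonance the fast chopping frequency 2(n1 − n2) and the slow resonant frequency 2n2 − n1 sum exactly to n1, so when sampled only at the inner planet's transits they are indistinguishable up to a sign. A minimal numerical check, with hypothetical periods:

```python
import math

# Near 2:1, the fast chopping harmonic 2(n1 - n2), sampled at the inner
# planet's transits t_k = k*P1, aliases onto the slow resonant frequency
# 2*n2 - n1, because 2(n1 - n2) + (2*n2 - n1) = n1 and n1*P1 = 2*pi.
P1, P2 = 10.0, 20.4           # illustrative periods just wide of 2:1
n1, n2 = 2 * math.pi / P1, 2 * math.pi / P2

f_fast = 2 * (n1 - n2)        # j = 2 synodic (chopping) frequency: fast
f_res = 2 * n2 - n1           # resonant frequency: slow ("super-period")

for k in range(0, 50, 7):
    t = k * P1                # sample only at transits of the inner planet
    assert abs(math.sin(f_fast * t) - (-math.sin(f_res * t))) < 1e-9
print("fast synodic harmonic aliases onto the resonant frequency")
```

The identity is exact at the sample times; between transits the two sinusoids are of course completely different, which is precisely why the aliasing occurs.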
The inner planet will have apparent TTV frequencies that are integer multiples of the perturbing planet’s orbital frequency, n2, while the outer planet TTVs will show frequencies that are integer multiples of the inner planet’s frequency, n1. If the perturbing planet does not transit the host star, this aliasing can mask the true period of the perturbing planet since we can add integer multiples of the transiting planet’s orbital frequency without affecting the TTV. If both planets transit the star, though, then both periods and orbital phases are known. Consequently, TTVs can be fit using a harmonic analysis of the perturbing planet’s orbital frequency if that planet has been identified. We utilize this harmonic analysis below to identify components of the TTV that are caused by the conjunctions of the planets (Nesvorný & Vokrouhlický, 2014), and thus depend on the difference between their mean longitudes, the synodic angle ψ = λ1 − λ2. But first we turn to a review of TTVs due to first-order resonant terms.

## 3. First-Order Resonant TTV signal

Here we summarize the formula for the largest amplitude component of transit timing variations of a pair of planets near a first order mean motion resonance, as derived by Lithwick et al. (2012), and remind the reader of the origin of the mass-eccentricity degeneracy. Lithwick et al. (2012) considered a system of two planets near the jR : jR − 1 first order resonance, in which case the TTVs take the following approximate form:

$$\delta t_1 = \Re\left[-i V_1 e^{i\lambda_{j_R}}\right] \quad (8)$$
$$\delta t_2 = \Re\left[-i V_2 e^{i\lambda_{j_R}}\right]; \quad (9)$$

here ℜ[Z] denotes the real part of Z, and

$$V_1 = \frac{P_1}{\pi}\,\frac{1}{j_R^{2/3}(j_R-1)^{1/3}\Delta}\,\frac{m_2}{M_\star}\left[-f-\frac{3 Z^*_{\rm free}}{2\Delta}\right] \quad (10)$$
$$V_2 = \frac{P_2}{\pi}\,\frac{1}{j_R \Delta}\,\frac{m_1}{M_\star}\left[-g+\frac{3 Z^*_{\rm free}}{2\Delta}\right] \quad (11)$$
$$\lambda_{j_R} = j_R\lambda_2-(j_R-1)\lambda_1, \quad (12)$$

with

$$Z_{\rm free} = f\,e_{{\rm free},1}\,e^{i\varpi_1}+g\,e_{{\rm free},2}\,e^{i\varpi_2} \quad (13)$$
$$\Delta = \frac{P_2}{P_1}\,\frac{j_R-1}{j_R}-1, \quad (14)$$

where f and g are combinations of Laplace coefficients (note that f < 0 and g > 0), e_{free,i} are the free eccentricities, and ϖ_i are the longitudes of pericenter of each of the planets. The reference direction from which longitudes are measured is the observer’s line of sight, so that λ = 0 at transit.
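Equations 10 and 11 make the origin of the mass/free-eccentricity degeneracy explicit: the amplitude is a product of the perturber's mass and a bracket containing Z*free, so different (mass, free eccentricity) pairs can yield identical amplitudes. A minimal numerical illustration, with hypothetical values of f, Δ, and the mass ratios (f near the 2:1 resonance is of order −1, but the exact value does not matter for the demonstration):

```python
import math

def V1_amplitude(p1, m2_ratio, j_r, delta, f, z_free):
    """|V1| of Eq. (10); m2_ratio = m2/M*, z_free is a complex number."""
    pref = p1 / (math.pi * j_r ** (2 / 3) * (j_r - 1) ** (1 / 3) * delta)
    return abs(pref * m2_ratio * (-f - 1.5 * z_free.conjugate() / delta))

p1, j_r, delta, f = 10.0, 2, 0.02, -1.19     # hypothetical near-2:1 system
a = V1_amplitude(p1, 3e-5, j_r, delta, f, 0j)   # low mass, zero free ecc.
z_b = complex((-f / 2) * delta / 1.5, 0.0)      # compensating free ecc.
b = V1_amplitude(p1, 6e-5, j_r, delta, f, z_b)  # twice the mass
assert abs(a - b) < 1e-12
print("degenerate amplitudes:", a, b)
```

Without an independent constraint (such as the chopping signal discussed in Section 4), these two configurations cannot be distinguished from the resonant TTV amplitude alone.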
Note that V1, V2, and Zfree are complex quantities; however, all observables are found by taking the real components as in Equation 8. (Z∗ is the complex conjugate of Z, and ℜ[Z] and ℑ[Z] denote the real and imaginary components of a complex number Z). In the derivation of these formulae, only the two resonant terms associated with the jR : jR − 1 resonance at first order in eccentricity are considered. All other terms in the gravitational potential between the planets, even those independent of eccentricity, are neglected since near resonance the TTVs they produce are very small compared to those caused by the resonant terms (due to the small denominator Δ appearing in the amplitudes given in Equations 10 and 11). The total eccentricity of a planet near a first-order mean motion resonance is made up of “free” and “forced” components; the forced eccentricity is driven by resonant interactions between the planets while the free eccentricity is determined by matching the initial eccentricity vector (the initial conditions). The free eccentricity vector can be approximated as constant on observational timescales (it varies on the long secular timescale) while the forced eccentricity vector precesses on the resonant timescale (the super-period). Both components affect the TTVs: the forced eccentricity is responsible for the first term in the brackets in Equations 10 and 11 (depending on the coefficients f and g), while the free eccentricities are responsible for the Z∗free term. The amplitude of the free eccentricity is not necessarily smaller or larger than that of the forced eccentricity. In the case of two transiting planets, the unknown quantities are m1, m2, ℜ[Zfree], and ℑ[Zfree], and, in principle, there are four measurable quantities: both amplitudes and both phases. However, as pointed out in Lithwick et al.
(2012), in both the limit of a vanishing free eccentricity and the limit of a dominant free eccentricity the TTVs are approximately anti-correlated, and so in either of these regimes the relative phase is no longer a constraining quantity (if the signal-to-noise is insufficient to measure a phase offset). This leads to the degeneracy between mass and free eccentricities, and hence from this first-order TTV alone one in practice cannot usually determine the masses or the combination of free eccentricities Zfree. Finally, since the TTVs are sampled for the inner [outer] planet at λ1 = 0 [λ2 = 0], these expressions become:

$$\delta t_1 = \Re\left[-iV_1\exp(i j_R\lambda_2)\right] \quad (15)$$
$$\delta t_2 = \Re\left[-iV_2\exp(i (j_R-1)\lambda_1)\right], \quad (16)$$

Actually, Lithwick et al. have taken advantage of the aliasing effect to create these expressions. The component of the TTVs due to the forced eccentricity has a jRλ2 dependence for the inner planet and a (jR − 1)λ1 dependence for the outer planet (see their Equations (A15) and (A24)). Because these appear at the same aliased frequency as the resonant term, they are included with the resonant TTV. Since the reference direction was chosen such that λ = 0 at transit, these two terms have the same phase and can be grouped together in this way.

## 4. The Synodic Chopping Signal

A pair of planets interacts most strongly when the distance between them is smallest. For low eccentricity and nearly coplanar orbits, this occurs at conjunction, when the synodic angle vanishes (ψ = λ1 − λ2 = 0). Conjunctions occur periodically, with a timescale of 2π/(n1 − n2), and it is intuitive to expect that this timescale would appear in the TTVs. In fact, at zeroth order in the eccentricities, the transit timing variations only depend on the synodic angle and its harmonics,

$$\psi_j = j(\lambda_1-\lambda_2); \quad (17)$$

see Appendix for more detail.

### 4.1. Synodic chopping signal formulae to zeroth order in the free eccentricities

The synodic chopping signal is included in the computations in Agol et al. (2005) and Nesvorný & Vokrouhlický (2014).
We also give a distinct derivation in the Appendix here which casts the expressions in a form useful for the harmonic analysis described in Section 2. All three formulae agree in the limit that the reflex motion of the star can be ignored (after correcting a typo in Nesvorný & Vokrouhlický (2014); see Appendix for more detail). For an inner transiting planet, the synodic component of the transit timing variations takes the form:

$$\delta t_1=\sum_{j=1}^{\infty}\frac{P_1}{2\pi}\frac{m_2}{M_*}\,f^{(j)}_1(\alpha)\sin\psi_j, \quad (18)$$

where

$$f^{(j)}_1(\alpha)=\frac{-\alpha j(\beta^2+3)\,b^{(j)}_{1/2}(\alpha)+2\beta\,D_\alpha b^{(j)}_{1/2}(\alpha)-\alpha\,\delta_{j,1}(\beta^2+2\beta+3)}{\beta^2(\beta^2-1)}, \quad (19)$$

where β = j(n1 − n2)/n1, b^{(j)}_{1/2}(α) is the Laplace coefficient

$$b^{(j)}_{1/2}(\alpha)=\frac{1}{\pi}\int_0^{2\pi}\frac{\cos(j\theta)}{\sqrt{1-2\alpha\cos\theta+\alpha^2}}\,d\theta, \quad (20)$$

and D_α is the derivative operator α ∂/∂α. The Laplace coefficients can be evaluated in terms of complete elliptic integrals; for example:

$$b^{(1)}_{1/2}(\alpha) = \frac{4\left(K(\alpha)-E(\alpha)\right)}{\alpha\pi}, \quad (21)$$
$$b^{(1)}_{1/2}(\alpha)+\alpha\frac{\partial b^{(1)}_{1/2}(\alpha)}{\partial\alpha} = \frac{4\alpha E(\alpha)}{\pi(1-\alpha^2)}, \quad (22)$$

where K and E are the complete elliptic integrals of the first and second kinds, respectively. For an outer transiting planet, the synodic component of the transit timing variations takes the form:

$$\delta t_2=\sum_{j=1}^{\infty}\frac{P_2}{2\pi}\frac{m_1}{M_*}\,f^{(j)}_2(\alpha)\sin\psi_j, \quad (23)$$

where

$$f^{(j)}_2(\alpha)=\frac{j(\kappa^2+3)\,b^{(j)}_{1/2}(\alpha)+2\kappa\left(D_\alpha b^{(j)}_{1/2}(\alpha)+b^{(j)}_{1/2}(\alpha)\right)-\alpha^{-2}\,\delta_{j,1}(\kappa^2-2\kappa+3)}{\kappa^2(\kappa^2-1)}, \quad (24)$$

where κ = j(n1 − n2)/n2. In the limit of large j, the functions f^{(j)}_1 and f^{(j)}_2 have leading coefficients which scale like α^j (see Appendix), so that the largest contributions to the sum in Equation 18 and in Equation 23 in general come from smaller values of j. Because of the behavior of the leading coefficients, more terms are necessary to faithfully approximate the TTV signal as α → 1 because the convergence of the sums is slow. Closely spaced planets will, at a given j, have a larger synodic TTV than more widely spaced planets because the parameters β and κ appearing in the denominators approach zero as α → 1. Additionally, if the pair is near the jR : jR − 1 mean motion resonance, the denominators β² − 1 and κ² − 1 will be close to zero for the term with j = jR (for the inner planet) and j = jR − 1 (for the outer planet).
Near the resonant configuration, then, these terms dominate the synodic TTV because of these small denominators. This reflects the fact that near the mean motion resonance, the time between conjunctions approximately corresponds to jR − 1 orbits of the outer planet and jR orbits of the inner one, such that most of the TTV amplitude is incorporated in these harmonics of the synodic angle. Additionally, these particular harmonics of the synodic angle contribute to the TTV as a long period effect. Due to aliasing, they have a timescale given by the super-period of the first order resonance. The magnitudes of the functions f^{(j)}_1 and f^{(j)}_2 are plotted in Figure 1. It is clear that the synodic amplitude generally grows as α → 1, for all j, and therefore more terms must be included in the sum for close pairs of planets. Furthermore, when the period ratio is close to the jR : jR − 1 mean motion resonance, the functions with j = jR (inner planet) and j = jR − 1 (outer planet) peak for the reasons discussed above. This happens for all j at some period ratio except for the j = 1 synodic TTV of the inner planet, since jR ≥ 2. As noted in Agol et al. (2005), the dip near a period ratio of 2.5 in the synodic signal of the outer planet is due to the fact that the TTV caused by the motion of the star about the barycenter of the inner planet-star binary subsystem is opposite in sign and comparable to that caused by direct interaction with the inner planet at this period ratio. For larger period ratios, the TTV of the outer planet is dominated by the component due to the motion of the star about the barycenter of the inner planet-star binary subsystem. Note that these relations have no eccentricity dependence; they only depend on the semimajor axis ratio of the two planets, α, and the planet-star mass ratios.
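The ingredients of Equations 18-22 are simple to evaluate numerically. The sketch below computes the Laplace coefficient of Equation 20 by quadrature, checks it against the elliptic-integral identity of Equation 21 (with K and E computed by the standard arithmetic-geometric-mean iteration, in the modulus convention), and then inverts a single measured harmonic amplitude into a mass ratio via Equation 18; the amplitude and the value of |f^{(j)}_1| at the end are hypothetical placeholders:

```python
import math

def laplace_b_half(j, alpha, n=20000):
    """Laplace coefficient b^(j)_{1/2}(alpha) by midpoint quadrature (Eq. 20)."""
    h = 2 * math.pi / n
    return (h / math.pi) * sum(
        math.cos(j * (i + 0.5) * h)
        / math.sqrt(1 - 2 * alpha * math.cos((i + 0.5) * h) + alpha ** 2)
        for i in range(n))

def ellip_KE(k):
    """Complete elliptic integrals K(k), E(k), modulus k, via the AGM."""
    a, b, c = 1.0, math.sqrt(1 - k * k), k
    s, p = 0.5 * c * c, 1.0
    while c > 1e-15:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        p *= 2.0
        s += 0.5 * p * c * c
    K = math.pi / (2 * a)
    return K, K * (1.0 - s)

alpha = 0.63  # roughly a 2:1 period ratio, since alpha = (P1/P2)^(2/3)
K, E = ellip_KE(alpha)
direct = laplace_b_half(1, alpha)
identity = 4 * (K - E) / (alpha * math.pi)   # Eq. (21)
assert abs(direct - identity) < 1e-10

# Inverting one harmonic of Eq. (18): m2/M* = 2*pi*A_j / (P1 |f_1^(j)(alpha)|)
amp = 2.0 / 1440.0    # a 2-minute chopping amplitude, in days (hypothetical)
p1, f_j = 35.0, 1.5   # P1 in days; assumed placeholder value of |f_1^(j)|
print("m2/M* =", 2 * math.pi * amp / (p1 * abs(f_j)))
```

Because the integrand of Equation 20 is smooth and periodic, the midpoint rule converges extremely quickly, so modest n suffices for machine-level agreement with the elliptic-integral form.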
The phase and period of the chopping signal are straightforward to measure if the perturbing planet also transits the star; thus, if this synodic chopping signal can be measured, the mass ratio of the perturbing planet can be immediately inferred if the parameters of the system satisfy the major assumptions made in this paper (see Section 4.2).

### 4.2. Range of validity of the synodic formula

The intrinsic assumptions made in deriving these TTV expressions were 1) that the system has low eccentricities and nearly coplanar orbits; 2) that the system is not in or too near resonance; 3) that α is not too close to unity; 4) that the masses of the planets are small compared to that of the star; and 5) that the coefficients themselves can be treated as constant in time. Additionally, if two perturbing planets contribute to the TTVs of a third planet, the true TTV of this third planet cannot be written as only a sum of one of the perturbing planet’s harmonics. In most configurations, we hypothesize that the TTV could be approximated as a simple sum of two “single-perturber” contributions. We will focus on testing the validity of the formula in light of the intrinsic sources of error in the formula. First, we consider the qualification of “small” eccentricities and inclinations. A simple first guess of the order of magnitude of the error from these neglected terms would be that each coefficient changes from its circular-orbit value by a relative amount of order the eccentricity, in which case if the eccentricities and inclinations are only a few percent then the error will also be a few percent. However, whether or not the eccentricities and inclinations can be considered “small” depends on how close the system is to resonance. In general, if one derived a formula for the TTV good to first order in eccentricity, it would have terms at first order in eccentricity depending on angles of the form j(λ1 − λ2) ± λi (not including the longitude of pericenter piece).
In fact, the TTV formulae themselves should have the d’Alembert characteristics since they only depend on angles referenced to a fixed direction (longitudes) (Hamilton, 1994). Due to the transit aliasing effect discussed above, these first-order frequencies will be indistinguishable from variations at harmonics of the frequency n2 for the inner planet, since these appear as the same harmonic of the outer planet (see Equation 15). For the outer planet, these first order (in eccentricity) terms will be indistinguishable from those at harmonics of n1. These first-order terms will have amplitude proportional to the eccentricity, which, away from resonance, should be negligible for all j if the eccentricity is low. But near the jR : jR − 1 resonance, the j = jR term (for the inner planet) and the j = jR − 1 term (for the outer planet) will have amplitudes proportional to e/Δ (appearing as Z∗free/Δ in the formulae of Lithwick et al., Equation 10). The resonant combination of frequencies in the denominator mitigates the effects of the small eccentricity coefficient, so that these terms are large corrections to the TTV formulae at these frequencies, and hence they make the synodic formula for these values of j less accurate. Note that since jR ≥ 2, the synodic frequency, n1 − n2, has no resonant aliases for the inner planet. Thus the harmonic component which depends on ψ1 can be uniquely identified with the synodic frequency variation. If the system is in a mean motion resonance, the dominant, resonant TTV period will be related to the libration time. In this case, the resonant contribution to the TTVs will not appear at the frequency jR n2 for the inner planet and (jR − 1) n1 for the outer planet, since the libration time cannot be written simply as an integer times the orbital frequency. In the resonant case, then, the harmonic analysis approach cannot be used.
It is important to point out that in each of these cases - both near the jR : jR − 1 resonance, where eccentricity errors are larger, and in the mean motion resonance, where the derivation in the Appendix must be modified - the synodic TTV formula for values of j ≠ jR (inner planet) and j ≠ jR − 1 (outer planet) will still apply. In Figure 2, we show how the predicted amplitude of the synodic chopping term compares with that calculated numerically, for both the inner and outer planet, assuming the pair is near the 2:1 resonance. We varied only the period of the outer planet and the eccentricity of both planets (assumed to be equal; the vertical scale is the square root of the sum of the squares of the eccentricities, (e1² + e2²)^{1/2}). The mass of each planet was set to the same fixed value, small compared to that of the star. For each planet, we first determined the TTVs by fitting a line to the transit times, and we then used the computed average orbital periods to determine the value of α used in the synodic formulae, with j = 1 for each planet. We then determined the numerical amplitude of the chopping signal by fitting 10 harmonics of the perturber’s period along with a linear term to 1,000 simulated TTVs of the inner planet and between 460 and 540 simulated TTVs of the outer planet and selecting the j = 1 harmonic. This experiment therefore represents an ideal case where the main source of error comes from the assumptions made in deriving the formulae, and not from issues with not having enough data and/or precision to resolve the amplitudes of the various harmonics. The error in the formula for the inner planet remains small across the entire range studied, except very near the 2:1 resonance itself. This is as expected - near the 2:1 resonance, only the j = 2 contribution to the TTV of the inner planet is expected to have large corrections due to eccentricity effects.
The configurations with the largest errors are likely those in the mean motion resonance, where the harmonic approach does not apply (the libration frequency is not simply aliased with the frequency n2), or those where the super-period is significantly longer than the simulation time (to be discussed at the end of this section). For the outer planet, the j = 1 synodic term is aliased with the resonant term, and hence we expect large errors as a function of eccentricity. Indeed, even relatively far from resonance the synodic formula with j = 1 fails (errors larger than 10%) for eccentricities larger than 0.04. We performed the same numerical experiment varying the number of harmonics fit to the TTVs, and we also decreased the simulation time. We found the same results even with fewer harmonics (in all cases, the number of data points was much larger than the number of free parameters). With shorter simulation times, we also recovered the same results except when the simulation time became significantly shorter than the TTV period (in which case the error is not due to the formula, but due to insufficient coverage). Although we have not calculated the effects of inclination on the TTVs, if the TTVs in fact follow the d’Alembert characteristics then the correction due to inclination will only appear at second order. Therefore, one would expect that the neglected inclination terms will be a smaller source of error than the neglected eccentricity terms. Indeed, Nesvorný & Vokrouhlický (2014) did not find errors larger than 20% in the synodic chopping formula (away from resonance) until the mutual inclination became large. In deriving the TTV formula, we began from the disturbing function, which assumes that the interaction between the two planets can be written as a converging series in the semimajor axis ratio α. This means that the formulae will not work for co-orbital planets (see Vokrouhlický & Nesvorný (2014) for an analysis of that case).
Additionally, as α → 1, the Laplace coefficients converge less quickly, and so higher order eccentricity and inclination terms, ignored in the derivation, are potentially more important. A different issue is that as α → 1, mean motion resonances are densely spaced (for an arbitrary period ratio, the system is more likely to be close to a resonance if α is closer to unity). In principle then the (neglected) effects of eccentricity could be more important for closer pairs of planets due to the effect of the small denominators discussed above. Note, however, again the special case of the synodic TTV of the inner planet. At j = 1 there are no possible small denominators in the TTV of the inner planet, since all resonances (except the 1:1) require jR ≥ 2. In Figure 3, we show the error in the formula for the synodic TTV, for the inner planet, across a wide range of orbital separation and eccentricities. In making this plot, 500 transits of the inner planet were simulated and 10 harmonics were fit to the resulting TTVs. Except at resonance, the error is small, regardless of the eccentricity. This indicates that the neglected eccentricity terms are in fact small across a wide range of α. Note that the error is unbiased, in that it is not typically positive or negative. In deriving the synodic formulae, we assumed that the masses of the planets, compared to that of the host star, are small, so that neglected corrections of higher order in the planet-star mass ratios are small. In Figure 4, we show how the fractional error in the synodic TTV formula with j = 1 for the inner planet grows as we increase the masses of the planets. Some of the configurations tested were unstable, and some were perturbed enough to change the number of transits by more than 1, and those were not fit (though shown in magenta). Note that the widths of resonances grow as the masses of the planets grow, and so more systems are in resonance, where the harmonic analysis will not work, than in the fiducial case shown in Figure 3.
In the case of Jupiter mass planets, there are configurations where the chopping formula may be too approximate: even outside of resonance, between the 3:2 and the 2:1, the error can be significant. As Nesvorný & Vokrouhlický (2014) point out, in these cases the chopping formula primarily provides motivation and understanding as to how mass measurements from TTVs arise. Lastly, we assume that the coefficients of the synodic TTV, which depend on α, can be treated as fixed in time at their average values. It is possible, especially in the near resonant case where the TTV timescale is long, that the observations will cover only a small fraction of the TTV cycle. In this case, the observed semimajor axis ratio may be different than the average one. The fractional error resulting from using the “incorrect” value of α in the formula for f^{(j)} will be of order δα (∂f^{(j)}/∂α)/f^{(j)}. Although δα is small, of the same order as the TTV compared to the orbital period, the derivative of the coefficient can be large near resonance. Therefore, if one observes only a small fraction of the super-period of a near resonant system, there will be errors relating to an incorrect estimate of the average value of α. These errors are larger for the terms in the synodic sum aliased with the resonant frequency. The long-period oscillations in the eccentricity and inclinations due to secular effects will not be resolved, since the secular timescale is in general very long compared to observational timescales (for example, the longitudes of pericenter will change on secular timescales). The synodic chopping formula, which is independent of eccentricity and inclination, will therefore have slowly varying error terms, but as long as the eccentricities and inclination remain small over the secular timescale the error terms will remain small as well.

### 4.3. Comparison to the Lithwick et al. formula with Zfree = 0

How do the synodic chopping formulae compare with the Lithwick et al. formulae? First, the Lithwick et al.
formulae apply for pairs of planets near a first order mean motion resonance, and they include the contribution to the TTV at the frequencies jR n2 for the inner planet and (jR − 1) n1 for the outer planet. However, since only the resonant terms were used in deriving the Lithwick et al. formulae, some synodic effects which get aliased to the resonant frequencies were neglected (these neglected terms are small amplitude near resonance since they do not have the small denominator Δ). On the other hand, the synodic formula applies for pairs both far from and near mean motion resonances, and encompasses the effects of conjunctions at every harmonic of the synodic frequency in the TTVs of the inner planet and in the TTVs of the outer planet. However, the Lithwick et al. formulae include the approximate first order eccentricity correction for pairs near a mean motion resonance, while the synodic formulae only hold at zeroth order in eccentricity. We now compare the two sets of formulae in the regime where they should agree: near resonance (where the synodic chopping terms without the small denominator, ignored by Lithwick et al. (2012), are negligible), with Zfree = 0, and for the correct value of j chosen in the synodic sum (j = jR for the inner planet and j = jR − 1 for the outer planet). Our expectation is borne out by a numerical comparison between the two, the results of which are shown in Figure 5. In short, the Lithwick et al. expression is an excellent approximation to the TTV of systems near first order mean motion resonances, while further away from resonance it becomes a worse approximation to the chopping signal with j = jR for the inner planet and j = jR − 1 for the outer planet. Correspondingly, the synodic formulae will be a worse approximation at these specific values of j for systems closer to a first order resonance because of eccentricity effects, since we neglect the first-order eccentricity correction - see Section 4.2.

### 4.4. Measurement precision

One can use the synodic TTV formulae to estimate the precision on the mass measurement of a perturbing planet.
For example, the expected mass precision (of the outer planet) due to synodic chopping at harmonic $j$ in the inner planet is given by

$$\sigma_{M_2}=\frac{2\pi\,\sigma_{t_1} M_*}{\sqrt{(N_{\rm trans}-N_{\rm param})/2}\;P_1\,|f_1^{(j)}(\alpha)|}, \qquad (25)$$

where $\sigma_{t_1}$ is the timing precision of the inner planet, $N_{\rm param}$ is the number of model parameters, and $N_{\rm trans}$ is the total number of transits observed. This formula assumes that the phase of the sine function is adequately sampled so that its RMS value can be assumed, that there is no uncertainty on the mass of the star, $P_1$, or $\alpha$, and that there is no covariance with other harmonics being fit. This formula also assumes that transits are observed continuously over the full super-period.

## 5. Applications

In this section we apply the chopping and resonant formulae to several planetary systems which have been analyzed using a full dynamical model involving numerical integration of the gravitational equations of motion. We instead analyze these systems using the synodic formulae for TTVs, in combination with the component of the Lithwick et al. formula for near resonant systems. As a result, we can measure planetary masses for these systems without full numerical analysis and without the complication of the mass-free eccentricity degeneracy inherent in the Lithwick et al. formula alone. This allows us to demonstrate empirically the validity of the synodic TTV signal and to strengthen our understanding of what information in a TTV signal leads to mass measurements. For a system far from a first order resonance, we would recommend using the synodic formulae for the terms (for which ), and using the second component of the Lithwick et al. formula for the term for the resonant for the inner planet and for the outer planet. If the system is close to resonance, the synodic chopping formulae can be used for values of not aliased with the resonant frequencies.
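The precision estimate above follows from the error on a fitted sinusoid amplitude, $\sigma_{t_1}\sqrt{2/(N_{\rm trans}-N_{\rm param})}$, divided by the chopping amplitude per unit mass. A minimal sketch follows; the function name and argument conventions are ours, and `f_coeff` stands in for the chopping coefficient $f_1^{(j)}(\alpha)$, whose explicit form is given earlier in the paper and is not reproduced here.

```python
import math

def mass_precision(sigma_t1, m_star, p1, f_coeff, n_trans, n_param):
    """Estimate the 1-sigma precision on the perturbing planet's mass from
    the synodic chopping signal in the inner planet's transit times.

    A sinusoid fit to n_trans points with per-point error sigma_t1 has an
    amplitude uncertainty sigma_t1 * sqrt(2 / (n_trans - n_param)); the
    chopping TTV amplitude is ~ (M2/M*) * (P1 / 2 pi) * |f|, so inverting
    gives the mass uncertainty."""
    amp_err = sigma_t1 * math.sqrt(2.0 / (n_trans - n_param))
    return amp_err * m_star * 2.0 * math.pi / (p1 * abs(f_coeff))
```

The estimate scales linearly with the timing precision and inversely with the orbital period and the chopping coefficient, consistent with the discussion in the text.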
Note again that the resonant term and the synodic terms aliased with the resonant term will in general have different phases, unless the reference direction is chosen so that at transit. Additionally, while only the sinusoidal component of the harmonics should have nonzero amplitude, the resonant component has in general both sine and cosine components due to the complex quantity . ### 5.1. PH3/Kepler-289 The Kepler-289 (PH3/KOI-1353/KIC 7303287) planetary system was identified by the Kepler pipeline (Borucki et al., 2011; Batalha et al., 2013; Tenenbaum et al., 2013) and by the Planet Hunters crowd-sourced project (Fischer et al., 2012). This system consists of three planets with orbital periods near 35 days (Kepler-289b/PH3b), 66 days (PH3c), and 126 days (PH3d); each adjacent pair of planets is close to a period ratio of 1:1.9 (Schmitt et al., 2014). The outer two planets both display large amplitude transit-timing variations with a timescale of the super-period of the nearby 2:1 resonance, and the middle planet shows a strong chopping signal caused by the outermost planet. The masses of the outer two planets were measured by the transit timing variations through a full numerical analysis of the (assumed coplanar) system, performed by EA, as part of Schmitt et al. (2014). The inner planet does not have significantly detected TTVs, and does not significantly affect the outer two planets’ transit times, resulting in an upper limit on its mass only. In this work, we returned to the published transit times and uncertainties of Schmitt et al. (2014) and analyzed them using the harmonic-fitting approach described in section 2 in order to measure the masses of PH3c and PH3d. Figure 6 shows our initial harmonic fit to the data; two harmonics are required for PH3c, while only one is required for 3d. We ignore the innermost planet (PH3b) in the analysis. We used the component of the Lithwick et al. 
formula with for modeling the component of the outer planet’s TTVs and for the component of the middle planet’s TTVs. We included the synodic chopping signal with for the middle planet PH3c, and the component for the outer planet (which encompasses the contribution from the Lithwick et al. formula). As these three terms together only constrain a linear combination of the free eccentricities of the planets (), we also enforced as a prior the Hill stability criterion (Gladman, 1993) to prevent the eccentricities from growing too large. We added a systematic error parameter, , in quadrature to the measured timing errors, such that larger values of are penalized in the likelihood function while smaller values of require a closer fit to the transit times in order to have a high likelihood. We carried out an affine-invariant Markov chain analysis (Foreman-Mackey et al., 2013) with eleven free parameters: the ephemerides (, ), eccentricity vectors (, ), and masses of each planet, plus . Figure 7 shows the confidence limits in the planet masses from the harmonic analysis with and without the constraint from the synodic chopping signal, as well as the confidence limits from the full dynamical analysis of Schmitt et al. (2014). As explained in Lithwick et al. (2012), the 2:1 resonant signal constrains a combination of the mass ratios of the planets and the free eccentricity, . Without the synodic chopping signal, our analysis shows a banana-like degeneracy between the two planet masses which is due to the trade-off between their masses and the free eccentricity (Figure 7, light blue), giving and . When the chopping signal in the TTVs of the middle planet is included, the mass ratio of the outer planet becomes constrained; this then breaks the mass/free eccentricity degeneracy, and allows the mass of the inner planet to be determined as well. The derived error ellipse is similar to that from the full dynamical analysis: Schmitt et al. 
(2014) report masses of and , while the harmonic analysis yields and . This analysis demonstrates the power of the chopping signal in constraining planetary masses near a first-order mean motion resonance. In principle, the chopping components (for the outer planet) and chopping components also provide independent constraints on the planetary masses, although they are smaller in amplitude and not detected in this case.

### 5.2. Kepler-11d/e

The Kepler-11 system (Lissauer et al., 2011) is a system of six transiting planets. A full dynamical analysis has been carried out for this system, giving constraints on the masses of all planets (Migaszewski et al., 2013; Lissauer et al., 2013). Several of the planets, despite being only a few Earth masses, have low densities which require H/He atmospheres; this result is puzzling in light of core-accretion theory, which would not predict that such low-mass planets could accumulate substantial gaseous envelopes. Here we validate the existing mass measurements of Kepler-11d and Kepler-11e and show that their mass constraints largely result from the chopping TTV signal. Kepler-11d and Kepler-11e are two of the three most massive planets in the Kepler-11 system, with periods near 23 and 32 days, respectively, in close proximity to the 3:2 commensurability. Each of these planets has transit timing variations that are dominated by the other; thus they can be dynamically 'decoupled' from the rest of the planets and treated as a two-planet system. Note, however, that the decoupling is "one-way": Kepler-11e affects the TTVs of Kepler-11f, and hence there is more information with regards to the masses of d and e to be gained by fitting the entire system instead of treating the (d,e) pair in isolation. Figure 8 shows the harmonic fitting results for Kepler-11d/e using the transit times due to Jason Rowe presented in Lissauer et al. (2013), to be compared to the dynamical constraints in Table 7 of that paper.
In black circles, we show the actual TTV measurements for each planet, with corresponding uncertainties. In this case we simply fit for the harmonics of the TTV with the frequency of the companion planet up to (dotted lines), which resulted in excellent fits for both; the synodic chopping signal from the harmonic fit is also plotted (in red). Note that near the 3:2 mean motion resonance, the synodic signal is not aliased with the resonant frequencies, and so we expect the chopping signal to be well approximated by our formula, as discussed in Section 4.2. We over-plot the predicted synodic chopping signal based on the best-fit mass ratios from Table 7 in Lissauer et al. (2013), shown as the blue curves. This shows that the chopping signal is detected for both planets, and that it is consistent with the chopping signal predicted by the full dynamical analysis. Next, we carried out a Markov chain analysis for these planets including the Lithwick et al. resonance formulae with , relevant to the 3:2 commensurability (Section 3), as well as both the inner and outer chopping formulae, summed to . Figure 9 shows the constraints on the masses of the two planets. The black curve shows the 1 confidence limit from the dynamical analysis in Lissauer et al. (2013). In dark[light] blue is the 1[2] confidence limit for our analysis with the resonant and full chopping signals included for both planets; this is consistent with the dynamical analysis at the level, albeit with a larger uncertainty (recall again that the TTVs of planet Kepler-11f are affected by Kepler-11e, and so there is more information as to the (d,e) subsystem available when fitting the whole system). Another interesting byproduct of the chopping signal, when used in conjunction with the Lithwick et al. formula, is that it allows for a measurement of the quantity . In the case of Kepler-11d/e, we find a value of .
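The Markov chain analyses in this and the previous subsection maximize a likelihood in which a systematic error parameter is added in quadrature to the measured timing errors (Section 5.1). A minimal sketch of such a Gaussian log-likelihood (the function name and conventions are ours):

```python
import numpy as np

def log_likelihood(residuals, sigma_meas, sigma_sys):
    """Gaussian log-likelihood with a systematic error term added in
    quadrature to the measured timing errors. The -0.5*log(variance)
    normalization penalizes needlessly large sigma_sys, while a small
    sigma_sys forces the model to fit the transit times closely."""
    var = sigma_meas**2 + sigma_sys**2
    return -0.5 * np.sum(residuals**2 / var + np.log(2.0 * np.pi * var))
```

With perfect residuals the likelihood favors small `sigma_sys`; with large residuals it favors inflating the error bars, which is the trade-off described in the text.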
Since dissipation of eccentricities first damps the free eccentricities, it is interesting that the value here, though modest, is distinctly nonzero. In general, because the chopping amplitude function is smaller in magnitude for larger values of , the components with larger may not be measurable. However, if they are, they can be used to provide additional constraints on the masses as consistency checks. For example, in the case of Kepler-11d, the chopping signals with all independently constrain the mass of planet e, while the chopping signals of present in the TTVs of e all independently constrain the mass of planet d. For planet d, the inferred mass (assuming in this case a one solar mass star) is or , compared to in Lissauer et al. (2013). For planet e the inferred mass is or , compared to in Lissauer et al. (2013). Given that these estimates agree at the level, this indicates that the dynamical analysis yields masses that are consistent with the chopping amplitudes.

### 5.3. Kepler-9

In some cases, the Lithwick et al. formula alone can be used to determine the masses of the planets. In practice this requires that the amplitudes and phases of the TTVs be measured with high accuracy. The first system with detected transit timing variations was the Kepler-9 system (Holman et al., 2010), which consists of three planets, the outer two of which are close to the 2:1 period commensurability with periods near 19 and 39 days, respectively. The outer pair is dynamically decoupled from the inner planet in that the inner planet does not measurably affect their TTVs. A recent dynamical analysis of Kepler-9 shows that the transit timing variations yield masses of the outer two planets of and (Dreizler & Ofir, 2014b). We carried out a harmonic fit to the transit times for each planet, and find an excellent fit to planet Kepler-9c[d]'s transit times with four[six] harmonics of the period of Kepler-9d[c].
Figure 10 shows the transit timing variations for both planets along with the harmonic fit. The synodic chopping () amplitude of the inner planet matches the phase and amplitude predicted based on the mass inferred by Dreizler & Ofir (2014b). We carried out a Markov chain analysis on the set of transit times published by Dreizler & Ofir (2014b) for this system with the 2:1 resonant term and chopping, as well as additional harmonics with amplitudes that were not constrained by the physical parameters of the model: for the inner planet we added harmonics at , and , while for the outer planet we added harmonics at , and to our model. We find similar masses as Dreizler & Ofir (2014b) and , albeit with larger uncertainties. When we remove the constraint on the amplitude of the chopping signal, we find comparable masses and uncertainties; we suspect that the reason for this is that in this case the amplitudes of the TTV are measured with sufficient precision that the and terms that occur in the first order resonant TTV formula can be distinguished in amplitude. The imaginary component of the TTV is significant for both planets, so in this case one can break the degeneracy between the planet masses with just the resonant term. The amplitudes for the resonant term for both planets have four (well-measured) constraints, the real and imaginary amplitude for each planet, while there are four unknowns: the mass ratios for each planet, and the real and imaginary component of . This gives a unique solution, so the masses are well determined without the need for the chopping constraint. Although chopping is not required to determine the mass of the planets, we can show that in this case it is consistent with the masses inferred from the resonant terms alone. The measured mass of the outer planet predicts the amplitude of the synodic chopping signal of the inner planet, which only has dependence. 
In Figure 11 we show that the measured sine amplitude of the inner chopping signal is consistent to with the predicted amplitude. The amplitude of the synodic chopping term gives a mass of the outer planet of , while the mass estimated from the resonant term is . In addition, the amplitude of the cosine term is consistent with zero at , as it should be for the synodic chopping term. When radial velocities are included in the analysis, a larger mass is derived for the planets, yielding about 55 for the outer planet (Dreizler & Ofir, 2014b). This is inconsistent with the chopping signal, at about the 3 level (see green point in Figure 11) and inconsistent with the masses derived from resonant TTV alone. This discrepancy indicates that there is still some tension between the TTV data and the RV data, possibly due to RV jitter, additional planets (causing perturbations of the RV velocities), systematic errors in the transit times (perhaps due to star spot crossings), or, perhaps, simply statistical fluctuations. The fact that the resonant and chopping terms give similar estimates of the outer planet mass increases our confidence that the transit timing analysis is not strongly affected by additional planets in this system. For this system we also tried to use the amplitude of the chopping signal for the outer planet to constrain the mass of the inner planet. However, the amplitude is much too large to be due to chopping, and instead we believe is due to the 2:4 resonant term (which is of order , but these planets are so close to 1:2, that the term compensates for this). ### 5.4. Mass precision of KOI-872c For planets that are not near a first-order mean-motion resonance (), the synodic chopping amplitude can provide the strongest constraint upon the planet masses. If there are a large number of transits, then the signal-to-noise of the planet mass ratio can be estimated with equation (25). 
The first non-transiting planet found with transit timing variations (with a unique identification of the perturbing planet’s period) occurred in the system KOI-872 (Nesvorný et al., 2012), in which the period ratio of the two planets is close to 5:3. We carried out a harmonic analysis of the transit times of the inner planet with harmonics up to , starting with the period of the perturbing planet, KOI-872c, from the published dynamical analysis. The synodic chopping signal, the coefficient of the term, was measured to have an amplitude of days for the sinusoidal component (detected at 32), and for the cosine component (consistent with zero at ). Applying the synodic chopping amplitude formula (Equation 18, with ), we find a mass ratio of: . This compares favorably with the mass ratio measured from the full dynamical analysis (Nesvorný et al., 2012) of: , with a similar magnitude uncertainty. Figure 12 shows the harmonic fit to the transit timing variations of KOI-872, with the synodic chopping signal shown with red points. The predicted synodic chopping based upon the dynamical solution is shown in blue, demonstrating agreement with the derived signal (albeit discrepant by ). The expected precision in the planet mass is given by equation 25; for KOI-872 there are 37 transit times with a typical precision of 0.0015 days, 13 model parameters, giving an expected mass precision of , which matches well that found with the harmonic fits, and is close to the uncertainty found with the full dynamical analysis. This indicates that the timing precision along with the total number of transits observed and the number of free parameters fit can be used to forecast the mass measurement precision in the non-resonant case. A caveat is that this formula applies when chopping dominates the mass uncertainty, which may not be true for large eccentricities of the planets which can cause higher order resonant terms to play a more important role. ### 5.5. 
Predicting planetary mass precision inferred from the synodic TTV The chopping effect potentially allows for mass measurements of planets with TTVs observed by a TESS-like mission because the synodic TTV, when unaliased with a resonant frequency, is a short-period effect. However, the amplitude of the effect is also considerably smaller than that due to the long-timescale resonant variations, and so more transits are needed to build up signal to noise. Here we consider a system with 2 planets with a period ratio of 1.5, with an inner orbital period of 20 days. We assume that the system has been observed for 1 year (the baseline TESS will have for stars near the celestial poles), and assume that the timing uncertainty on the transit times of the inner planet are 1 minute. In this case, and the formula in Equation 25 yields, for 10 free parameters and assuming a solar mass star, a mass precision of the outer planet based on the chopping of the inner planet of . The mass precision on the inner planet due to the chopping in the outer planet is . Wider pairs will have larger mass uncertainty since the function is smaller. For example, this same pair moved to the 2:1 resonance will allow an mass uncertainty for the outer planet. Note that the mass uncertainty scales like the inverse of the square of the orbital period, and, given an orbital period, like the inverse of the square of the observation time. Therefore, for a longer mission like PLATO, with an observational baseline of 2 or 3 years, the uncertainty on the mass of the perturbing planet will be smaller by a factor of or , respectively, compared with that estimated for 1 year of data above. ## 6. Conclusions In this paper we have written down expressions for transit-timing variations in the plane-parallel limit, in the limit of zero free eccentricity of both planets, outside of resonance, and for timescales shorter than the secular timescale. 
Despite these assumptions, these terms have important consequences for the analysis of transit-timing variations of multi-planet systems: 1) the TTVs have a dependence on , for to ; 2) the amplitude of these terms depends only on the mass ratio of the perturbing planet to the star and the semi-major axis ratio of the planets. Although other papers have presented formulae in the limit of zero eccentricity (Agol et al., 2005; Nesvorný & Vokrouhlický, 2014), this is the first time that the coefficients for each have been written down explicitly. This allows for harmonic analysis of transit times in terms of the period and phase of the perturbing planet; the coefficients of each harmonic can then be related to the planet mass ratios and eccentricities via these formulae (except for the harmonics affected by resonant terms). When the period and phase of the perturbing planet are known, fitting for the harmonic coefficients (and the mean ephemeris) is a linear regression problem; thus a global solution can be found by simple matrix inversion, yielding a unique solution for the coefficients. This means that there cannot be multi-modal degeneracies in the derived masses of the planets. The amplitudes and uncertainties of these coefficients can then be translated into planet masses using the formulae given above. Alternatively, the coefficients can be computed from the physical properties of the planets (masses, eccentricity vectors, and ephemerides), and then the model can be fit to the data with a non-linear optimization or Markov chain, as we have carried out for several of the examples presented above. In particular, we have highlighted the importance of the 'synodic chopping' signal of the inner planet, which has a weak dependence on the eccentricities of the planets and is not aliased with any first-order resonant terms. The formula for this term in particular has very little error even for very compact orbits.
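The linear-regression property noted above can be sketched as follows: build a design matrix from a linear ephemeris plus sine/cosine harmonics of the perturber's frequency and solve by least squares. This is our own minimal function (as an approximation, the harmonic phases are evaluated at the observed transit times rather than at the fitted linear ephemeris, which agrees to first order in the TTV amplitude).

```python
import numpy as np

def fit_harmonics(transit_times, perturber_period, jmax):
    """Fit transit times with a linear ephemeris plus sine/cosine harmonics
    of the perturbing planet's orbital frequency. Because the model is linear
    in the coefficients, least squares gives the unique global solution.
    Returns (t0, mean_period, coefficients of shape (jmax, 2) = [sin, cos])."""
    n = np.arange(len(transit_times))
    omega = 2.0 * np.pi / perturber_period
    # Design matrix columns: 1, n, then sin/cos of each harmonic j = 1..jmax.
    cols = [np.ones(len(n)), n.astype(float)]
    for j in range(1, jmax + 1):
        cols.append(np.sin(j * omega * transit_times))
        cols.append(np.cos(j * omega * transit_times))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, transit_times, rcond=None)
    return coef[0], coef[1], coef[2:].reshape(jmax, 2)
```

Because the solution is a single matrix inversion, there are no multi-modal degeneracies in the fitted coefficients, as stated in the text.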
For the outer planet, the chopping signal can be used if the period ratio is distant from the 2:1 resonance, though if the transit times are precisely measured the synodic TTV resulting from values of may be used to constrain the mass of the inner planet instead. In general, if the system is near or in a first order resonance, with , only the synodic TTV component with (inner planet) and (outer planet) will be altered; the formulae with not equal to these values will apply. Other limitations of this formula are that it cannot be directly used when more than one planet strongly perturbs the transiting planet, that it breaks down for very massive planets () with period ratios less than 2, and that as the period ratio approaches unity the error due to neglected eccentricity terms can become more important (though this does not apply to the term in the TTVs of the inner planet). Following Nesvorný & Vokrouhlický (2014), we conclude that in these regimes the chopping formula provides insight into mass measurements with TTVs even though the analytic formulae may be too approximate to be applied directly. We have applied these formulae to existing transiting planet systems that have been analyzed in prior publications, recovering the mass measurements, but using harmonic fits and the analytic chopping/resonant expressions rather than full N-body integrations. In some cases (KOI-872) this shows that the primary constraint on the mass of the planets comes from the synodic chopping signal. In other cases (KOI-1353c/d, Kepler-11d/e) the primary mass constraint comes from the combination of the first-order resonant signal and the synodic chopping of the inner planet. In the case of Kepler-9, the primary constraint comes from the resonant signal, although the chopping component gives a consistent constraint on the mass of the outer planet.
In future applications, we expect that these formulae can be used for rapid fitting and estimation of transit timing variations, for rapid estimation of planet masses, for initialization of N-body integrations, for determining the requisite timing precision to measure planet masses with future follow-up observations of multi-planet transiting systems, and for forecasting transit timing variations. It should be possible to apply these formulae in systems of more than two planets using linear combinations of the TTVs induced by more than one perturbing planet.

We would like to thank Dan Fabrycky, Eric Ford, Matt Holman, Daniel Jontof-Hutter, Jack Lissauer, Jason Steffen, and the Kepler TTV group for helpful conversations. E.A. acknowledges funding by NSF Career Grant AST 0645416, the NASA Astrobiology Institute's Virtual Planetary Laboratory, supported by NASA under cooperative agreement NNH05ZDA001C, and NASA Origins of Solar Systems Grant 12-OSS12-0011. K.M.D. acknowledges support from a JCPA fellowship at Caltech. Work by K.M.D. was supported by NASA under grant NNX09AB28G from the Kepler Participating Scientist Program and grants NNX09AB33G and NNX13A124G under the Origins program.

## Appendix A. Derivation of synodic TTV

Here we provide an alternate derivation of the synodic TTV to zeroth order in planetary eccentricities. We follow the method first described in Nesvorný & Morbidelli (2008), and work at zeroth order in eccentricities and inclinations and first order in the parameter . As mentioned in the main text, the synodic TTV has been derived before, by Agol et al. (2005) and Nesvorný & Vokrouhlický (2014). These, and the derivation below, agree in the limit that the reflex motion of the star can be ignored (after correcting a typo in Nesvorný & Vokrouhlický (2014)). For the outer planet, the reflex motion of the star dominates at large period ratios. This effect is accounted for in the equations in the appendix of Agol et al.
(2005) and in the alternate derivation in the appendix here, but is missing from the terms in Nesvorný & Vokrouhlický (2014). (The Agol et al. (2005) calculation utilized heliocentric coordinates, while that of Nesvorný & Vokrouhlický (2014) used Jacobi coordinates and thus computed the TTVs of the outer planet relative to the center of mass, not relative to the star.) Here we also employ Jacobi coordinates but correct for the reflex contribution afterwards.

TTVs are determined by taking an observed set of transit times and performing a linear fit to them, such that the transit timing variation can be written as

$$\delta t_n = t_n - \left(t_0 + n\bar{P}\right) \qquad \mathrm{(A1)}$$

The slope $\bar{P}$ is the average time between successive transits during the observational baseline, or the average period over this timespan. In this sense, transit timing variations are the deviations from the transit times predicted by the average of the true (non-Keplerian) orbit. Transits occur when the true longitude $\theta$ of the transiting planet is equal to a particular value. The expression for $\theta$ is, to first order in eccentricity,

$$\theta = f + \varpi = M + 2e\sin M + \varpi + O(e^2) = \lambda + 2e\sin(\lambda - \varpi) + O(e^2) \qquad \mathrm{(A2)}$$

where $e$ is the eccentricity, $f$ is the true anomaly, $\varpi$ the longitude of periastron, $M$ the mean anomaly, and $\lambda$ the mean longitude of the planet. Because we will perform the calculation using a Hamiltonian formalism, we now switch to canonical coordinates. Our variables will be the momentum–coordinate pairs

$$\left(\Lambda_i = m_i\sqrt{G M_\star a_i},\ \lambda_i\right), \qquad \left(x_i = \sqrt{2P_i}\cos p_i,\ y_i = \sqrt{2P_i}\sin p_i\right)$$

where:

$$P_i = \frac{\Lambda_i e_i^2}{2} + O(e_i^4), \qquad p_i = -\varpi_i$$

where for the two planets $i = 1, 2$, $m_i$ is the mass of the planet, $M_\star$ the mass of the star, and all orbital elements are Jacobi elements. In each pair, the first variable is the canonical momentum and the second its conjugate coordinate.
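Equation A1 amounts to detrending the observed transit times with a linear fit; a minimal sketch (our own function, using a least-squares straight-line fit for the mean ephemeris):

```python
import numpy as np

def ttvs(transit_times):
    """Compute TTVs as the residuals of a linear fit t_n = t0 + n*Pbar to
    the observed transit times (Equation A1): Pbar is the average period
    over the observational baseline, and the residuals are the timing
    variations relative to the mean ephemeris."""
    n = np.arange(len(transit_times))
    pbar, t0 = np.polyfit(n, transit_times, 1)
    return transit_times - (t0 + n * pbar)
```

Because the fit includes an intercept, the residuals sum to zero by construction, so the TTVs are measured relative to the average orbit, as described in the text.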
In terms of this canonical set, Equation A2 for $\theta$ becomes

$$\theta = \lambda + \frac{2}{\sqrt{\Lambda}}\left(x\sin\lambda + y\cos\lambda\right) \qquad \mathrm{(A3)}$$

We will perturb Equation A3 about the averaged orbit, and keep terms only at zeroth order in $x$ and $y$:

$$\delta\theta = \delta\lambda + \frac{2}{\sqrt{\Lambda}}\left(\delta x\sin\lambda + \delta y\cos\lambda\right) \qquad \mathrm{(A4)}$$

where, as we will see, the perturbations $\delta\lambda$, $\delta x$, and $\delta y$ are of order the planet–star mass ratio and independent of eccentricities to lowest order. It is important to note that we define $\delta g = g - \bar{g}$ for any arbitrary function $g$, where the over-bar denotes a time-average. In that case,

$$\delta\theta = -\bar{n}\,\delta t \qquad \mathrm{(A5)}$$

where $\bar{n}$ is the average mean motion and $\delta t$ is the timing variation. Note that 1) we are consistent with the convention that a negative timing variation corresponds to an "early" transit, and 2) we can use the average mean motion because Equation A4 holds only at first order in the perturbations; in the same sense, we can treat quantities without a $\delta$ on the right hand side as averaged variables as well. The question now arises: how do we determine the perturbations $\delta\lambda$, $\delta x$, and $\delta y$ to the average orbit? The Hamiltonian for a system of two planets of mass $m_1$ and $m_2$ orbiting a much more massive star of mass $M_\star$, written in Jacobi coordinates $\mathbf{r}_i$ and momenta $\mathbf{p}_i$, takes the form, to first order in combinations of the planet–star mass ratios, of

$$H = H_0 + H_1, \qquad H_0 = H_{\mathrm{Kepler},1} + H_{\mathrm{Kepler},2}, \qquad H_{\mathrm{Kepler},i} = \frac{p_i^2}{2\tilde{m}_i} - \frac{G\tilde{M}_{i,\star}\tilde{m}_i}{|\mathbf{r}_i|} \qquad \mathrm{(A6)}$$

where $\tilde{m}_i$ and $\tilde{M}_{i,\star}$ denote Jacobi masses. Note that the perturbation $H_1$ takes the functional form of the disturbing function with an exterior perturber (Murray & Dermott, 1999). We set $\tilde{M}_{i,\star} \approx M_\star$ and also ignore the difference between Jacobi and physical masses. In the Keplerian piece $H_0$, this approximation corresponds to a (constant) change in the mean motions of the planets by order the planet–star mass ratio, but since we are interested in TTVs this constant change does not matter. The correction between Jacobi and physical masses in the perturbation generates only higher-order terms, and we ignore these. When expressed in terms of the canonical set given above, the Hamiltonian takes the form:

$$H = H_0(\Lambda_i) + \epsilon_1 H_1(\Lambda_i, P_i, \lambda_i, p_i) + O(\epsilon^2) \qquad \mathrm{(A7)}$$

where e.g.
, , and

$$H_0 = -\frac{\mu_1^2}{2\Lambda_1^2} - \frac{\mu_2^2}{2\Lambda_2^2}$$

$$H_1 = -\frac{\mu_2^2}{\Lambda_2^2}\Bigg[\sum_{j=-\infty}^{\infty} g_{j,0}(\alpha)\cos j(\lambda_1-\lambda_2) + \sum_{j=-\infty}^{\infty} g_{j,27}(\alpha)\sqrt{\frac{2P_1}{\Lambda_1}}\cos\left(j\lambda_2-(j-1)\lambda_1+p_1\right) + \sum_{j=-\infty}^{\infty} g_{j,31}(\alpha)\sqrt{\frac{2P_2}{\Lambda_2}}\cos\left(j\lambda_2-(j-1)\lambda_1+p_2\right)\Bigg] \qquad \mathrm{(A8)}$$

and $g_{j,0}$, $g_{j,27}$, and $g_{j,31}$ are functions of Laplace coefficients with the indirect terms included (Murray & Dermott, 1999). The relevant functions of Laplace coefficients can be written as

$$g_{j,0}(\alpha) = \frac{1}{2} b^j_{1/2}(\alpha) - \delta_{j,1}\,\alpha$$

$$g_{j,27}(\alpha) = \frac{1}{2}\left(-2j - \alpha\frac{d}{d\alpha}\right) b^j_{1/2}(\alpha) + \delta_{j,1}\,\frac{3}{2}\alpha - \delta_{j,-1}\,\frac{1}{2}\alpha$$

$$g_{j,31}(\alpha) = \frac{1}{2}\left(-1 + 2j + \alpha\frac{d}{d\alpha}\right) b^{j-1}_{1/2}(\alpha) - \delta_{j,2}\,2\alpha$$

$$b^j_{1/2}(\alpha) = \frac{1}{\pi}\int_0^{2\pi} \frac{\cos(j\theta)}{\sqrt{1 - 2\alpha\cos\theta + \alpha^2}}\,d\theta. \qquad \mathrm{(A9)}$$

In writing the gravitational potential in this form, we have assumed that $\alpha$ is not too close to unity, in which case the Laplace coefficients converge slowly. For $\alpha$ close to unity, the analogs of $g_{j,27}$ and $g_{j,31}$ appearing with higher powers of eccentricity are larger, and so the neglected terms in Equation A8 become more important. We now convert the variables (Equation A) to the set (Equation A), and also set and . Then the perturbation Hamiltonian takes the form: H1=−μ2Λ
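The Laplace coefficient $b^j_{1/2}(\alpha)$ defined above is straightforward to evaluate numerically; a sketch (our own helper) using uniform quadrature, which for a smooth periodic integrand converges rapidly:

```python
import math

def laplace_b_half(j, alpha, m=2048):
    """Numerically evaluate the Laplace coefficient
    b^j_{1/2}(alpha) = (1/pi) * Int_0^{2pi} cos(j*theta)
                       / sqrt(1 - 2*alpha*cos(theta) + alpha**2) dtheta
    by the periodic trapezoid rule (assumes alpha < 1, i.e. the inner
    orbit lies inside the outer one, so the integrand is nonsingular)."""
    total = 0.0
    for k in range(m):
        theta = 2.0 * math.pi * k / m
        total += math.cos(j * theta) / math.sqrt(
            1.0 - 2.0 * alpha * math.cos(theta) + alpha**2)
    return (total * 2.0 * math.pi / m) / math.pi
```

As sanity checks, $b^0_{1/2}(0) = 2$, $b^j_{1/2}(0) = 0$ for $j \neq 0$, and the coefficients are even in $j$, consistent with the symmetric sums over $j$ in Equation A8.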
https://mathhelpboards.com/threads/trigonometric-inequality.5471/
# Trigonometric inequality

#### MarkFL

Show that:

$$\displaystyle \left( {\sin x + a\cos x} \right)\left( {\sin x + b\cos x} \right) \leq 1 + \left( \frac{a + b}{2} \right)^2$$

#### Albert

left side $$\displaystyle = \left( {\sin x + a\cos x} \right)\left( {\sin x + b\cos x} \right)\leq \sqrt{1+a^2}\times \sqrt{1+b^2} \leq\dfrac{1+a^2+1+b^2}{2}=1+\dfrac{a^2+b^2}{2}$$

Are you sure the right side is correct?

#### MarkFL

Yes, it is correct... it appears you are assuming the two sinusoidal factors are in phase with one another, that is for $a=b$. In this case, your result is equivalent to the one I gave.

#### Albert

If it is correct, then we must prove $\dfrac{a^2+b^2}{2}\leq \left(\dfrac{a+b}{2}\right)^2=\dfrac{a^2+b^2}{4}+ab$ for all $a,b \in \mathbb{R}$,

$\therefore \dfrac{a^2+b^2}{4}\leq ab$ — but what about if $ab<0$? Then it does not fit.

#### MarkFL

It appears you are on the right track here, but have made some algebraic errors.

#### Albert

Sorry, I have made some algebraic errors. I will try to use another approach.

#### MarkFL

Your errors are quite minor, and in fact lead to a much simpler approach than I have.
#### Albert ##### Well-known member I think I should take a rest, and have a cup of tea or coffee #### Albert ##### Well-known member for some x, and a, b: if left side $\leq 0$ then it holds naturally; now we assume both sides are positive. if a=b then the original inequality holds; if a>b then: $1+b^2\leq left \,\, side \leq 1+a^2$ and $1+b^2\leq right \,\, side \leq 1+a^2$; if a<b then: $1+a^2\leq left \,\, side \leq 1+b^2$ and $1+a^2\leq right \,\, side \leq 1+b^2$ ---------- Last edited: #### anemone ##### MHB POTW Director Staff member My solution: I first expand the LHS of the inequality and get: $$\displaystyle ( {\sin x + a\cos x} )( {\sin x + b\cos x})=\sin^2 x+(a+b)\sin x \cos x+ab\cos^2 x$$ $$\displaystyle =(1-\cos^2 x)+(a+b)\sin x \cos x+ab\cos^2 x$$ $$\displaystyle =\frac{ab+1}{2}+\left(\frac{a+b}{2}\right)\sin 2x+\left(\frac{ab-1}{2}\right)\cos 2x$$ Next, applying the Cauchy-Schwarz Inequality to the part $$\displaystyle \left(\frac{a+b}{2}\right)\sin 2x+\left(\frac{ab-1}{2}\right)\cos 2x$$ yields $$\displaystyle \left(\frac{a+b}{2}\right)\sin 2x+\left(\frac{ab-1}{2}\right)\cos 2x\le\sqrt{\left(\frac{a+b}{2}\right)^2+\left( \frac{ab-1}{2}\right)^2}\cdot\sqrt{\sin^2 2x+\cos^2 2x}$$ $$\displaystyle \le \sqrt{\frac{a^2b^2+a^2+b^2+1}{4}}$$ $$\displaystyle \le \frac{\sqrt{(1+a^2)(1+b^2)}}{2}$$ Also, the AM-GM inequality tells us that $$\displaystyle \frac{(1+a^2)+(1+b^2)}{2}\ge\sqrt{(1+a^2)(1+b^2)}$$ or $$\displaystyle \frac{(1+a^2)+(1+b^2)}{4}\ge\frac{\sqrt{(1+a^2)(1+b^2)}}{2}$$ $$\displaystyle \frac{2+a^2+b^2}{4}\ge\frac{\sqrt{(1+a^2)(1+b^2)}}{2}$$ Finally, by
combining all that we found in the above steps, we can now conclude that $$\displaystyle ( {\sin x + a\cos x} )( {\sin x + b\cos x})$$ $$\displaystyle =\frac{ab+1}{2}+\left(\frac{a+b}{2}\right)\sin 2x+\left(\frac{ab-1}{2}\right)\cos 2x$$ $$\displaystyle \le \frac{2+a^2+b^2}{4}+\frac{ab+1}{2}$$ $$\displaystyle \le \frac{2+a^2+b^2+2ab+2}{4}$$ $$\displaystyle \le \frac{4+a^2+b^2+2ab}{4}$$ $$\displaystyle \le 1+\frac{a^2+b^2+2ab}{4}$$ $$\displaystyle \le 1+\frac{(a+b)^2}{4}$$ $$\displaystyle \le 1+(\frac{a+b}{2})^2$$ (Q.E.D.) #### MarkFL Staff member This is my proof: Let: $$\displaystyle A=\tan^{\small{-1}}(a)$$ $$\displaystyle B=\tan^{\small{-1}}(b)$$ Using a linear combination, we may write the inequality as: $$\displaystyle \sqrt{(1+a^2)(1+b^2)}\sin(x+A)\sin(x+B)\le1+\left(\frac{a+b}{2} \right)^2$$ Let: $$\displaystyle f(x)=\sin(x+A)\sin(x+B)$$ Thus: $$\displaystyle f'(x)=\sin(2x+A+B)$$ $$\displaystyle f''(x)=2\cos(2x+A+B)$$ Then $f(x)$ has its maxima for: $$\displaystyle x=\frac{(2k+1)\pi-(A+B)}{2}$$ where $$\displaystyle k\in\mathbb Z$$ We then find: $$\displaystyle f\left(\frac{(2k+1)\pi-(A+B)}{2} \right)=\sin\left(\frac{(2k+1)\pi-(A+B)}{2}+A \right)\sin\left(\frac{(2k+1)\pi-(A+B)}{2}+B \right)=$$ $$\displaystyle \sin\left(\frac{(2k+1)\pi+A-B}{2} \right)\sin\left(\frac{(2k+1)\pi-A+B}{2} \right)=$$ $$\displaystyle \frac{\cos(A-B)-\cos((2k+1)\pi)}{2}=\frac{\cos(A-B)+1}{2}=$$ $$\displaystyle \frac{\cos(A)\cos(B)+\sin(A)\sin(B)+1}{2}=$$ $$\displaystyle \frac{1+ab+\sqrt{(1+a^2)(1+b^2)}}{2\sqrt{(1+a^2)(1+b^2)}}$$ Now, we need only show: $$\displaystyle \sqrt{(1+a^2)(1+b^2)}f\left(\frac{(2k+1)\pi-(A+B)}{2} \right)\le1+\left(\frac{a+b}{2} \right)^2$$ $$\displaystyle \frac{1+ab+\sqrt{(1+a^2)(1+b^2)}}{2}\le1+\left( \frac{a+b}{2} \right)^2$$ $$\displaystyle 2+2ab+2\sqrt{(1+a^2)(1+b^2)}\le4+a^2+2ab+b^2$$ $$\displaystyle 2\sqrt{(1+a^2)(1+b^2)}\le2+a^2+b^2$$ $$\displaystyle 4a^2b^2+4a^2+4b^2+4\le a^4+2a^2b^2+4a^2+b^4+4b^2+4$$ $$\displaystyle 2a^2b^2\le a^4+b^4$$ 
$$\displaystyle 0\le(a^2-b^2)^2$$
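As a quick sanity check on the inequality proved above (not part of the original thread), one can sample many $x$ values for many random $(a, b)$ pairs; a small numpy sketch:

```python
import numpy as np

# check (sin x + a cos x)(sin x + b cos x) <= 1 + ((a+b)/2)^2 numerically
rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 10_001)
for _ in range(500):
    a, b = rng.uniform(-10.0, 10.0, size=2)
    lhs = (np.sin(x) + a * np.cos(x)) * (np.sin(x) + b * np.cos(x))
    rhs = 1 + ((a + b) / 2) ** 2
    assert lhs.max() <= rhs + 1e-9   # small tolerance for float rounding
print("inequality holds on all sampled (x, a, b)")
```

Equality is approached when $a^2 = b^2$, matching the final step $0\le(a^2-b^2)^2$ of the proof.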
2021-10-17 12:07:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8820399045944214, "perplexity": 4282.553612782999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00556.warc.gz"}
http://www.lofoya.com/Solved/1080/in-section-a-how-many-students-failed-in-only-one-course
# Moderate Bar Charts Solved Question - Data Interpretation Discussion

Common Information: Answer the questions on the basis of the information given below. The $2006$ batch of a premier B-school in India fared poorly in the three courses namely Statistics, Economics and Business Maths in their first semester examinations. The batch was divided into four sections $A$, $B$, $C$ and $D$ and every student in the batch wrote the examination on each of the three mentioned courses. The following bar graph provides information about the number of students who failed in each of the courses in the four sections. It also provides information about the total number of students and the number of students who failed in two courses in each of the four sections. No student failed in all the three courses.

Q. Common Information Question: 1/5 In section $A$, how many students failed in only one course?

✖ A. 61 ✖ B. 62 ✔ C. 63 ✖ D. 64 ✖ E. 65

Solution: Option (C) is correct. In Section $A$: Number of students who failed in only one course $= 56 + 45 + 60 - (2 \times 49) = 63$

## (4) Comment(s)

Oh.. Got it, overall people who failed will use 1*49 whereas no. of people failed in only one subject will use 2*49; use a Venn diagram to check and verify.

Shouldn't it be 1*49?

Flyingdildo: how many passed or all failed? did any fail in 3 subjects? so many unknowns, how did you assume so many variables?.. 49 failed in two subjects, which two subjects? did 49 fail in Statistics and Business Maths, or Statistics and Economics, or Economics and Business Maths? or did 49 fail in any two subjects?

Flyingdildo: the number of students who failed in 2 courses is 49.. why is it 2x49 and not 3x49?
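The inclusion-exclusion counting behind the solution can be sketched in a few lines (the variable names are my own, not from the page): each per-course failure total counts a two-course failure twice, and nobody failed all three.

```python
# per-course failure counts in Section A, read off the bar graph
f_stats, f_econ, f_maths = 56, 45, 60
failed_two = 49          # students in Section A who failed exactly two courses

# every student failing exactly two courses is counted twice in the sum,
# and no student failed all three, so:
only_one = (f_stats + f_econ + f_maths) - 2 * failed_two
print(only_one)  # → 63
```

This also answers the last comment: the factor is 2 (not 3) because each double-failure appears in exactly two of the three per-course bars.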
2017-09-26 00:18:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3300086259841919, "perplexity": 3952.1355640290217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693866.86/warc/CC-MAIN-20170925235144-20170926015144-00633.warc.gz"}
https://stats.stackexchange.com/questions/362550/algorithm-for-selecting-largest-possible-value-when-observing-online-sequence-o
# Algorithm for selecting largest possible value, when observing online sequence of unknown distribution?

I have been trying to devise an algorithm for a problem that's been bugging me for a while. For some weird reason I haven't been able to find any mention of this problem in the literature, so far. I find that curious, though, which is why I would like to ask for help here... In a nutshell, the problem can be stated as follows: Find the largest possible value in an unsorted sequence of numbers that can only be observed incrementally (online). The numbers in the sequence adhere to some unknown distribution, so you can never be completely sure, if you have already found the largest value. You cannot go back to a previous value and once the current value is selected as the "largest possible", the process is stopped. In pseudo code:

    While there are still numbers in the sequence:
        n = select the next number in the sequence
        do some analysis with the numbers observed so far
        if n meets some criterion for being sufficiently large:
            return n
            break
    Repeat

Also, the algorithm should ideally select the first good candidate, instead of running and waiting endlessly. Does anybody happen to know of an efficient algorithm that solves this problem? Any help would be much appreciated.

• a variant of this problem is called the secretary problem (Wikipedia) (but that's typically with a fixed, known N items you're going to choose from; nonetheless, the Wikipedia link gives a good overview of many variants of the problem – Ben Bolker Aug 17 '18 at 1:10

• Thanks a lot for pointing me to this. It seems, the "unified approach" and the "cardinal payoff variant" mentioned on that page will help me devise a suitable solution. Indeed, I vaguely remember having come across that article by Martin Gardner, but when I was researching my current problem I was at a loss as to what terms to google for. Would you like to post your comment as an answer, so that I can accept it? – Marcus C.
Aug 17 '18 at 8:37
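The secretary-problem connection mentioned in the comments can be illustrated with the classic 1/e stopping rule; this is a sketch for the known-N variant, not a solution to the asker's unknown-distribution setting:

```python
import math
import random

def one_over_e_rule(seq):
    """Classic 1/e rule: observe the first ~n/e values without committing,
    then accept the first later value that beats all of them
    (falling back to the last value if none does)."""
    n = len(seq)
    cutoff = max(1, round(n / math.e))
    best_seen = max(seq[:cutoff])
    for v in seq[cutoff:]:
        if v > best_seen:
            return v
    return seq[-1]

random.seed(0)
n, trials = 100, 20_000
hits = 0
for _ in range(trials):
    seq = [random.random() for _ in range(n)]
    hits += one_over_e_rule(seq) == max(seq)
success_rate = hits / trials
print(success_rate)  # ≈ 0.37, the classical asymptotic success probability
```

The simulation recovers the well-known result that this rule picks the true maximum roughly 37% of the time for large n.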
2019-02-18 12:40:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6687617897987366, "perplexity": 277.3306169386796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486480.6/warc/CC-MAIN-20190218114622-20190218140622-00440.warc.gz"}
http://mathoverflow.net/revisions/102890/list
Steven's and Gjergji's answers point out that there is no bijection; however, possibly this idea should not be put into the rubbish completely. Ideologically, conjugacy classes and irreducible representations are somewhat dual to each other. One instance of this "duality" is Kirillov's orbit method - this is the "infinitesimal version" of the duality: orbits in the Lie algebra are infinitesimal versions of the conjugacy classes. But pay attention: orbits are taken not in the Lie algebra g, but in the dual space g^. This again manifests that irreps and conjugacy classes are dual to each other. However, think of a semi-simple Lie algebra - then g^ and g can be canonically identified... Another instance is the Langlands parametrization of the unitary irreducible representations of a real Lie group G. They are parametrized by conjugacy classes in the Langlands dual group G^L. Again, these are conjugacy classes in G^L, not in G itself. However, for example, GL = GL^L... So maybe one should ask the question: what are the groups such that conjugacy classes and irreps are in some natural bijection, or something like this?

PS Here is some natural map conjugacy classes -> representations. But it does not map to irreducible ones, and it is far from being a bijection in general. A colleague of mine suggested the following: take the vector space of functions on the group which are equal to zero everywhere except on a given conjugacy class "C". We can act on these functions by $f \to g f g^{-1}$ - such an action will preserve this class. So we get some representation. In the case of an abelian group this gives the trivial representation; however, in general it might be non-trivial. It always has a trivial component - the function which is constant on "C". I have not thought yet about how this representation can be further decomposed; maybe it is well-known?
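For finite symmetric groups the two counts do coincide (both equal the number of partitions of n), which can be checked by brute force; a small sketch, not taken from the answer itself:

```python
from itertools import permutations

def conjugacy_classes(n):
    """Brute-force the conjugacy classes of the symmetric group S_n,
    with permutations stored in one-line notation as tuples."""
    elems = list(permutations(range(n)))
    def compose(p, q):                 # (p o q)(i) = p[q[i]]
        return tuple(p[q[i]] for i in range(n))
    def inverse(p):
        inv = [0] * n
        for i, pi in enumerate(p):
            inv[pi] = i
        return tuple(inv)
    classes, seen = [], set()
    for x in elems:
        if x in seen:
            continue
        cls = {compose(compose(g, x), inverse(g)) for g in elems}  # {g x g^-1}
        seen |= cls
        classes.append(cls)
    return classes

# number of conjugacy classes of S_n = number of cycle types = number of
# partitions of n = number of irreducible representations over C
print(len(conjugacy_classes(3)), len(conjugacy_classes(4)))  # → 3 5
```

Of course this only matches the counts; the point of the answers above is that no natural bijection between the two sets exists in general.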
2013-05-26 03:04:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8151207566261292, "perplexity": 440.0941759173996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00033-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/4214655/i-ak-0-text-implies-that-exists-a-1-text-s-t-aa-1-i
# $(I-A)^k=0 \text{ implies that } \exists A^{-1} \text{ s.t. }AA^{-1}=I$ I think this proposition is right. If this is not right, could you provide a counter example? However, this is definitely right for the $$\mathbb{R}^3$$ case. Here is how I proved it. Is it right and if so rigorous? Even if yes, are there more ways to prove this, I'm really curious. I put two ways, I'm not sure if either is right, or maybe one of them is rigorous and the other one is not. Could you please point out flaws in the proofs if the idea is right but it is not rigorously explained? I'm new to proofs and it's summer so I can't annoy my professors. Thanks. $$\\(I-A)^k=0 \\ det(I-A)^k=0 \Rightarrow det(I-A)=0 \\ \text{Therefore there exists a non-zero vector x, }\\ (2) \quad(I-A)x=Ix-Ax=0 \\ \text{There exists non trivial x s.t. } Ax=\lambda_{2}x \\ \text{So now } (I-A)x=x-\lambda_2x=(1-\lambda_2)x \\ \text{Multiplying both sides by (A-I) k times, we get } 0=(1-\lambda)^kx \\ \text{Since x is non-trivial, }1-\lambda=0 \Rightarrow \lambda=1 \\ \therefore \text{A has eigenvalue 1 of multiplicity k, and so A is invertible}$$ • Not about your attempts, but you can binomial-expand $(I-A)^k$, take the $I$ to one side, and factorize $A$ out of the other side, so you instantly have an explicit inverse. For example, $(I-A)^2 = 0$ then $I^2 - 2AI+A^2 = 0$ so $I-2A+A^2 =0$ so $A^2+2A = I$ so $A(A+2) = (A+2)A = I$. Aug 2, 2021 at 1:21 • Note that your attempt by taking the determinant is incorrect since you only show that at least one eigenvalue is 1, not that all eigenvalues are 1. – Eric Aug 2, 2021 at 1:28 • Your attempt assumes that $I-A=0$. That is not what you are given: you are given that there is a $k$ such that $(I-A)^k=0$. Aug 2, 2021 at 1:31 • I don't assume that I-A=0. Do I assume it without noticing? I say that determinant(I-A) is true since $(I-A)^k=\prod_{i=1}^{\k} (I-A)$ and the determinant of the right is zero. 
Then (I-A)v=Lv, (I-A)(I-A)v=(I-A)Lv=L^2*v, you keep going until you have (I-A)^k=0=L^kv. So L must be 0. – Goob Aug 2, 2021 at 2:00 • Don't make the same comment in two different places. I've replied in your identical comment on my answer. You correctly conclude $\lambda=0$ is an eigenvalue of $I-A$, but then what you do with that is either pointless or unwarranted, unless you are asserting that $(I-A)x=0$ for all $x$. Aug 2, 2021 at 2:03 Since $$[A,I]=0$$, we have $$0=(I-A)^k=\sum_{i=0}^k\binom{k}{i}(-1)^iA^i I^{k-i}=I+\sum_{i=1}^k\binom{k}{i}(-1)^iA^i$$ Rearranging gives $$I=-\sum_{i=1}^k\binom{k}{i}(-1)^iA^i=A\left[-\sum_{i=1}^k\binom{k}{i}(-1)^iA^{i-1}\right]$$ as desired. • outside the box, nice! Aug 2, 2021 at 1:52 Best is to denote $$B\colon = I- A$$, so $$A = I-B$$. Now $$B$$ is nilpotent, ( $$B^k = 0$$). Check that $$( I + B + B^2 + \cdots B^{k-1})(I-B) = I$$ With your method: assume that $$A v = 0$$. Then $$(I-A)v = v$$, so by induction, $$(I-A)^n v = v$$ for all $$n\ge 1$$. Now, for $$n=k$$ we get $$(I-A)^k v = v$$. But the LHS is $$0$$, so $$v=0$$, and thus $$A$$ is injective, and therefore has an inverse $$A^{-1}$$. • This is really neat thanks! This is only for left inverse. Not that it matters? – Goob Aug 2, 2021 at 1:54 • @RabbitBoy: For square matrices, if $BA=I$ then $AB=I$. Aug 2, 2021 at 2:10 • @Rabbit Boy: won't matter, but in this case, also the permuted product gives $I$. Aug 2, 2021 at 4:11 Let's try this with minimal polynomial argument: Consider $$p(x)=(1-x)^k$$ . Then $$A$$ is a matrix which satisfies the polynomial $$p(x)$$. Now the minimal polynomial of $$A$$ has to be a divisor of $$p(x)$$( This fact can be easily proven using the divison algorithm). So the minimal polynomial of $$A$$ has to be of the form $$m(x)=(1-x)^r$$ where $$r\leq k$$. So the minimal polynomial of $$A$$ has roots 1 with multiplicity $$r$$. Now the roots of the minimal polynomial of $$A$$ are exactly the eigen values od $$A$$. 
So eigen values of $$A$$ is 1 and this is the only eigen value. So $$0$$ is not an eigen value of $$A$$ hence $$A$$ is invertible. Your (1) is incorrect. You know that $$(I-A)^k$$ is the zero matrix, but you do not know that $$(I-A)x=0$$. There is absolutely no warrant for asserting that. Your (2) shows that if $$\lambda_2$$ is an eigenvalue of $$A$$ and $$x$$ is an eigenvector corresponding to $$\lambda_2$$, then it is an eigenvector of $$I-A$$ corresponding to $$1-\lambda_2$$. It would be better to state it that way. Your Method 1 is incorrect, since it assumes that $$I-A=0$$, which was not warranted. Method 2 is also incorrect, because again you assume that $$I-A=0$$, which is not warranted. Easy to prove facts: 1. If $$\lambda$$ is an eigenvalue of $$A$$, then $$k-\lambda$$ is an eigenvalue of $$kI-A$$, for any scalar $$k$$. 2. If $$\lambda$$ is an eigenvalue of $$B$$, then $$\lambda^k$$ is an eigenvalue of $$B^k$$. 3. $$C$$ is invertible if and only if $$\lambda=0$$ is not an eigenvalue of $$C$$. Using these facts: If $$\lambda$$ is an eigenvalue of $$A$$, then $$1-\lambda$$ is an eigenvalue of $$I-A$$ (what you prove in (2)), and $$(1-\lambda)^k$$ is an eigenvalue of $$(I-A)^k$$. Since $$(I-A)^k=0$$, the only eigenvalue of $$(I-A)^k$$ is $$0$$. Therefore... • I don't assume I-A=0. I assume (I-A)x=0 for some non zero x. You also say I also cannot do that but why? The determinant of I-A must be zero since $(I-A)^k=\prod_{i=1}^{k} (I-A)$. The determinant of the right is zero, then so must the determinant of every one of those products. Doesn't this imply a 0 eigenvalue? – Goob Aug 2, 2021 at 1:49 • @RabbitBoy: If you are assuming that $(I-A)x=0$, then you point (1) is... pointless. Of course the only $\lambda$ for which $\lambda x=0$ is $\lambda=0$. You are essentially saying "Assuming that $x$ is an eigenvector of $0$, then it is only an eigenvector of $0$ and no other scalar". That's true for any matrix, any eigenvector. What's the point, then? 
And your (2) would then assume that you can pick $x$ to also be an eigenvector of $A$, and in that case you are making another unwarranted assumption. Yes, $0$ is an eigenvalue, but you are not doing valid things with that fact. Aug 2, 2021 at 2:02 • If a matrix has a determinant zero, then the nullity of that matrix is not empty right?The determinant of (I-A) is 0. Does this not imply that there is a non zero vector x in the nullity of I-A such that (I-A)x=0? I don't understand why this is an assumption. Sorry. – Goob Aug 2, 2021 at 2:25 • @RabbitBoy: Again: you are correct that there must be an eigenvector of $0$ for $I-A$. Then, once you select an eigenvector of $0$ for $I-A$, what is it you are doing with $\lambda_1$? As far as I can tell, just showing that if $x$ is also an eigenvector corresponding to some possibly different eigenvalue $\lambda_1$, then $\lambda_1=0$. Why are you doing that? It's (i) irrelevant; and (ii) we always have for any matrix that if $x$ is an eigenvector of $\lambda$, then that's the only eigenvalue it can be an eigenvector for. (cont) Aug 2, 2021 at 2:59 • @RabbitBoy: then you take $\lambda_2$, and assume that $x$, in addition to being an eigenvector corresponding to $0$ for $I-A$, is also an eigenvector corresponding to $\lambda_2$ for $A$. That is an assumption: that you can pick a vector which is both an eigenvector corresponding to $0$ for $I-A$, and at the same time, an eigenvector for $A$ corresponding to some $\lambda_2$. You prove that if such a thing can be done, then $\lambda_2=1$. But you are assuming it can be done in the first place. Aug 2, 2021 at 3:00
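The geometric-series inverse from the accepted answers is easy to verify numerically; a sketch with numpy, using a hypothetical strictly upper-triangular B (so B^k = 0 automatically):

```python
import numpy as np

# hypothetical example: B = I - A is nilpotent with B^3 = 0
B = np.array([[0., 2., 5.],
              [0., 0., 3.],
              [0., 0., 0.]])
k = 3
I = np.eye(3)
A = I - B
assert np.allclose(np.linalg.matrix_power(I - A, k), 0)   # (I - A)^k = 0

# inverse via the geometric-series identity: A^{-1} = I + B + B^2 + ... + B^{k-1}
A_inv = sum(np.linalg.matrix_power(B, i) for i in range(k))
assert np.allclose(A @ A_inv, I) and np.allclose(A_inv @ A, I)
print(A_inv)
```

This is exactly the check $(I + B + \cdots + B^{k-1})(I-B) = I$ suggested above, and note that the left and right inverses agree, as promised in the comments.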
2022-10-05 14:51:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 67, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645600914955139, "perplexity": 147.20295432192606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00195.warc.gz"}
https://quant.stackexchange.com/questions/50732/what-is-variance-of-delta-of-brownian-motion
# What is Variance of delta of brownian motion [closed] I am new to this. If variance of Brownian motion b is t, what is the variance of db? db is delta of b Let $$(B_t)$$ be a standard Brownian motion. Then, $$B_t\sim N(0,t)$$ and $$B_t-B_s\sim N(0,t-s)$$. Informally, you can say $$\mathrm{d}B_t\sim N(0,\mathrm{d}t)$$ where $$\mathrm{d}B_t=B_{t+\mathrm{d}t}-B_t$$ is an infinitesimal increment. • @InfoLearner good question. A constant can be pulled out of the variance if you remember to square it, i.e. $\sigma \mathrm{d}B_t\sim N(0,\sigma^2\mathrm{d}t)$. – KeSchn Jan 19 at 10:46
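The informal statement $\mathrm{d}B_t\sim N(0,\mathrm{d}t)$ can be checked by simulation over a small but finite step; a quick sketch (step size and seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
n = 200_000
dB = rng.normal(0.0, np.sqrt(dt), size=n)   # increments B_{t+dt} - B_t ~ N(0, dt)
print(dB.var())                             # ≈ dt = 0.01

sigma = 2.0                                 # scaling: sigma*dB ~ N(0, sigma^2 * dt)
print((sigma * dB).var())                   # ≈ sigma^2 * dt = 0.04
```

The second print illustrates the comment about pulling a constant out of the variance: it comes out squared.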
2020-09-26 06:20:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429872632026672, "perplexity": 371.5586181097899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400234232.50/warc/CC-MAIN-20200926040104-20200926070104-00344.warc.gz"}
https://cloud.originlab.com/doc/LabTalk/ref/Derivative-cmd
3.3.2.13 Derivative

Takes the derivative of a dataset. Make a copy of the dataset before using this command if you do not want to lose the original data. The derivative is calculated by averaging the slopes to neighboring points. Range allows you to specify that the derivative should be performed on a subset of the dataset. This command is no longer recommended. Please use the differentiate X-Function instead.

Syntax: derivative [option] dataset [range]

Options:

no option; Take the derivative
Syntax: derivative dataset [range]
Take the derivative. See the Example.

-p; Locate peaks in a dataset
Syntax: derivative -p dataset [range]
Locate peaks in a dataset. If the slope value between neighboring data points is zero, or if the two points have opposite signs, the resulting derivative is considered to be zero.

range bb=[Book1]Sheet1!B; //Calculate the derivative of the data range,
//the peak points will be equal to Zero.
der -p bb -b 10 -e 20;

Examples:

The following script takes the derivative of col(B) within the range of row index 2 to row index 10.

range aa = [Book1]Sheet1!col(B);
der aa -b 2 -e 10;
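The "averaging the slopes to neighboring points" rule described above can be sketched in Python; this is an illustrative reimplementation of the stated idea, not Origin's actual code (the one-sided treatment of the endpoints is my assumption):

```python
import numpy as np

def slope_average_derivative(x, y):
    """Derivative estimate that averages the slopes to neighboring points;
    endpoints keep their single one-sided slope (an assumption here)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = np.diff(y) / np.diff(x)             # slope between consecutive points
    d = np.empty_like(y)
    d[0], d[-1] = slopes[0], slopes[-1]          # one-sided at the ends
    d[1:-1] = 0.5 * (slopes[:-1] + slopes[1:])   # average left/right slopes
    return d

x = np.linspace(0.0, 1.0, 11)
print(slope_average_derivative(x, x**2))  # interior values reproduce 2x exactly
```

On a uniform grid the averaged slopes of $x^2$ give exactly $2x$ at interior points, which makes the behavior easy to verify.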
2022-05-22 05:25:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6320083737373352, "perplexity": 2499.1824226500557}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00603.warc.gz"}
http://mathhelpforum.com/advanced-statistics/79440-distribution-probabilty-parent-sample.html
Math Help - Distribution and probability - parent and sample

1. Distribution and probability - parent and sample

Ok I am completely lost, can anyone help me out? I have done the first question, and the second one is based on it. The first question said: The parent distribution of x is normal with mean 80 and standard deviation 9. Based on samples of size n=25, find the mean and standard error of the sampling distribution of $\bar{x}$. I said: $\mu_{\bar{x}} = 80$ and $\sigma_{\bar{x}} = 9/5$. The second question is where I am lost. It says find: A. P(62 < x < 80) B. P(71 < $\bar{x}$ < 77) C. x' such that P( $\bar{x}$ > x') = .05

2. Is n=25 or 225? Because $\sigma_{\bar{x}}={\sigma \over\sqrt{n}}$. Or, did you mean $\sigma_{\bar{x}}={9\over 5}$

3. n=25, it is the number in the sample. $\sigma_{\bar{x}} = 9/5$ (that's what I got), sorry

4. $P(62<X<80)=P\left({62-80\over 9}<Z<{80-80\over 9}\right)$. $P(71<\bar X<77)=P\left({71-80\over 9/5}<Z<{77-80\over 9/5}\right)$. Can you finish these? I'll check on you later. $.05=P(\bar X<a)$, so ${a-80\over 9/5}=-1.645$.

5. P[(-5) < z < (-5/3)]. It was supposed to be P( $\bar{x}$ > x') SORRY. $P\left(\bar{x} > {x'-80\over 9/5}\right)$? Can you tell me what formula you used to find that last one? Did you use these for the first two? $Z={X-\mu\over \sigma}$ and $Z={\bar{x}-\mu\over \sigma / \sqrt n}$

6. Let X be a random variable with mean $\mu$ and standard deviation $\sigma$. Then $Z={X-\mu\over \sigma}$ has mean zero and standard deviation 1. And if X was a normal random variable, then Z is a standard normal random variable and you can use your Z table. We did that for both X and $\bar X$.

7. Thank you SO much for your help. It helped me a lot on my test. Thanks a hundred times. You are a really nice person
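The three probabilities asked in the thread can be computed directly with Python's standard library, which sidesteps the Z table (a sketch, not part of the original thread):

```python
from statistics import NormalDist

X = NormalDist(mu=80, sigma=9)           # parent distribution of x
Xbar = NormalDist(mu=80, sigma=9 / 5)    # sampling distribution of xbar, n = 25

p_a = X.cdf(80) - X.cdf(62)              # A. P(62 < x < 80)
p_b = Xbar.cdf(77) - Xbar.cdf(71)        # B. P(71 < xbar < 77)
x_crit = Xbar.inv_cdf(0.95)              # C. x' with P(xbar > x') = .05

print(round(p_a, 4), round(p_b, 4), round(x_crit, 2))  # → 0.4772 0.0478 82.96
```

These agree with the hand standardizations in the thread: $z$ bounds of $(-2, 0)$, $(-5, -5/3)$, and $x' = 80 + 1.645\cdot(9/5)$.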
2015-10-09 12:52:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7698924541473389, "perplexity": 1249.6884553310751}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737929054.69/warc/CC-MAIN-20151001221849-00159-ip-10-137-6-227.ec2.internal.warc.gz"}
https://grindskills.com/category/dimensional-regularization/
## Determination of auxiliary scale in dimensional regularization

My questions are in italics. In the article [1] a dimensional regularization is presented on an electrostatic example of an infinite wire with constant linear charge density $\lambda$. It is shown that the direct computation of the scalar potential gives infinity:
$$\phi(x)=\frac{\lambda}{4\pi\epsilon_0}\int_{-\infty}^{\infty}\frac{dl}{|x-l|}=\frac{\lambda}{4\pi\epsilon_0}\int_{-\infty}^{\infty}\frac{dl}{(x^2+y^2+(z-l)^2)^{1/2}}=\frac{\lambda}{4\pi\epsilon_0}\int_{-\infty}^{\infty}\frac{du}{\sqrt{x^2+y^2+u^2}}=\infty$$
But with dimensional regularization in the modified minimal subtraction scheme we get … Read more

## Showing $I=\int d^3k\int dk^0\,\frac{1}{k^4}$ to be logarithmically divergent

Consider a momentum integral of the form $I=\int d^3k\int dk^0\,\frac{1}{k^4}$ where $k^2=(k^0)^2-(\vec{k})^2$ and the integral over $k^0$ runs from $-\infty$ to $+\infty$. This integral is common in QFT and I want to show that this integral is logarithmically divergent. The usual trick is to define $k^0=ik^4$ so that the integral becomes $I=\int d^3k\int dk^4\,\frac{1}{((k^4)^2+(\vec{k})^2)^2}$. Now I can go to … Read more

## Question about infinite sum in quantum field

I read from some books of number theory that $\sum_{n=1}^{\infty}\frac{1}{n^s}=-\frac{1}{12}$ when $s=-1$. Now is there such a result $\sum_{n=1}^{\infty}\frac{1}{n^s}=\pi$ when $s=1$, or $\sum_{n=1}^{\infty}\frac{1}{n^s}=c\pi$ when $s=1$, where $c$ is a rational number? I get a similar result in mathematics by analogue; I suspect the result may have some interpretation in physics.

Answer: The true fact is the following. Consider $\zeta(s):=\sum_{n=1}^{+\infty}\frac{1}{n^s}$ with $s\in\mathbb{C}$ and $\operatorname{Re} s>1$. That function, with the said … Read more

## Dirac delta function defined in Zee's Quantum Field Theory book

This is from Appendix 1 of the first chapter of Zee's Quantum Field Theory in a Nutshell: I am not sure whether it is correct to call this the Dirac delta function. Sure, the integral over all space is 1, and it is sharply peaked at $x=0$. But its width doesn't approach 0 when $K\to\infty$. … Read more

## Cutoff regularization: Why not cutoff exactly at the momentum reached in an experiment?

So far I have only actually calculated dimensional regularization and I just know about the idea of cutoff regularization.
From what I understand, as the name suggests, you just ignore momenta above some high value and integrate the virtual momenta up to this arbitrarily chosen value. Depending on which value you choose, and thereby to … Read more

## Length path integral

Let's consider a 2-dimensional Euclidean plane. The length between two points $a$ and $b$ can be defined in the following way: $(ab):=\inf$ where the infimum is taken over all paths $\gamma$ joining $a$ and $b$, $\delta_{ab}$ is the Euclidean metric on the plane, and the parametrization is chosen so that $\gamma(\tau=0)=a \quad \& \quad \gamma(\tau=1)=b$. … Read more

## Inconsistency in regularization with parallel and perpendicular momenta

In deriving the axial anomaly Peskin and Schroeder use dimensional regularization, continuing loop momenta to $4-\epsilon$ dimensions. The loop momenta can now be split into pieces "parallel" to $d=0,1,2,3$ and those "perpendicular" to $d=0,1,2,3$: $\ell=\ell_\parallel+\ell_\perp$. Furthermore, they define $\gamma^5$ as $\gamma^5\equiv i\gamma^0\gamma^1\gamma^2\gamma^3$; with this definition $\gamma^5$ commutes with $\gamma^\mu$ in the extra dimensions and so, … Read more

## Is Wick rotation of loop integrals legitimate?

In Feynman diagram calculations, we seem to invariably Euclideanise loop integrals in order to exploit the resulting spherical symmetry. This Wick rotation is simply a deformation of the contour; providing we avoid all poles and providing our integrand falls off at infinity sufficiently fast, this is legitimate. See, for instance, Figure 6.1 of Peskin & … Read more

## How can Weinberg assume that $P_b$ acts as derivative?

In QM of finitely many degrees of freedom it is well known that due to the Stone-von Neumann theorem, the CCR $[Q_i,P_j]=i\delta_{ij}$ leads to a unique representation up to unitary equivalence, on which $P_j$ acts as the derivative $P_j\mapsto -i\partial_j$. Now, in Weinberg's QFT book volume 1, chapter 9, he considers a general quantum mechanical system … Read more

## Dirac matrices in dimensional regularization, get correct order epsilon

Let us work in dimension $D=4-2\epsilon$.
In 4 dimensions, we can write $\mathrm{Tr}[AB]$, where $A$ and $B$ are strings of gamma matrices, as $\sum_m \mathrm{Tr}[A\,\Gamma^m]\,\mathrm{Tr}[B\,\Gamma_m]$, where $\Gamma_m=\{1,\gamma^5,\gamma^\mu,\gamma^5\gamma^\mu,\sigma^{\mu\nu}\}$ is the complete set of gamma matrices spanning the Dirac space in 4 dimensions. As is well known, generalizing this to non-integer $D$ dimensions causes difficulties since $\gamma^5$ (defined as $\gamma^5=i\gamma^0\gamma^1\gamma^2\gamma^3$ in … Read more
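One of the excerpts above asks to show that $I=\int d^3k\int dk^0\,1/k^4$ is logarithmically divergent. After the Wick rotation $k^0=ik^4$ mentioned there, the integrand is spherically symmetric in four Euclidean dimensions, and the integral over the shell $\epsilon<|k|<\Lambda$ equals $2\pi^2\ln(\Lambda/\epsilon)$ (the factor $2\pi^2$ is the area of the unit 3-sphere). The following Python sketch (my own, not from any of the quoted posts) checks this numerically:

```python
import math

def euclidean_integral(eps, lam, n=200_000):
    """∫ d^4k / k^4 over the Euclidean shell eps < |k| < lam.
    Angular part: area of the unit 3-sphere, 2*pi^2.
    Radial part: ∫ k^3 dk / k^4 = ∫ dk / k, done by the midpoint rule."""
    h = (lam - eps) / n
    radial = sum(h / (eps + (i + 0.5) * h) for i in range(n))
    return 2 * math.pi ** 2 * radial

I10 = euclidean_integral(1.0, 10.0)    # ≈ 2*pi^2 * ln(10)
I100 = euclidean_integral(1.0, 100.0)  # ≈ 2*pi^2 * ln(100) = 2 * I10
print(I100 / I10)  # ≈ 2
```

Squaring the upper cutoff merely doubles the result: the signature of a divergence that is logarithmic both in the UV ($\Lambda\to\infty$) and in the IR ($\epsilon\to 0$).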
https://kx.lumerical.com/t/question-about-the-ffp-spatial-filtering-example/41181
# Question about the FFP - Spatial filtering example

I am learning the example in https://kb.lumerical.com/solvers_far_field_projections_far_field_filtering.html. In the script solver_far_field_filter.lsf, I have some questions.

1. In lines 25 & 32: set("angle",0); set("angle",90); Why can these two dipole phi angles represent the p and s polarizations? When setting the angle to 90, the main electric field is still not Ez. We know that in this situation, the Ez can be regarded as s polarization.

2. The script uses
R_s = ( (n1cos(theta1) - n2cos(theta2))/(n1cos(theta1) - n2cos(theta2)) )^2;
R_p = ( (n1^2n2cos(theta2) - n2^2n1cos(theta1))/(n1^2n2cos(theta2) + n2^2n1cos(theta1)) )^2;
I checked some theory books; the R_s should be
R_s = ( (n1cos(theta1) - n2cos(theta2))/(n1cos(theta1) + n2cos(theta2)) )^2;
and R_p should be
R_p = ( (n2cos(theta1) - n1cos(theta2))/(n2cos(theta1) + n1cos(theta2)) )^2;
So why is it not consistent with the theory books? Thanks if anyone could help me.

3. In line 61: theta1 = n2*sin(theta2)/n1; I think the left-hand side should be sin(theta1).

Hello @makeqi,

1. These angles do not represent S, P polarization; they represent the orientation of two dipoles. Dipoles radiate at many polarizations simultaneously, and so can be used to efficiently find the T and R for a complete set of incidence angles. They have very little directivity, however, which is why the emission pattern is run through a spatial filter in this example.
2. R_s is incorrect, and you have given the appropriate correction. The R_p values which you quote are equivalent, since you can cancel a factor of n1*n2 in all terms.
3. Again you are correct; this is a typo. The correct form should be: theta1 = asin(n2*sin(theta2)/n1);

Thanks for posting these questions. I'll log this and look at fixing this example in the near future.

Regards, @trobertson

1. I have got that the orientation of the two dipoles does not represent S, P polarization.
But I have checked that when the angle is set to 90 degrees, the electric field generated by this dipole is Ex, Ey, and the Ez component is 0. But in this situation, if we want to get the S polarization far field (since the script wants to get the far field of S and P polarization and add them together; see line 42: # collect data from both polarizations in the far field), we should have an Ez component. But Ez is 0 in this situation.

2 & 3: If I am correct, there comes a strange question: why is the simulation result consistent with the result obtained from the theory using the script, since we now know that the formula used in the script to calculate the result is not correct?

Regards

Hello @makeqi,

Both Ex and Ez correspond to S polarization components, i.e. normal to the plane of incidence for ky-propagating light. I am not sure what monitor you are looking at, but if you visualize the field data of the above monitor at the plane of incidence you will see Ex and Ey components. There will not be any Ez components since this is out of the plane of the dipole. That being said, Ex and Ez are equivalent for this geometry, so you could look at simulations in the yz plane if you like. The far field monitors need to collect all of the field information to perform the required projection. Knowing that Ez is exactly zero is an important piece of information for recreating the far field radiation pattern. If you look at what the script calculates, it plots the Rp value, which was correct, and it uses the incorrect form for theta1. I made the correction in the script and plotted the correct and the previously used incorrect values below. As you can see there is very little variation between the two values, which is likely why this typo was not caught before.

Best Regards,
> Both Ex and Ez correspond to S polarization components, i.e. normal to the plane of incidence for ky-propagating light.

The plane of incidence is a plane, and S polarization means that the electric field is perpendicular to the plane of incidence. So I think only one electric field component can be the S polarization. In the situation of the example on KB, the plane of incidence is XY, so I think only the Ez component can be regarded as S polarization. In the picture I uploaded above, I use a 2D Z-normal monitor. So I still don't think that the simulation can get the reflectivity of S polarization. Am I right?

> There will not be any Ez components since this is out of the plane of the dipole.

Another question is about the plane of the dipole. I am still not clear about which electric field components can be generated by a dipole. I made a test like this:

In the first situation, the dipole is along the y-axis (phase = 0, theta = 90, phi = 90). I use a 2D Z-normal monitor to get the electric field. The Ex and Ey components are not zero; Ez is zero.

In the second situation, the dipole is along the x-axis (phase = 0, theta = 90, phi = 0). I use a 2D Z-normal monitor to get the electric field. I get that the Ex and Ey components are not zero; Ez is zero.

In the third situation, the dipole is along the z-axis (phase = 0, theta = 0, phi = 0). I use a 2D Z-normal monitor to get the electric field. I get that the Ex and Ey components are zero; Ez is not zero.

I don't quite understand why it is like this. In each situation, what's the plane of the dipole? In the first and second situations, two components are not zero, but there is only one non-zero component, Ez, in the last situation. What is the reason? Could you give me a detailed explanation of the electric field generated by the dipole source? I have read the information about the dipole source on KB but I still cannot get the answer by myself. Thanks a lot.

PS, one small question: why is it equivalent to use set("phi",45) and set("angle",45) when setting the dipole?

@trobertson Could you help me to solve the questions above?
Hello @makeqi,

S polarized light is in the plane of the interface between the dielectric and air. The Ey component is P polarized; you can choose any plane that intersects the interface and it is equivalent. It is often convenient to choose the one where the polarization is purely Ez or Ex, but it really does not matter. I would ask that you refer to the Wikipedia page on dipole antennas for information on the radiation pattern and electric field generated by a dipole.

Regards, @trobertson

Thank you for your suggestion about the dipole antennas. The second question has been solved. But for the first question, this is a 2D simulation. The injection plane is XY. So I think only Ez can be considered as S polarized light. Right? In the simulation, it runs two simulations and both of them get the far field of P polarized light. Am I right?

Hello @makeqi,

In the situation you describe you are correct. The confusion here, I believe, is that you referenced many simulations and the incidence plane was not always XY. The plane of incidence depends on the E field component you are considering and therefore the orientation of the dipole. By changing theta of the source you change the plane of incidence, but any change in theta is equivalent; you are just changing the plane of incidence for this simple example. In general (3D, specular interface), the incidence-plane basis is easy to construct since you know the k vector is drawn from the source to the point of interest; furthermore, it must contain the surface normal of the interface. You can then decompose the radiation into components in plane (P or TM) and those normal to it (S or TE). I hope this helps.

Regards,

Hi @trobertson

I think in this simulation, the injection plane is always XY. I have an opinion as follows: for a 2D simulation in the XY plane, we cannot get any information about light propagating in the Z direction, because it is uniform in the Z direction.
The plane of incidence depends on the propagation direction of the light and the surface normal of the interface. In this 2D situation, the propagation direction is always in the XY plane, and the surface normal is in the Y direction. So the plane of incidence is always XY. By changing the theta of the source, we only change the electric field components. If I set theta = 0, the electric field component on the interface is Ez only; we only get the information of S polarization. If I set theta = 90, the electric field components on the interface are Ex and Ey; we only get the information of P polarization. If I set theta = 45, the electric field components on the interface are Ex, Ey, and Ez; we get the information of both S and P polarization. Are all my statements right?

@trobertson Could you have a look at my question, please? Regards.
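The corrected Fresnel formulas discussed earlier in the thread are easy to sanity-check numerically. This sketch (my own; not the Lumerical script) verifies the normal-incidence value, the vanishing of R_p at Brewster's angle, and the answer's point that the script's R_p form is equivalent after cancelling a factor of n1*n2:

```python
import math

def fresnel_R(n1, n2, theta1):
    """Power reflectances (Rs, Rp) at a planar interface, using the corrected
    formulas from the thread; theta2 follows from Snell: n1 sin(t1) = n2 sin(t2)."""
    theta2 = math.asin(n1 * math.sin(theta1) / n2)
    Rs = ((n1 * math.cos(theta1) - n2 * math.cos(theta2)) /
          (n1 * math.cos(theta1) + n2 * math.cos(theta2))) ** 2
    Rp = ((n2 * math.cos(theta1) - n1 * math.cos(theta2)) /
          (n2 * math.cos(theta1) + n1 * math.cos(theta2))) ** 2
    return Rs, Rp

def Rp_script(n1, n2, theta1):
    """The script's R_p form; equivalent because n1*n2 cancels in every term."""
    theta2 = math.asin(n1 * math.sin(theta1) / n2)
    return ((n1**2 * n2 * math.cos(theta2) - n2**2 * n1 * math.cos(theta1)) /
            (n1**2 * n2 * math.cos(theta2) + n2**2 * n1 * math.cos(theta1))) ** 2

# Normal incidence, n = 1 -> 1.5: both polarizations give ((n1-n2)/(n1+n2))^2 = 0.04
Rs0, Rp0 = fresnel_R(1.0, 1.5, 0.0)

# Brewster's angle atan(n2/n1): Rp vanishes
Rs_b, Rp_b = fresnel_R(1.0, 1.5, math.atan(1.5))

print(Rs0, Rp0, Rp_b)
```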
http://mathoverflow.net/revisions/42219/list
This is a question my son Bob asked me. For some sets it is relatively easy to test for membership but a lot more difficult to find members, and for others the reverse is true. Here is an elementary example to get the idea across. An $m \times n$ real matrix $M$ defines a linear map $x \mapsto M x = y$, from ${\mathbb R}^n$ to ${\mathbb R}^m$. It is easy to test if $x$ is in the kernel; just compute $M x$ and see if it is zero, but to find an $x$ in the kernel you must solve $M x = 0$ which is more computationally intensive. Conversely it is easy to find an element in the range; just choose any $x$ and compute $M x$; but to test if $y$ is in the range you must solve $M x = y$. Does anyone know if there is a standard name for this distinction or for sets of these two types?
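The kernel half of this example can be made concrete in a few lines (a sketch of my own; the matrix is arbitrary). Testing membership is a single matrix-vector product, while producing a member requires solving a linear system; for a full-rank 2×3 matrix, the cross product of the two rows happens to do that work:

```python
# A concrete 2x3 matrix M defines x -> Mx from R^3 to R^2 (integer entries for exactness).
M = [[1, 2, 3],
     [4, 5, 6]]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# Membership in the kernel is cheap to *test*: one matrix-vector product.
def in_kernel(M, x):
    return all(v == 0 for v in matvec(M, x))

# *Finding* a kernel element requires solving M x = 0; for a full-rank 2x3
# matrix, the cross product of the two rows is such a solution.
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

x = cross(M[0], M[1])           # [-3, 6, -3]
print(in_kernel(M, x))          # True
print(in_kernel(M, [1, 0, 0]))  # False
```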
http://www.mathworks.com/help/control/getstart/bode-diagram-design.html?nocookie=true
# Documentation

## Bode Diagram Design

### What Is Bode Diagram Design?

One technique for compensator design is to work with Bode diagrams of the open-loop response (loop shaping). Using Bode diagrams, you can

• Design to gain and phase margin specifications
• Add notch filters for disturbance rejection

### Bode Diagram Design for DC Motor

The following topics use the DC motor example to show how to create a compensator using Bode diagram design techniques. From SISO Example: The DC Motor, the transfer function of the DC motor is

```
Transfer function:
        1.5
------------------
s^2 + 14 s + 40.02
```

For this example, the design criteria are as follows:

• Rise time of less than 0.5 second
• Steady-state error of less than 5%
• Overshoot of less than 10%
• Gain margin greater than 20 dB
• Phase margin greater than 40 degrees

The Linear System Analyzer, showing the step response for the DC motor, shows that the closed-loop step response is too slow. The simplest approach to speeding up the response is to increase the gain of the compensator. To increase the gain:

1. Click the Compensator Editor tab to open the Compensator Editor page.
2. Select C from the compensator selection list.
3. In the text box to the right of the equal sign in the Compensator area, enter `38` and press Enter.

Adjusting Compensator Gain on the Compensator Editor Page

The SISO Design Tool calculates the compensator gain, and the Bode and root locus graphs in the Graphical Tuning window are updated. Alternatively, you can set the gain in the Graphical Tuning window by grabbing the Bode magnitude line and dragging it upward. The gain and poles change as the closed-loop set point is recomputed, and the new compensator value is updated in the Compensator Editor page. Because the design requirements include a 0.5-second rise time, try setting the gain so that the DC crossover frequency is about 3 rad/s.
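As a side check (not part of the MATLAB tool; plain Python with my own function names), you can evaluate $|G(j\omega)|$ for the DC motor plant directly. The gain that puts the 0 dB crossover exactly at 3 rad/s comes out near 35, and with the gain of 38 the crossover lands slightly above 3 rad/s, consistent with the text's "about":

```python
# DC motor plant G(s) = 1.5 / (s^2 + 14 s + 40.02)
def G_mag(w):
    s = complex(0.0, w)
    return abs(1.5 / (s * s + 14 * s + 40.02))

# Gain that puts the open-loop 0 dB crossover exactly at w = 3 rad/s
K_for_3 = 1.0 / G_mag(3.0)  # ≈ 34.8

# 0 dB crossover frequency for gain K, by bisection on |K G(jw)| = 1.
# |G(jw)| is monotonically decreasing here: both plant poles are real.
def crossover(K, lo=0.1, hi=100.0):
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if K * G_mag(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(K_for_3, crossover(38))
```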
The rationale for setting the bandwidth to 3 rad/s is that, to a first-order approximation, this should correspond to about a 0.33-second time constant. To make the crossover easier to see, select Grid from the right-click menu. This creates a grid for the Bode magnitude plot. Left-click the Bode magnitude plot and drag the curve until you see the curve crossing over the 0 dB line (on the y axis) at 3 rad/s. This changes both the SISO Design Tool display and the Linear System Analyzer step response. For a crossover at 3 rad/s, the compensator gain should be about 38. By default, the Graphical Tuning window shows gain and phase margin information in the lower-left corners of the Bode diagrams. In the Bode magnitude plot, it also tells you if your closed-loop system is stable or unstable. This figure shows the Graphical Tuning window.

Adjusting Bandwidth in the Graphical Tuning Window

This plot shows the associated closed-loop step response in the Linear System Analyzer.

Closed-Loop Step Response for the DC Motor with a Compensator Gain = 38

The step response shows that the steady-state error and rise time have improved somewhat, but you must design a more sophisticated controller to meet all the design specifications, in particular the steady-state error requirement.

1. Click the Compensator Editor tab to open the Compensator Editor page.
2. Right-click anywhere in the Dynamics table for the right-click menu, and then select Add Pole/Zero > Integrator. The following figures show this process.

Adding an Integrator in the Dynamics Table

Editable Integrator Parameters

Notice that adding the integrator changed the crossover frequency of the system. Readjust the compensator gain in the Compensator Editor page to bring the crossover back to 3 rad/s; the gain should be `99`. After you have added the integrator and readjusted the compensator gain, the Graphical Tuning window shows a red `x` at the origin of the root locus plot.
Integrator on the Root Locus Plot

The following figure shows the closed-loop step response.

Step Response for the DC Motor with an Integrator in the Compensator

Use the right-click menu to show the peak response and rise time (listed under Characteristics). The step response is settling around 1, which satisfies the steady-state error requirement. This is because the integrator forces the system to zero steady-state error. The figure shows, however, that the peak response is 1.3, or about 30% overshoot, and that the rise time is roughly 0.4 second. So a compensator consisting of an integrator and a gain is not enough to satisfy the design requirements, which require that the overshoot be less than 10%.

Part of the design requirements is a gain margin of 20 dB or greater and a phase margin of 40° or more. In the current compensator design, the gain margin is 11.5 dB and the phase margin is 38.1°, both of which fail to meet the design requirements. The rise time needs to be shortened while improving the stability margins. One approach is to increase the gain to speed up the response, but the system is already underdamped, and increasing the gain will decrease the stability margin as well. You might try experimenting with the compensator gain to verify this. The only option left is to add dynamics to the compensator.

1. Click the Compensator Editor tab to open the Compensator Editor page.
2. In the Dynamics table, right-click and then select Add Pole/Zero > Lead.

Adding a Lead Network to the DC Motor Compensator on the Compensator Editor Page

Editable fields are shown in the Edit Selected Dynamics group box (right side of page) when an item in the Dynamics table has been selected, as shown in the following figure. For this example, change Real Zero to `-7.38` and change Real Pole to `-11.1`. You can also add a lead network using the Graphical Tuning window.
Right-click in the Bode graph, select Add Pole/Zero > Lead, place the `x` on the plot where you want to add the lead network, and then left-click to place it. The Compensator Editor page is updated to include the new lead network in the Dynamics table. Your Graphical Tuning window and Linear System Analyzer plots should now look similar to these.

Root Locus, Bode, and Step Response Plots for the DC Motor with a Lead Network

The Step Response plot shows that the rise time is now about 0.4 second and the peak response is 1.24 rad/s (i.e., the overshoot is about 25%). Although the rise time meets the requirement, the overshoot is still too large, and the stability margins are still unacceptable, so you must tune the lead parameters.

### Moving Compensator Poles and Zeros

To improve the response speed, edit the selected dynamics for the lead network in the Edit Selected Dynamics group box on the Compensator Editor page.

1. Change the value of the lead network zero (Real Zero) to move it closer to the left-most (slowest) pole of the DC motor plant (denoted by a blue `x`).
2. Change the value of the lead network pole (Real Pole) to move it to the right. Notice how the gain margin increases (as shown in the Graphical Tuning window) as you do this.

As you tune these parameters, look at the Linear System Analyzer. You will see the closed-loop step response alter with each parameter change you make. The following figure shows the final values for a design that meets the specifications.

Graphical Tuning Window with Final Design Parameters for the DC Motor Compensator

The values for this final design are as follows:

• Poles at 0 and -28
• Zero at -4.3
• Gain = 84

Enter these values directly in the Edit Selected Dynamics group box on the Compensator Editor page, shown as follows (Integrator is already set to 0).

Entering Final Design Parameters on the Compensator Editor Page

The following figure shows the step response for the final compensator design.
Step Response for the Final Compensator Design

In the Linear System Analyzer's right-click menu, select Characteristics > Peak Response and Characteristics > Rise Time to show the peak response and rise time, respectively. Hover the mouse over the blue dots to show the data markers. The step response shows that the rise time is 0.45 second and the peak amplitude is 1.03 rad/s, or an overshoot of 3%. These results meet the design specifications.

If you know that you have disturbances to your system at a particular frequency, you can use a notch filter to attenuate the gain of the system at that frequency. To add a notch filter, click the Compensator Editor tab to open the Compensator Editor page. Right-click in the Dynamics table and select Add Pole/Zero > Notch, as shown next. Default values for the filter are supplied, as shown next.

Notch Filter Default Values

The following figure shows the result in the Graphical Tuning window.

Notch Filter Added to the DC Motor Compensator

To see the notch filter parameters in more detail, click the Zoom In icon on the Graphical Tuning window. In the Open-Loop Bode Editor, press the left mouse button and drag your mouse to draw a box around the notch filter. When you release the mouse, the Graphical Tuning window will zoom in on the selected region.

To understand how adjusting the notch filter parameters affects the filter, consider the notch filter transfer function:

$\frac{s^2+2\xi_1\omega_n s+\omega_n^2}{s^2+2\xi_2\omega_n s+\omega_n^2}$

The three adjustable parameters are $\xi_1$, $\xi_2$, and $\omega_n$. The ratio $\xi_2/\xi_1$ sets the depth of the notch, and $\omega_n$ is the natural frequency of the notch. This diagram shows how moving the red ⊗ and black diamonds changes these parameters, and hence the transfer function of the notch filter.

A Close Look at Notch Filter Parameters

In the Dynamics table on the Compensator Editor page, select the row containing the newly added notch filter.
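The depth claim for the notch transfer function can be checked by evaluating its magnitude directly: at $\omega=\omega_n$ the $s^2$ and $\omega_n^2$ terms cancel, leaving $|H(j\omega_n)|=\xi_1/\xi_2$, so the notch depth in dB is $20\log_{10}(\xi_1/\xi_2)$. A small Python sketch (the parameter values are illustrative, not taken from the example):

```python
# Notch filter H(s) = (s^2 + 2*z1*wn*s + wn^2) / (s^2 + 2*z2*wn*s + wn^2)
def notch_mag(w, wn=50.0, z1=0.05, z2=0.5):
    s = complex(0.0, w)
    num = s * s + 2 * z1 * wn * s + wn ** 2
    den = s * s + 2 * z2 * wn * s + wn ** 2
    return abs(num / den)

depth = notch_mag(50.0)   # at w = wn: exactly z1/z2 = 0.1, i.e. a -20 dB notch
flat = notch_mag(5000.0)  # far above wn the filter is transparent, ≈ 1
print(depth, flat)
```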
The editable fields appear in the Edit Selected Dynamics group box, as shown next.

Editing Notch Filter Parameters

### Modifying a Prefilter

You can use the SISO Design Tool to modify the prefilter in your design. Typical prefilter applications include:

• Achieving (near) feedforward tracking to reduce load on the feedback loop (when stability margins are poor)
• Filtering out high frequency content in the command (reference) signal to limit overshoot or to avoid exciting resonant modes of the plant

A common prefilter is a simple lowpass filter that reduces noise in the input signal. Open the Bode diagram for the prefilter by opening the right-click menu in the Closed-Loop Bode Editor in the Graphical Tuning window, and then selecting Select Compensators > F(F).

Selecting the Prefilter in the Graphical Tuning Window

For clarity, the previous figure does not show the open-loop Bode diagram for the compensator (C). To remove the Bode diagram from the Graphical Tuning window, go to the SISO Design Task node in the Control and Estimation Tools Manager, click the Graphical Tuning tab, and for Plot 2, Open Loop 1, select Plot type `None`.

Prefilter Bode Diagram

If you haven't imported a prefilter, the default is a unity gain. You can add poles and zeros and adjust the gain using the same methods as you did when designing the compensator (C) on the Compensator Editor page. A quick way to create a lowpass roll-off filter is to add a pair of complex poles. To do this, first click the Compensator Editor tab and change the compensator to `F`. Right-click in the Dynamics table and select Add Pole/Zero > Complex Pole. Select this line to show the editable parameters in the Edit Selected Dynamics group box. For this example, try to place the poles at about 50 rad/s. The following figure shows the poles added to the prefilter Bode diagram.
Adding a Complex Pair of Poles to the Prefilter Bode Diagram

By default, the damping ratio of the complex pair is 1.0, which means that there are two real-valued poles at about -50 rad/s. The green curve, which represents the prefilter Bode response, shows that the -3 dB point for the roll-off is at about 50 rad/s. The magenta curve, which represents the closed-loop response from the prefilter to the plant output, shows that after the -3 dB point, the closed-loop gain rolls off at -40 dB/decade to provide some noise disturbance rejection.

#### Importing a Prefilter

As an alternative approach, you can design a prefilter using Control System Toolbox™ commands like `ss` or `tf` and then import the design directly into the prefilter. For example, to create the lowpass filter using `zpk`, try

```
prefilt = zpk([], [-35 + 35i, -35 - 35i], 1)
```

and import `prefilt` by clicking System Data on the Architecture page. This opens the System Data dialog box. Click Browse to open the Model Import dialog box, as shown next.

Importing a Prefilter

Select `prefilt` from the Available Models list and click Import to import the prefilter model. Click Close to close the Import Model dialog box. After you have imported the prefilter model, you can modify it using the same methods as described in this chapter for compensator design.
http://mymathforum.com/applied-math/34473-order-relation.html
My Math Forum: Order relation (Applied Math Forum, March 7th, 2013)

I have the following relation: $R= \{(x,y) \in \mathbb{R} \times \mathbb{R} \mid |x| < |y| \vee x= y\}$. Show that R is a partial order. Is it a total order?

My solution: It is not a total order, since for every $x \neq 0$ we have $\neg(|-x| < |x|)$ and $-x \neq x$, so $-x$ and $x$ are incomparable.

A relation is a partial order iff it is reflexive, antisymmetric and transitive.

Reflexivity: $(|x| < |x|) \vee x = x$. The second disjunct always holds, hence R is reflexive.

Antisymmetry: $(|x| < |y| \vee x = y) \wedge (|y| < |x| \vee y = x) \Leftrightarrow x = y \vee (|x| < |y| \wedge |y| < |x|) \Leftrightarrow x = y$, since $|x| < |y|$ and $|y| < |x|$ cannot both hold. That means R is antisymmetric.

Transitivity: Assume $(|x| < |y| \vee x = y) \wedge (|y| < |z| \vee y = z)$.

Case 1, $x = y$: then $|y| < |z| \vee y = z$ gives $|x| < |z| \vee x = z$.

Case 2, $x \neq y$: then $|x| < |y|$, and combining this with $|y| < |z| \vee y = z$ gives $|x| < |z|$ in either subcase, so $|x| < |z| \vee x = z$.

Thus R is transitive. Since R is reflexive, antisymmetric and transitive, it is a partial order. However, I'm very unsure about the transitivity part. Thanks for help.
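A brute-force check over a finite sample of reals agrees with the argument above (a sketch of my own; a finite sample can refute but not prove the universal claims):

```python
from itertools import product

# The relation from the thread: x R y  iff  |x| < |y| or x == y
def R(x, y):
    return abs(x) < abs(y) or x == y

S = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]  # a finite sample of reals

reflexive = all(R(x, x) for x in S)
antisymmetric = all(not (R(x, y) and R(y, x)) or x == y
                    for x, y in product(S, S))
transitive = all(not (R(x, y) and R(y, z)) or R(x, z)
                 for x, y, z in product(S, S, S))
# Not total: -1 and 1 are incomparable, since |-1| = |1| but -1 != 1
total = all(R(x, y) or R(y, x) for x, y in product(S, S))

print(reflexive, antisymmetric, transitive, total)  # True True True False
```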
2018-10-17 01:16:00
https://herebeseaswines.net/essays/2020-04-13-the-notion-of-progress-and-user-interfaces
# The Notion of Progress in User Interfaces

#### 2020-04-13

One trait of modern Western culture is the notion of progress: a view claiming, at large, that everything is getting better and better. How should we think about progress, both in general and regarding technology?

In some areas the claim of progress is arguably true. We acknowledge that more people than ever in history have food and water and access to education, basic medicine, and health care. We have good reasons to be worried about the implications of technology, social media, and a sedentary way of life, but we have just as good reasons to see how much good technology has done and is doing. Especially when thinking about the Web, we should get better at thinking of its state in dialectical terms, as a balance where good and bad will always be present and the aim is to keep the weight and the tipping point on the right side.

What is progress and what is not is a very complicated question. When we speak about software applications we are more naive than in other areas; at least my feeling is that we seldom question the progress claim of modern software. There are important differences. No one can deny that the performance of technical devices in general, and computers in particular, has increased. But what about how we use computers? And what about our user interfaces? Is it true we are making progress? I don't know.

Microsoft and Apple made the graphical user interface mainstream, influenced by ideas from Xerox PARC, which in turn drew on earlier research. If you only know the standard history, have recollections from your youth in the 90s, and remember Windows 3.1, you will be surprised to see the Smalltalk interface from 1976. Or why not Acme of Plan 9? Watch a tour of the application and you have good reasons to doubt that modern GUIs for programmers are by default better in all areas.
Even though our modern GUIs have more powerful features, are they as thought through in terms of UX as those interfaces? While we, without doubt, improve our applications generation by generation, some workflows and modern UIs are more primitive than the interfaces of old, such as the Unix systems of the 1970s or graphical environments like those of Xerox PARC. While features improve, I sometimes get the feeling that the overall philosophy of applications, and how that philosophy is materialized in concrete GUIs today, is all too often commercialized. And there is a difference, in a sense, between making money and commercialization.

This new software philosophy contrasts with the philosophies of the elders of the hacker culture. They had simple but powerful ideas and made those ideas come alive in software. This is why some of the applications of the 70s are still used: with small modifications, surely, but essentially the same. Vim, Emacs, grep, curl, etc. If the same will not be true for the applications we make today, can we speak about progress?

We must separate new features from the core of applications: their interfaces. Adding more powerful features for programmers to WebStorm, Atom, or VS Code doesn't make the interface as such more powerful. One more problem can be solved, but nothing important in how we work is changed. Nothing in how we think has changed. Interfaces in the old sense are about possibilities and means of combination. In Vim, for instance, knowing a small set of keys and the means of combining them (say 30 primitives) can produce hundreds of effects. This is possible because of a clever means of combination. Compare this with the specific key bindings of other applications, and the power of Vim's user interface shines.

Who says intuitive things need to come easy? Immediate understanding of an interface is separate from an intuitive design, a design which in theory can take a long time to master.
This is also how intuition has been used for a very long time. In modern civilization we sometimes forget that this is how people before us thought about things being 'intuitive'. A small child can't read, but should we then consider letters to be unintuitive? Bret Victor asks a similar question in Magic Ink: Information Software and the Graphical Interface:

> Some people claim that no interface can be fundamentally more "intuitive" than another, because intuition is simply a result of familiarity. But surely these people were parsing and producing complete sentences long before they could manage a dialog box. The human brain does have some hard-wiring.

Much of the old discussion around the notion of Bildung relates to the term intuition. Bildung is only secondarily about knowledge, and primarily about 'resistance'. Bildung is traditionally thought of as Sisyphean work. In this view, no one can have Bildung in general, only to a degree and under special circumstances, cultivating and replacing bad forms of intuition with better ones. This is perhaps why Bildung and the modern scientific process emerged in the same time frame. When you can just do things, you have an intuitive environment. But this says nothing about your prior training. In Vim you have to know how to combine elements, but with this knowledge you can intuitively, like using letters and the knowledge of how to read, perform very specific tasks. If you know that c stands for 'change' and $ for 'end of line', you just do 'change until end of line' (it's the expected consequence of combining the elements: c$). You don't need to look it up. It just works.
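The operator-plus-motion grammar described above can be sketched as a toy model. The key names here are standard Vim commands, but the dictionary model itself is only an illustration of the combinatorics, not of how Vim is implemented:

```python
# A toy model of Vim's grammar: a command is an operator followed by a motion.
operators = {"d": "delete", "c": "change", "y": "yank"}
motions = {"w": "to next word", "$": "to end of line", "0": "to start of line"}

# Every operator composes with every motion, so the number of usable
# commands is the *product* of the two small sets, not their sum.
commands = {op + mo: f"{op_name} {mo_name}"
            for op, op_name in operators.items()
            for mo, mo_name in motions.items()}

# 3 operators x 3 motions already give 9 commands; with ~30 primitives
# the same composition rule yields hundreds of effects.
# commands["c$"] == "change to end of line"
```

This is the contrast with applications whose key bindings are a flat list: there, each new effect needs its own dedicated binding, while here each new primitive multiplies the whole space.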
The means of combination make Vim more intuitive than most modern UIs, I think. If Unix and related interfaces were about snobbery, we would have thrown them overboard long ago; this is generally what happens to old applications with the advent of a new, better application. But because of their quality they persist, and they will most likely endure for some time more. In progress we must admit failure, iteratively take steps 'back', and have another try. Some modern applications have fantastic user interfaces. But I feel we have lost some things that should not have been forgotten. We can do better. And even when we do well, we should attempt to do better still.

> We are stuck with technology when what we really want is just stuff that works. – Douglas Adams
2021-04-22 22:16:43
https://academy.vertabelo.com/course/python-data-science/data-frames-in-pandas/series/operations-on-series
6. Operations on Series

Good job. Series are very convenient when we want to perform an operation on all elements in a certain way. Consider this example:

magic_numbers = pd.Series([100, 200, 300, 400, 500, 600])
magic_numbers / 10

The code above will divide every element of magic_numbers by 10. This means we will get:

[10.0, 20.0, 30.0, 40.0, 50.0, 60.0]

Note that because of the division, the elements were converted to floats (numbers with decimal points).

Exercise: Add five to all elements of the respondents_age Series.

Stuck? Here's a hint! Simply write respondents_age + 5 on a new line.
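For readers without pandas at hand, the element-wise behavior can be mirrored with plain Python lists (pandas vectorizes this internally, but the results are the same). The respondents_age values below are made up for illustration, since the lesson does not show them:

```python
# What pandas does element-wise, mirrored with list comprehensions.
magic_numbers = [100, 200, 300, 400, 500, 600]

divided = [n / 10 for n in magic_numbers]   # mirrors: magic_numbers / 10
# -> [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]   (division always yields floats)

respondents_age = [23, 35, 41]              # hypothetical sample data
shifted = [age + 5 for age in respondents_age]  # mirrors: respondents_age + 5
# -> [28, 40, 46]                            (integer addition stays integer)
```

The point of the Series abstraction is exactly that the loop disappears: the scalar operation is applied to every element without writing the comprehension yourself.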
2020-01-21 16:29:50
https://math.stackexchange.com/questions/2251020/computing-a-vector-with-geometrical-constraints
# Computing a vector with geometrical constraints

$A$ and $B$ are vectors with their origin at $\mathcal O$, the red dot near the bottom of the figure. The short bright purple line segments perpendicular to vectors $A$ and $B$ have magnitude $w$. Two colored line segments are drawn from the ends of the purple line segments, parallel to vectors $A$ and $B$. These line segments intersect at the point highlighted by the green circle. I wish to find the magnitude of $v$, the vector stemming from the origin and ending at the intersection of the two colored line segments. I understand that the direction of $v$ must be $$\frac{A}{\|A\|} + \frac{B}{\|B\|}$$ but I am having a very hard time finding the correct magnitude of $v$. The magnitude of $v$ should approach infinity as the angle between $A$ and $B$ approaches $0$.

## Note:

Ideally this is done with vector operations on $A$, $B$, and $w$, as I am writing software to compute $v$ given $A$ and $B$.

Answer: Let $2\theta$ be the angle between the vectors. Then the length of the two black segments from the origin to the colored lines is $w/\sin2\theta$. The magnitude of $v$ is therefore $$2{w\over\sin2\theta}\cos\theta={2w\cos\theta\over2\cos\theta\sin\theta}={w\over\sin\theta}.$$ You can compute $\sin\theta$ via either the dot product or the (two-dimensional) cross product of $A$ and $B$ and various trigonometric identities. For instance, $A\cdot B=\|A\|\,\|B\|\cos2\theta$ and $\sin\theta=\sqrt{(1-\cos2\theta)/2}$.
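The accepted formula translates directly into code. A minimal sketch (the function name is mine; it assumes 2-D vectors given as tuples and a nonzero angle between them, since $\theta \to 0$ sends the magnitude to infinity, as noted in the question):

```python
import math

def offset_intersection_magnitude(A, B, w):
    """Return |v| = w / sin(theta), where 2*theta is the angle between
    the 2-D vectors A and B, computed via the dot product and the
    half-angle identity sin(theta) = sqrt((1 - cos(2*theta)) / 2)."""
    dot = A[0] * B[0] + A[1] * B[1]
    cos_2theta = dot / (math.hypot(*A) * math.hypot(*B))
    sin_theta = math.sqrt((1 - cos_2theta) / 2)
    return w / sin_theta

# Perpendicular vectors: 2*theta = 90 degrees, theta = 45 degrees,
# so |v| = w / sin(45) = w * sqrt(2).
print(offset_intersection_magnitude((1, 0), (0, 1), 1.0))  # ~1.41421356
```

Note that scaling $A$ or $B$ leaves the result unchanged, as it should: only their directions and $w$ enter the formula.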
2019-07-19 12:12:41
https://www.escolaamiga.com.br/5vxonz/bae56e-how-to-divide-functions-long-division
25 × 1 = 25: The answer from the above operation is multiplied by the divisor. Recall that polynomials are functions of the following form: When you divide two such functions together, you get what is called a rational expression. To divide binomials, set up a long division problem the way you would with any numbers, adding any missing terms. I will then subtract the bottom line from the line on top of it like I do with regular long division. You may recall the long division algorithm for ordinary arithmetic. What is different is the long division steps. Likewise our function g(x) is not just x^2, but x^2+1. I will keep repeating the whole process until the remainder has an x with a lower exponent than the first term of my g function. ... & Calculus. Here is how the problem looks now: I stopped right here because my remainder x+7 has an exponent of 1, which is lower than the exponent of 2 in my g function of x^2+1. See how we can multiply or divide two functions to create a new function. Now multiply this term by the divisor x+2, and write the answer . flashcard set{{course.flashcardSetCoun > 1 ? Another Example. A "1" goes on top of the 132 and divide. Not sure what college you want to attend yet? Division of Rational Functions. It uses a circular pattern of comparing, multiplying, subtracting, and carrying down. | {{course.flashcardSetCount}} Biology Lesson Plans: Physiology, Mitosis, Metric System Video Lessons, Lesson Plan Design Courses and Classes Overview, Online Typing Class, Lesson and Course Overviews, Diary of an OCW Music Student, Week 4: Circular Pitch Systems and the Triad, Personality Disorder Crime Force: Study.com Academy Sneak Peek. Create your account. This lesson covers Session 8: Dividing polynomials. of the divisor to obtain -2x, and add this term to the on the top line: You have to repeat the procedure one more time. Please post your question on our Decisions Revisited: Why Did You Choose a Public or Private College? 
There is one WeBWorK assignments on today’s material: 95 goes into 132, 1 time. This is not so with functions. When dividing numbers, you generally look at and compare all the digits in the numbers. Steps 2, 3, and 4: Divide the term with the highest power inside the division symbol by the term with the highest power outside the division symbol. Let's look at our example. Learn how to solve long division with remainders, or practice your own long division problems and use this calculator to check your answers.Long division with remainders is one of two methods of doing long division by hand. S.O.S. A rational expression is … Fill in the division problem with your numbers, then click "Divide." Because we didn't, we need to put in a 0x as a placeholder just like we do with numbers. To unlock this lesson you must be a Study.com Member. Adding and subtracting functions. Need to learn how to divide functions? Remember: A decimal number, say, 3 can be written as 3.0, 3.00 and so on. 's' : ''}}. Consequently. How do you do this? Remember from long division that when writing out the answer, the remainder is written over the divisor, in our case, the g function. So function f(x) is not just x^4, but x^4+3x^2+x+9. in more detail. Using Long Division to Divide Polynomials. First, I compare the first terms in each function, the x^2 with the x^4. If there should be a remainder, it will also be shown. The step by step work reveals how to do long division between different combination of dividend and divisor. credit by exam that is accepted by over 1,500 colleges and universities. Other Stuff. This page will show you a complete "long division" solution for the division of two numbers. A cubic equation has a maximum of three distinct solutions. 95 can not go into 13 so 0 goes on top of 13. Other ways of checking include graphing both sides (if you have a graphing calculator), or plugging in a few numbers on both sides (this is not always 100% foolproof). 
The steps match the steps you take to do a long division problem with numbers. In this section you will learn how to rewrite a rational function such as. Let's use polynomial long division to rewrite Write the expression in a form reminiscent of long division: First divide the leading term of the numerator polynomial by the leading term x of the divisor, and write the answer on the top line: . Now you probably use a calculator for most division problems. Look at our g function. First, we need to set up the problem for long division. © copyright 2003-2021 Study.com. I will see what I get as a remainder. Divide the first term of the numerator by the first term of the denominator, and put that in the answer. As we’ve seen, long division of polynomials can involve many steps and be quite cumbersome. First, find the complex conjugate of the denominator, multiply the numerator and denominator by that conjugate and simplify. First multiply the numerators of the two fractions together: 2 * 7 = 14. Apply remainder with next number. Write the expression in a form reminiscent of long division: First divide the leading term of the numerator polynomial by the leading term of the divisor, and write the answer 3x on the top line: Now multiply this term 3x by the divisor , and write the answer. When written in fraction form, the expression becomes a rational expression. Adding functions. To learn more, visit our Earning Credit Page. This video is about solving long division. Mathematics CyberBoard. Do you need more help? Use the long division algorithm to divide two polynomials, determining the quotient and remainder, Understand the connection between long division, factors, and roots, and use this connection to solve problems. For example, suppose you had to divide 1/2 by 3/7. I need a 2. 
of the divisor to obtain -11, and add this term to the 3x on the top line: Then multiply "back": and write the answer In algebra, polynomial long division is an algorithm for dividing a polynomial by another polynomial of the same or lower degree, a generalised version of the familiar arithmetic technique called long division. imaginable degree, area of CCSS.Math: HSF.BF.A.1b. We’ll have to remember all those long division skills so that we can divide polynomials. We begin by dividing into the digits of the dividend that have the greatest place value. Already registered? Long division of functions uses a very similar process to long division of numbers, as we will see. Services. Because I am dealing with polynomials, I need to separate the answer terms with either a '+' or a '-' for either positive or negative values, respectively. Determine the conjugate of the denominator Remember from regular long division that the top number goes inside the division bracket. We put in the zero as a placeholder for the tens place even though the number 101 does not have a tens value. All rights reserved. We can use the factor theorem to find one factor of a cubic function, and then use polynomial long division to find the remaining factor(s). Show Instructions In general, you can skip the multiplication sign, so 5x is equivalent to 5*x. under the numerator polynomial, carefully lining up terms of equal degree: Bring down next digit 0. flashcard set, {{courseNav.course.topics.length}} chapters | Amy has a master's degree in secondary education and has taught math at a public charter high school. Working Scholars® Bringing Tuition-Free College to the Community. The whole number result is placed at the top. Multiplying both sides by the divisor yields: In this case, we have factored the polynomial , i.e., we have written it as a product of two "easier" (=lower degree) polynomials. Any remainders are ignored at this point. 
under the numerator polynomial, lining up terms of equal degree: Next subtract the last line from the line above it: Now repeat the procedure: Over 83,000 lessons in all major subjects, {{courseNav.course.mDynamicIntFields.lessonCount}}, Functions: Identification, Notation & Practice Problems, Transformations: How to Shift Graphs on a Plane, Partial Derivative: Definition, Rules & Examples, Partial Differentiation: Definition, Rules & Application, Biological and Biomedical This is the first term of the quotient. You can test out of the first two years of college and save thousands off your degree. As a member, you'll also get unlimited access to over 83,000 To divide complex numbers. just create an account. 42 − 25 = 17 Email. Learn how the process is similar to using long division for numbers. Now repeat the procedure: Combining functions. The Function Analysis Calculator computes critical points, roots and other properties with the push of a button. What is special about the way the expression above is written? Multiply your fractions. It is somewhat easier than solving a division problem by finding a quotient answer with a decimal. I will then multiply that value I got from comparing the x^2 with the 2x^2, the 2 with my g function, and write that on a new line. Simplify: {(k^2 + 6 k + 9)} / {(k^2 + 12 k + 27)}. Divide 10 by 2. Get the unbiased info you need to find the right school. Let's use polynomial long division to rewrite. Write the expression in a form reminiscent of long division: First divide the leading term of the numerator polynomial by the leading term x of the divisor, and write the answer on the top line: Now multiply this term by the divisor x+2, and write the answer. 
Study.com has thousands of articles about every Plus, get practice tests, quizzes, and personalized coaching to help you Earn Transferable Credit & Get your Degree, How to Add, Subtract, Multiply and Divide Functions, Applying Function Operations Practice Problems, Function Operation: Definition & Overview, Solving Equations & Inequalities Involving Rational Functions, Modeling With Rational Functions & Equations, Rational Function: Definition, Equation & Examples, Representations of Functions: Function Tables, Graphs & Equations, Composition of Functions: Definition & Examples, Vertical Line Test: Definition & Examples, How to Solve Logarithmic & Exponential Inequalities, What is Magmatism? under the numerator polynomial, carefully lining up terms of equal degree: Now repeat the procedure: Here are the steps in dividing polynomials using the long method: Arrange the indices of the polynomial in descending order. Learn the tips and tricks, and then try it out on our step-by-step guided examples to understand the concept. Replace the missing term (s) with 0. Our long division properly set up looks like this: Look at the zero values now. For the number one hundred and one we write it out as 101 and not 11. You can divide two functions that are polynomials. Select a subject to preview related courses: We've added our zero values where they need to go. Divide Two Numbers - powered by WebMath. This is most clear in some example problems. Google Classroom Facebook Twitter. My g function has three terms and my remainder only has one term, so I will bring the next two terms down. Anyone can earn To divide polynomials using long division, first divide the first term of the dividend by the first term of the divisor. It's a bit easier with functions actually as you are only concerned about the first terms at every step. Log in here for access. Bring down next number to get 370. 
Before we begin the long division process, I want to point out to you one difference between dividing functions and dividing numbers. Enrolling in a course lets you earn progress by passing quizzes and exams. Pre-Calculus - How to divide polynomials using long division You are done! is called the remainder. Multiplying and dividing functions. Let's see how it works by dividing function f by function g. Remember, our function includes all the terms and not just the first term. Next multiply (or distribute) the answer obtained in the previous step by the polynomial in front of the division symbol. I write that number on the top line on top of the x^2 terms because that is where I am in my long division. Let's create a formula in cell B2 that divides the data in cell A2 … Visit the College Algebra: Help and Review page to learn more. For example, put the dividend under the long division bar and the diviser to the left. Intro to combining functions. Even though we concern ourselves with only the first terms at every step, as we go along, we will have taken care of all the terms by the time we are done. The remainder is the last line: 28x+30, and the quotient is the expression on the very top: 3x-11. In this case, we should get x 3 /x 2 = x and x (x 2 + x – 6). Sociology 110: Cultural Studies & Diversity in the U.S. CPA Subtest IV - Regulation (REG): Study Guide & Practice, Properties & Trends in The Periodic Table, Solutions, Solubility & Colligative Properties, Electrochemistry, Redox Reactions & The Activity Series, Distance Learning Considerations for English Language Learner (ELL) Students, Roles & Responsibilities of Teachers in Distance Learning. Think about dividing polynomials as long division, but with variables. courses that prepare you to earn My answer is x^2, and so I will write that on top of the x^4. Repeat division to last number. 
Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. 30 chapters | Divide x2 – 9x – 10 by x + 1 Think back to when you were doing long division with plain old numbers. divide. method of dividing polynomials using long division with polynomials digits in the zero values.... Write it out on our step-by-step guided examples to understand the concept point is 2x^2 us ... Our Calculator does polynomial long division 25 × 1 = 25: the from. And next digit after decimal is 0, we need to add this lesson to a Custom Course how... A divisor, and personalized coaching to help you succeed the page, or contact customer support with numbers... You will learn how the process is also called INVERT and multiply. 3.0 3.00. Have the greatest place value 6 k + 27 ) } together: 2 * 7 =.! The steps match the steps match the steps you take to do long division of functions you test! Rational number division ( division of two polynomials polynomial with equal or lower degree where I am in my division! Info you need to set up a long division with polynomials did you Choose a public or private?! X^2 position or divide two numbers, a dividend and divisor to a Custom Course with to get x^4 to. We begin the long method: Arrange the indices of the 132 divide! Must use long division works how to divide functions long division dividing polynomials for the number does not have a zero in the next,., first divide the following 2 complex numbers $\frac { how to divide functions long division } { 7 + 4i } step! Revisited: why did you Choose a public or private college of three distinct solutions ) the.! For the tens place even though the number does not have a tens value because that is where I in. Function such as 3 can be written as 3.0, 3.00 and so.! Any numbers, when you read or hear about dividing polynomials as long division process, recall the at! 
# How to Divide Functions Using Long Division

When you read or hear about "dividing functions," the quotient (f/g)(x) = f(x)/g(x) becomes a rational expression. If both functions are polynomials, you can carry out the division with polynomial long division, which uses the same divide, multiply, subtract, and bring-down steps as the long division algorithm for ordinary numbers.

The steps:

1. Arrange the terms of both polynomials in descending order of exponents, and write a zero coefficient for any missing term. Just as the number one hundred and one is written 101 and not 11, with a 0 holding the tens place even though there are no tens, a dividend with a missing term needs an explicit zero value in that place.
2. Set up the problem the same way you would with numbers: the dividend (the polynomial being divided) goes inside the division bracket, and the divisor goes outside, with a line on top for the answer.
3. Divide the first terms at every step. For example, x^3/x^2 = x, so x goes on top. At each step, ask yourself: "What do I need to multiply the divisor's leading term by to get the current leading term?"
4. Next multiply (or distribute) that result over the entire divisor, subtract, and bring down the next term, just as you would carry down digits in numeric long division.
5. Repeat until what remains has degree smaller than the divisor's; that leftover is the remainder. For instance, if dividing a cubic by x^2 + x - 6 leaves 3x - 11 on top with 28x + 30 left over, the division stops there, because the remainder 28x + 30 has degree 1, which is less than the divisor's degree 2. The answer is then written 3x - 11 + (28x + 30)/(x^2 + x - 6).

To check your answer, multiply the quotient by the divisor and add the remainder; you should recover the dividend. If the remainder is 0, the divisor divides evenly into the dividend, which is the same as saying that any root of the divisor is also a root of the dividend (a value x for which P(x) = 0).

Some related divisions don't need long division at all. A rational expression may simplify by factoring: {(k^2 + 12k + 27)}/{(k^2 + 6k + 9)} = {(k + 3)(k + 9)}/{(k + 3)^2} = (k + 9)/(k + 3). To divide ordinary fractions, the typical procedure reminds us to "never mind the reason why, just invert and multiply." And to divide by a complex number such as 7 + 4i, find the complex conjugate of the denominator, 7 - 4i, multiply both numerator and denominator by that conjugate, and simplify.
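The divide-multiply-subtract loop of polynomial long division is mechanical enough to sketch in code. The helper below is a hypothetical illustration (the name `poly_divmod` is my own, not from any library); it represents each polynomial as a list of coefficients in descending order of degree, with explicit zeros for missing terms, mirroring the placeholder step above.

```python
def poly_divmod(dividend, divisor):
    """Divide two polynomials given as coefficient lists, highest degree first.

    Returns (quotient, remainder). Missing terms must appear as explicit
    zero coefficients, just like the placeholder step in long division.
    """
    out = list(dividend)
    d = len(divisor) - 1          # degree of the divisor
    lead = divisor[0]             # leading coefficient of the divisor
    for i in range(len(dividend) - d):
        factor = out[i] / lead    # divide the leading terms
        out[i] = factor
        if factor != 0:
            # multiply the divisor by `factor` and subtract it off
            for j in range(1, len(divisor)):
                out[i + j] -= factor * divisor[j]
    # the first entries are the quotient, the trailing ones the remainder
    return out[:len(dividend) - d], out[len(dividend) - d:]

# (x^3 + 0x^2 - 7x - 6) / (x + 2) = x^2 - 2x - 3, remainder 0
q, r = poly_divmod([1, 0, -7, -6], [1, 2])
```

As the check step suggests, multiplying the quotient back by the divisor and adding the remainder reproduces the dividend; a remainder of all zeros means the divisor divides evenly.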
https://economics.stackexchange.com/questions/42167/general-equilibrium-with-perfect-substitutes/42168
# General Equilibrium with Perfect Substitutes

I came across the following problem:

The quantities of an economy’s only two goods are denoted by $$X$$ and $$Y$$; no production is possible. Ann’s and Ben’s preferences are described by the utility functions $$u_A(x,y) = X+Y$$ and $$u_B(x,y) = XY$$. Ann owns the bundle $$(0,5)$$ and Ben owns the bundle $$(30,5)$$. Determine the Walrasian equilibrium price(s) and allocation(s).

It is quite easy to determine B's offer curve (with the price of $$Y$$ ($$p_2$$) normalised to 1): $$OC_B = \left(\frac{30p_1 + 5}{2p_1}; \frac{30p_1 + 5}{2}\right)$$

However, I don't know how to proceed now, since consumer A is indifferent between good X and good Y. The general Marshallian demands for a utility function of the form $$u_A(x,y) = \alpha X+ \beta Y$$ (if we assume that the consumer consumes equal quantities of both goods when $$p_1 = \frac{\alpha}{\beta}p_2$$) are the following: $$X^{M} = \left\{ \begin{array}{ll} m/p_1 & \mbox{if } p_1 < \frac{\alpha}{\beta}p_2 \\ m/2p_1 & \mbox{if } p_1 = \frac{\alpha}{\beta}p_2 \\ 0 & \mbox{else} \end{array} \right.$$ and $$Y^{M} = \left\{ \begin{array}{ll} m/p_2 & \mbox{if } p_1 > \frac{\alpha}{\beta}p_2 \\ m/2p_2 & \mbox{if } p_1 = \frac{\alpha}{\beta}p_2 \\ 0 & \mbox{else} \end{array} \right.$$

So if we replace exogenous income with the value of the endowment, $$\omega = e_1p_1 + e_2p_2$$, and normalize the price of Y to 1 again, our offer curve would look exactly the same as the one of consumer B (since $$\alpha = \beta = 1$$), because we are in the case where consumer A splits his consumption equally (by assumption), since we have $$MRS_A = \frac{1}{1} = 1$$. And since $$MRS = \frac{p_1}{p_2}$$ in equilibrium, we must have $$p_1 = p_2 = 1$$ if I'm not mistaken. $$OC_A = \left(\frac{30p_1 + 5}{2p_1}; \frac{30p_1 + 5}{2}\right)$$

Solving for the Walrasian equilibrium, we would get $$p_1 = 1/6$$, which corresponds to our price ratio, since $$p_2 = 1$$.
This somehow doesn't seem right to me, because the fact that consumer A wants to consume both goods equally if $$p_1 = \frac{\alpha}{\beta}p_2$$ is just an assumption that we have made to be able to define that case. In reality, however, consumer A is totally indifferent between goods X and Y, so he can freely switch between the two and does not have to consume them $$50:50$$. What also makes this approach very confusing to me is the fact that the prices are endogenous within this framework. The prices should be able to adjust freely according to demand, which makes the above definition of the Marshallian demands redundant.

My second question: given this logic, would it even make sense to look at what happens if $$p_1 \neq \frac{\alpha}{\beta}p_2$$? As I said, the prices are endogenous, so we cannot impose certain prices at the outset. How do you approach such a problem with perfect substitutes in general, then? Say we have, for example, $$u_A(x,y) = 2X+3Y$$ and $$u_B(x,y) = XY$$. Does the result change?

To solve for the competitive equilibrium, we can first find the demands: Demand for commodity $$X$$ by A is $$x_A = \frac{5}{p_x}$$ if $$p_x < 1$$, $$x_A \in [0,5]$$ if $$p_x = 1$$, and $$x_A = 0$$ otherwise. Demand for commodity $$X$$ by B is $$x_B = \frac{(30p_x+5)}{2p_x}$$. Now we can equate demand and supply and solve for $$p_x$$: $$x_A + x_B = 30$$ yields $$p_x = \frac{1}{2}$$.
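The market-clearing step can be checked numerically. The sketch below is a minimal illustration (the function name and the bisection search are my own, not from the post): it normalizes $$p_y = 1$$ and looks for the price at which aggregate demand for X equals the total endowment of 30. The indifferent case $$p_x = 1$$ is ignored, since the root turns out to lie strictly below 1.

```python
def excess_demand_x(px):
    """Aggregate excess demand for good X with p_y normalized to 1.

    Ann (u = X + Y, endowment (0, 5)): for p_x < 1 she spends her whole
    wealth of 5 on X.
    Ben (u = XY, endowment (30, 5)): Cobb-Douglas preferences, so he
    spends half his wealth 30*p_x + 5 on each good.
    """
    x_ann = 5 / px if px < 1 else 0.0
    x_ben = (30 * px + 5) / (2 * px)
    return x_ann + x_ben - 30  # total endowment of X is 30

# Bisection for the market-clearing price on (0, 1):
# excess demand is decreasing in p_x, so the root is unique.
lo, hi = 1e-6, 0.999
for _ in range(60):
    mid = (lo + hi) / 2
    if excess_demand_x(mid) > 0:   # price too low: excess demand
        lo = mid
    else:
        hi = mid
print(round((lo + hi) / 2, 4))  # 0.5
```

At $$p_x = 1/2$$, Ann demands 10 and Ben demands 20, which exactly exhausts the economy's 30 units of X.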
http://www.uvm.edu/storylab/category/psychology/
# The Ferguson protests: Quantifying state-level sentiment on Twitter Reporting on the August 9, 2014 shooting death of Michael Brown, David Carr concluded his August 17 piece for the New York Times by observing that “nothing much good was happening in Ferguson until it became a hashtag”. Following the story’s rise and spread on Twitter, the protests in Missouri swiftly captured the news cycle in the U.S., and brought into focus the consequences of militarization and racial inequality in police forces throughout the country. Using our new and improved instruments at hedonometer.org, we can strongly quantify and visualize the texture of sentiment surrounding the protests on Twitter.  Over the course of a week starting Wednesday, August 13, our all-of-Twitter time series dipped several times, and we saw a large increase in negative words related to events in Ferguson as viewed in the word shift below.  Because we currently break tweets into individual words, “tear” is separated from “tear gas” but is still a negative term that rises to the top (we will move to phrases in a major future update of hedonometer). Click on the graphic below for an interactive version of the word shift or here. In comparing the week of August 13 to 20 with the last 90 days, we see Missouri’s happiness ranking dropped from 18th to 32nd. The geography of happiness for the U.S. is remarkably stable, and this is the first time we’ve observed such a large, rapid change for a state ranking. Over the 7 days of August 15 to 21, the positive words “lol”, “hahaha”, and “laughing” have been used relatively less frequently in Missouri than in the entire U.S., and the negative words “racist”, “violence”, and “protest” have been used relatively more frequently. Click here or on the image above to explore our sentiment map. We’re in the process of building interactive sentiment maps for other languages and at scales of cities, countries, and regions.   
Our hope is that through hedonometer, anyone will be able to make and share geographically localized observations of crowd-sourced public opinion, and generate a defensible quantification of the collective conversation on Twitter and elsewhere.

# How our storytelling nature means we deeply misunderstand the mechanics of fame (and much else…)

Should the Mona Lisa be our most famous painting? Was Harry Potter destined to (repeatedly) sweep the globe? What would happen to everyone and everything famous if we ran the experiment that is our world over again? Find out why fame is truly unpredictable, how it lives and dies entirely in our social stories, and why “… there is no such thing as fate, only the story of fate” in a current Nautilus Magazine piece by the Computational Story Lab’s co-team leader Peter Dodds: “Homo Narrativus and the Trouble with Fame: We think that fame is deserved. We are wrong.” Nautilus is a new, design-driven publication on science published both online (free) and in print (unfree). The Nautilus team is creating a beautiful showcase for scientific knowledge, and we encourage you to explore everything they have on offer.

# How does movement influence your daily happiness?

Imagine commuting an hour to work, one way, grinding through miles of traffic to get from your suburban home to a desk job in the big city. Excited yet? Ok, now imagine that you lead a life of leisure traveling the world. You fly coast-to-coast to see a concert, soak in some culture, and drink fine wine. Does this lifestyle seem more appealing? Let’s try to quantify the influence of these travel patterns on individual happiness. We do this using geolocated tweets, which we have previously used to reveal the happiness of cities, and to quantify patterns of movement.

Each point corresponds to a geo-located tweet from 2011. (A) USA (B) Washington, D.C. (C) Los Angeles (D) Earth

First, we find the average location of each individual’s tweets. We call this their expected location.
Then we draw circles emanating from this spot, like rings on a dart board. Some messages are written close to home, others from very far away. Then we collect all of the words written at each distance, roughly 500,000 tweets per ring. Averaging the happiness of words found at each distance, remarkably we find that happiness increases logarithmically with distance from expected location. Tweets authored far from home contain a smaller number of negative words.

Tweets are grouped into ten equally populated bins by the distance from their author’s average location, and the average happiness of words written at each distance is plotted. Expressed happiness grows logarithmically with distance from home.

Home is where the hate is? What? No. Below we look at the difference between the happiest and saddest distances from home. Words appearing on the right increase the happiness of the 2500km distance relative to the 1km distance. For example, tweets authored far from an individual’s expected location are more likely to contain the positive words ‘beach’, ‘new’, ‘great’, ‘park’, ‘restaurant’, ‘dinner’, ‘resort’, ‘coffee’, ‘lunch’, ‘cafe’, and ‘food’, and less likely to contain the negative words ‘no’, ‘don’t’, ‘not’, ‘hate’, ‘can’t’, ‘damn’, and ‘never’ than tweets posted close to home. Words going against the trend appear on the left, decreasing the happiness of the 2500km distance group relative to the 1km group.

Word shift graph comparing the lowest average word happiness distance group to the words authored farthest from home. Tweets written close to home are more likely to contain the positive words ‘me’, ‘lol’, ‘love’, ‘like’, ‘haha’, ‘my’, ‘you’, and ‘good’. Moving clockwise, the three insets show that the two text sizes are comparable, the biggest contributor to the happiness difference is the decrease in negative words authored by individuals very far from their average location, and the 50 words listed make up roughly 50% of the total difference between the two bags of words.
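The bin-and-average procedure described above can be sketched in a few lines. Everything in this snippet is illustrative: the tiny lexicon stands in for the full labMT word list, and for simplicity the bins are log-spaced in distance rather than the equally populated bins used in the actual analysis.

```python
import math
from collections import defaultdict

# Hypothetical word-happiness scores standing in for the labMT lexicon.
HAPPINESS = {"beach": 8.0, "dinner": 7.0, "hate": 2.5, "no": 3.5, "coffee": 7.2}

def mean_happiness_by_distance(tweets, n_bins=3, max_km=3000):
    """Average word happiness in log-spaced distance-from-home bins.

    `tweets` is a list of (distance_km, text) pairs; words missing from
    the lexicon are simply ignored, as in the hedonometer approach.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for dist, text in tweets:
        # log-spaced bins mirror the logarithmic trend in the plot
        b = min(n_bins - 1,
                int(n_bins * math.log10(1 + dist) / math.log10(1 + max_km)))
        for w in text.lower().split():
            if w in HAPPINESS:
                sums[b] += HAPPINESS[w]
                counts[b] += 1
    return {b: sums[b] / counts[b] for b in sums}

# Far-from-home tweets about beaches and dinner score happier than
# close-to-home tweets containing negative words.
scores = mean_happiness_by_distance([(1, "no hate"), (2500, "beach dinner")])
```

The real analysis also weights by word frequency over roughly 500,000 tweets per ring; this sketch just shows the core averaging step.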
For you visual learning folks, here is a short video explaining how these word shifts work. Take home story: people tweeting far from home talk about food more, and they swear less than people tweeting close to home. These people are probably enjoying awesome vacations, and tweeting about it! In summary, if you are a fellow with a daily commute that makes you feel a little bit sad, you are not alone! Try swearing less. Or ride your bike. If you are lucky enough to travel often, then keep smiling…maybe send the rest of us some pictures to cheer us up! For more details on our analysis, check our paper “Happiness and the Patterns of Life: A Study of Geolocated Tweets” recently published in Nature Scientific Reports.

# Now Published: The Geography of Happiness

Today we’re pleased to announce that our article “The Geography of Happiness: Connecting Twitter sentiment and expression, demographics, and objective characteristics of place” has been officially published by PLoS ONE. We wanted to tell you about one key piece we’ve added to the paper and an unusual new Twitter account we’ve created. After our three blog posts (which coincided with the release of the preprint), we received plenty of media attention, as well as some fantastic feedback from readers (thanks!). One very important question kept coming up: “How well does happiness agree with other measures of well-being?”, or more simply: “Why should we believe you?” Well, we’re glad you asked.
For the final paper, we’ve added a US state-level comparison between our happiness measure and five other kinds of well-being indices:

• the Behavioral Risk Factor Surveillance Survey (BRFSS), for which people were asked to rate their life satisfaction on a scale of 1 to 4 (the BRFSS was explored in this Science paper on well-being from a few years back);
• Gallup’s health survey-based well-being index;
• the Peace Index, which aggregates various crime data;
• the America’s Health Ranking, which aggregates health data; and
• gun violence, specifically the number of shootings per 100,000 people.

In the figure below, we show a series of scatter plots comparing all pairs of well-being metrics (happiness runs along the top row). Each dot represents a US state, and the colors represent strength of correlation or agreement between measures, with blue meaning strong agreement, and red representing no (statistically significant) agreement. (We include the exact Spearman correlation coefficient and p-value in each scatter plot.)

Scatter matrix showing comparison between different well-being metrics for all US states. The top row shows comparisons with happiness. Colors indicate the strength of correlation between pairs of metrics; shades of blue indicate increasingly significant correlation.

Looking at the top row, we can immediately see that happiness agrees with all measures except for the BRFSS. However, the BRFSS itself doesn’t agree with any other measure except for the Gallup well-being index. The most striking departure was the BRFSS ranking Louisiana as the happiest state whereas our happiness measure placed it last. There are a number of possible explanations for these disagreements: one is that the BRFSS data was taken between 2005 and 2008, while all other data is from 2011 only; another is that unlike the other measures, happiness is self-reported in the BRFSS. How would you answer if asked how happy you are?
Do you expect that your answer is representative of the population you live in at large? There are certainly many different ways to define “happiness”, as a number of different readers have pointed out. Of course, this is not to criticize the BRFSS (it remains a significant data source, and Oswald & Wu did fine work analyzing it in their Science paper), but merely to suggest that our word happiness score is measuring something different but perhaps complementary to traditional survey-based techniques. There certainly appears to be plenty of value to observing people “in the wild” via social network data, e.g. with the real-time instrument hedonometer.org. Finally, to celebrate the publication of our article we created a Twitter feed, @geographyofhapp, dedicated to tweeting the happiest and saddest city every day, and we invite you to follow. We’re hoping that this is the first research article with its own Twitter account, but perhaps not hoping that it represents the future of scientific publishing…

# Now online: the Dow Jones Index of Happiness

Total excitement people: our website hedonometer.org has gone live. We’re measuring Twitter’s happiness in real time. Please check it out! If you’re still here, here’s the blurb from the site’s about page: Happiness: It’s what most people say they want. So how do we know how happy people are? You can’t improve or understand what you can’t measure. In a blow to happiness, we’re very good at measuring economic indices and this means we tend to focus on them. With hedonometer.org we’ve created an instrument that measures the happiness of large populations in real time. Our hedonometer is based on people’s online expressions, capitalizing on data-rich social media, and we’re measuring how people present themselves to the outside world. For our first version of hedonometer.org, we’re using Twitter as a source but in principle we can expand to any data source in any language. We’ll also be adding an API soon.
So this is just a start – we invite you to explore the Twitter time series, let us know what you think, and follow the daily updates through the hedonometer Twitter feed.

# How does food (or talking about food online) relate to how happy you are?

This is part 3 of our series on the Geography of Happiness. Previously we’ve looked at how happiness varies across the United States (as measured from word frequencies in geotagged tweets), and then at how different socioeconomic factors relate to variations in happiness. Now we focus in on one particular important health factor that might influence happiness: obesity. We looked at how happiness varied with obesity across the 190 largest metropolitan statistical areas in the United States, giving us the following scatter plot:

Each point represents one city; for example, the city with both(!) lowest obesity and greatest happiness in this set is Boulder, CO, located at the top left. The red line is a linear trend through the data (a line of best fit). Again, for the mathematically minded onehappybird watchers, we show the Spearman correlation coefficient and its corresponding p-value at the lower left. We do this to convince you that there is, in fact, a statistically significant downward trend in the blob of points in the picture! The big story here is of course that as obesity goes up, happiness goes down.

The natural next question to ask is: are there any words which could be indicators of obesity? What foods are people in obese cities eating, or talking about? To answer this question we correlated word frequencies with obesity, and searched for the most strongly correlating food-related words. Below are two examples: on the left, “mcdonalds”, and on the right, “cafe”. As obesity goes up, so does talk (at least on Twitter) about McDonalds, but talk about cafes follows the opposite trend! Does that mean that in order to lose weight we should spend more time sipping lattes in cafes? I wish.
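The word-versus-obesity correlations above are Spearman correlations: rank both series, then take the Pearson correlation of the ranks. Below is a self-contained sketch of that computation with invented numbers (the real analysis used 190 cities and actual word counts, and would typically call a library routine such as SciPy's `spearmanr`).

```python
def rank(xs):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):        # tied entries share the mean rank
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented data: city obesity rate vs. per-capita tweets containing "cafe".
obesity = [20, 24, 27, 30, 33]
cafe_freq = [9.1, 7.4, 5.0, 4.2, 3.3]
print(spearman(obesity, cafe_freq))  # ≈ -1.0: a perfectly monotone decrease
```

Rank correlation only cares about monotone ordering, which makes it robust to the heavy-tailed frequencies typical of word-count data.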
Looking through the list of words, the top 5 food-related words that increased in frequency as obesity went up were:

1. mcdonalds
2. eat
3. wings
4. hungry
5. heartburn

We were surprised by ‘hungry’! On the other hand, the top food-related words which were used more as obesity went down were:

1. cafe
2. sushi
3. brewery
4. restaurant
5. bar

Perhaps unsurprisingly, these are words typically used by the high-socioeconomic group described in our previous post on city happiness, suggesting that better health correlates with higher socioeconomic status. You can find the complete list of how all words correlate with happiness here (page best viewed using Google Chrome). One surprising result was the observation that far more food-related words appeared in the low-obesity group than in the high-obesity group; in other words, food was being talked about more in the less-obese cities! Summarizing: based on word usage, the Twitter diet consists of: breakfast at your favorite cafe, a delicious sushi lunch, dinner out at a fancy restaurant, with a nightcap at the best local bar or brewery. Thank you Twitter, don’t mind if I do.

All jokes aside, this sort of technique has great potential. Imagine being able to predict whether obesity was going to rise or fall in a city, or estimate changes in other demographics, just by analyzing the words people use online. Perhaps New York City Mayor Michael Bloomberg would find some early indicators of the success or failure of his war on soda! And that’s all for this series of posts on the geography of happiness. More information on all of the results in this series can be found in our recently submitted arxiv paper. Please take a look at it and the accompanying online appendices, where you can look through all of the data yourself. As a special bonus feature, you can check out this video of me talking about this work at our recent TEDxUVM conference. Thanks for reading!

# What makes a city happy?

Welcome back, onehappybird watchers!
Wow, what a crazy week of coverage of our post about how happiness varies by city and state across the United States. Many, many people read, shared, and commented on the post, for which we are grateful. For the detailed explanation of the results, check out the full paper we recently submitted to PLoS ONE. A number of readers wondered how variations in happiness relate to different underlying social and economic factors. To try to answer this question, we took data from the 2011 census (all helpfully available online on the Census Bureau’s American FactFinder website) and correlated it with our measure of happiness. Surprisingly, happiness generally decreases with the number of tweets per capita in a city (this doesn’t mean that tweeting more will make you less happy, it’s only a correlation): Next, we grouped covarying demographic characteristics obtained from the census, and looked at how these clusters varied with happiness. For example, it might not surprise you that cities with a larger percentage of married couples also contain a larger percentage of children – this is what we mean by covarying demographics.  And you might or might not be surprised that more marriage is positively correlated with happiness.  There’s plenty of scatter but the connection is there: Scatter plot of happiness vs. percentage of population married. Each dot represents one city, the rho and p-values reported are Spearman correlations. We used an automated algorithm to bin the census data for us into eight groups and then compared the happiness of those groups, leading to the following figure: Each point represents a characteristic from the census (for example, the % married/happiness plot above is now represented by one point in this figure), with the horizontal groupings representing covarying demographic characteristics. A point’s position on the vertical axis shows how that characteristic varies with happiness across all cities. 
A positive value means that happiness is higher in cities where that characteristic is higher, while a negative value means that happiness is lower in cities where that characteristic is higher. For example, the figure shows that as the percentage of married couples in a city increases, so does the average happiness of that city (no causality is implied). Only two groupings (the colored dots on the far left and right) showed strong correlation (either positive or negative) with happiness. Looking at which characteristics make up these groups, it appears that the general story here is a socioeconomic one, and one that holds only at the extremes. With our peculiar Twitter-based lens, we see money statistically correlates with happiness, which is not quite as catchy as “money buys happiness” (see the debate over the Easterlin Paradox for more). You can delve into the data yourself – the correlations of all 432 characteristics of cities recorded by the census with happiness can be found here (page best viewed using Google Chrome).

A more interesting question might be how word usage varies with different demographics – to do this we correlated each word with each demographic characteristic across all 373 cities in our dataset, leading to a lot of data to sift through! (And you can too, by following the link in the above paragraph.) As an example, take a look at how the word “cafe” varies with the percentage of population with a college degree:

Each point in the figure represents one city, and broadly the trend is that the more “college-y” the city is, the more people talk about cafes online. (You can decide for yourself whether that’s surprising or not.) The top 10 emotive words whose usage went up as the percentage of population with a college degree went up turned out to be:

1. cafe
2. pub
3. software
4. yoga
5. grill
6. development
7. emails
8. wine
9. art
10. library

And the emotive words which went up as college degrees went down?

1. me
2. love
3. my
4. like
5. hate
6. tired
7. sleep
8. stupid
9. bored
10. you

We saw similar patterns of word use across many socioeconomic characteristics – emotive words and words about interpersonal relationships (‘me’ and ‘you’) at one end of the spectrum, and words about more complex social or intellectual themes at the other. Interestingly, we find more food-related words in this group as well. Of course, all of this is open to interpretation. As many commenters last week pointed out, Twitter users (indeed, specifically those users who geotag their tweets using a mobile device) are a small, non-representative sample of the global population. Furthermore, our method is undeniably crude, and by breaking texts up into their constituent words ignores the context in which those words were used. That said, many of these results agree with our intuition (for example, many of the cities with low happiness scores also appeared on a list of America’s “most miserable cities” published late last week by Forbes), while some surprise us. There is certainly a lot to be learned by looking at what the data can tell us, and we encourage you to do so by exploring our website of supplementary data. Again, you can read the full technical details in our research paper here. We’ll pick up on the theme of food again in our next post, which will focus on one important health factor relating to happiness – obesity.

# Where is the happiest city in the USA?

(Update: this work is now published at PLoS ONE)

Is Disneyland really the happiest place on Earth?* How happy is the city you live in? We have already seen how the hedonometer can be used to find the happiest street corner in New York City, now it’s time to let it loose on the entire United States.
We plotted over 10 million geotagged tweets from 2011 (all our results are in this paper, also on the arxiv), coloring each point by the average happiness of nearby words (detail on how we calculate happiness can be found in this article published in PLoS ONE): As well as cities and the roads between them, we can make out many regions of higher and lower happiness, even within individual cities. As an example, check out this tweet-generated map of the city of Chicago: Tweet-generated map of Chicago. Click to enlarge. Notice the striking contrast between the relatively happy Central/North Side of the city, and the sadder South Side. You can also find a few airports in this map, and if you look very closely you might even be able to pick out happy and sad terminals! To quantify this variation in happiness a bit better, let’s look at the average happiness of each state: Southern states tend to produce sadder words than those in northern New England or out west. Hawaii emerges as the happiest state and Louisiana as the saddest, due to relative differences in the frequencies of happy and sad words used in each state. Here at onehappybird, we characterize such differences by “word shifts”, which are basically word clouds for grown-ups. You can find examples of these, as well as the full list of the average happiness of each state, here (page best viewed using Google Chrome). Zooming in further to the level of cities, we produced a similar list for 373 cities in the lower 48 states (you can find the full list, as well as maps and word shifts for each city, here). With a score of 6.25, we found the happiest city to be Napa, CA, due to a relative abundance of such happy words as “restaurant”, “wine”, and even “cheers”, along with a lack of profanity. At the other end of the spectrum, we found the saddest city to be Beaumont, TX, with a score of 5.82. 
In general, cities in the south tended to be less happy than those in the north, with a major contributing factor being the relative abundance of profanity used in those cities. We can go even further than this, and group cities by similarities in word usage. Each square in the heatmap below represents the similarity (Spearman correlation for you mathematically minded onehappybird watchers) between word distributions for the largest cities in the US. Red squares mean that the corresponding cities use words in a similar fashion, while blue means that those cities tend to use different types of words with respect to each other. The colors in the tree diagram at the top signify clusters of cities exhibiting similar word usage (below a certain threshold). As we might expect for two cities that are geographically nearby, New Orleans and Baton Rouge are clumped together at the bottom right of the figure. On the other hand, New York and Seattle get clumped together as well, suggesting that similarities in language depend on more than just geographical proximity. You can find more information about happiness and cities, as well as details on the methods used to produce these results, in our arxiv research article. In our next post, we’ll look at how these results are related to various underlying socioeconomic characteristics of cities. What makes a city happy or sad? Can we use Big Data to predict future changes in the demographics, health, or happiness of a city? How does happiness relate to the food you eat?

*By the way, to answer the question at the start of this post: According to this analysis Disneyland is not the happiest place on Earth; it isn’t even the happiest place in Southern California! See if you can find it in this tweet-generated map of LA! Or find your city here.

# The Daily Unraveling of the Human Mind

Each morning we find ourselves in wide flung arms of drowsy possibilities.
Cradled by the warm embrace of our beds, we begin our day, rebooted and rejuvenated. Having not eaten for a full eight hours, we can enjoy a guilt-free breakfast, setting a blissful tone for the day.

Hourly frequency of meal references on Twitter. See figure 1, page 3 of our paper for details.

Last night’s dreams of victory and triumph bolster our delusions of adequacy, preparing us to surmount any of life’s challenges. But the moment we step outside, reality commences its slow and insidious descent. Its weight, compressing our spine, crushing our dreams, alters the course of the day completely. The soul-crushing litany of work, interacting with people, and generally living our lives takes its toll. As our sanity unravels, apathy takes root. The profane becomes our standard of expression. In the throes of despair, we swear just to feel something. We swear increasingly as we realize the inevitability of repeating this all again tomorrow. F***, that’s a terrifying thought.

This ephemeral pattern is reflected in our tweets, our spontaneous bursts of being. Below, we see that our happiness peaks during the early hours of the day and degrades as the hours progress (yellow circles). The proportion of profanity in our tweets, however, follows a reverse cycle. Profanity appears in a smaller percentage of tweets at the start of each day, and increases gradually as time wears on.

Daily Unraveling. See figure 10, page 15 of our paper for details.

Remarkably, the relative frequencies of these five expressions of frustration (a******, b****, s***, f***, m***********) are quite similar. Well done, humans. To avoid experiencing the daily unraveling, we recommend eating organic, local dark chocolate all day long.

# If you’re happy and we know it … are your friends?

Do your friends influence your behavior? Of course they do. But it’s hard to actually measure their influence.
Social contagion is difficult to distinguish from homophily, the tendency we have to seek relationships with people like ourselves. In response to the “happiness is contagious” phenomenon promoted by Nicholas Christakis and James Fowler, we here at onehappybird were wondering whether happy Twitter users were more likely to be connected to each other.  In other words, is happiness assortative in the Twitter social network?  (See related work here.) In the image below, each circle represents a person in the social network of the center node.  We color nodes by the happiness of their tweets during a single week.  Pink colors are happier, gray colors are sadder, and nodes depicted with the color black did not meet our thresholding criteria (50 labMT words). We established a friendship link between two users if they both replied directly to the other at least once during the week. As users are added to this network, it quickly becomes difficult to tell whether pink nodes are disproportionately connected to each other, so instead we look at the correlation of their happiness scores.  The plot below shows the Spearman correlation coefficient of the happiness ranks for roughly 100,000 people, with blue squares and green diamonds indicating different word thresholds, and red circles representing the same network but with randomly shuffled happiness scores. The larger correlation for friends indicates that happy users are likely to be connected to each other, as are sad users. Moving further away from one’s local social neighborhood to friends of friends, and friends of friends of friends, the strength of assortativity decreases as expected. We also looked at the average happiness of users as a function of their number of friends (degree k). Happiness increases gradually with popularity, with large degree nodes demonstrating a larger average happiness than small degree nodes. 
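The assortativity measurement described above boils down to rank correlation. As a rough sketch (not the paper's code), here is a stdlib-only Spearman correlation: rank each user's happiness score (average ranks for ties), then take the Pearson correlation of the ranks.

```python
def _ranks(xs):
    # Assign 1-based ranks, giving tied values their average rank.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Happiness of users vs. happiness of their friends (made-up numbers):
print(spearman([6.1, 5.8, 6.4, 5.9], [6.0, 5.7, 6.3, 6.1]))
```

A positive value, as in the plots above, indicates that happier users tend to have happier friends.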
The most popular users used words such as “you,” “thanks,” and “lol” more frequently than small degree nodes, while the latter group used words such as “damn,” “hate,” and “tired” more frequently.  The transition appears to occur near Dunbar’s number (around 150), demonstrating a quantitative difference between personal and professional relationships. Finally, here we show a visualization of the reciprocal-reply network for the day of October 28, 2008. The size of the nodes is proportional to their degree, and colors indicate communities detected by Gephi’s community detection algorithm. For more details, see the publication: C. A. Bliss, I. M. Kloumann, K. D. Harris, C. M. Danforth, P. S. Dodds.  Twitter Reciprocal Reply Networks Exhibit Assortativity with Respect to Happiness. Journal of Computational Science. 2012. [pdf] Abstract: Based on nearly 40 million message pairs posted to Twitter between September 2008 and February 2009, we construct and examine the revealed social network structure and dynamics over the time scales of days, weeks, and months. At the level of user behavior, we employ our recently developed hedonometric analysis methods to investigate patterns of sentiment expression. We find users’ average happiness scores to be positively and significantly correlated with those of users one, two, and three links away. We strengthen our analysis by proposing and using a null model to test the effect of network topology on the assortativity of happiness. We also find evidence that more well connected users write happier status updates, with a transition occurring around Dunbar’s number. More generally, our work provides evidence of a social sub-network structure within Twitter and raises several methodological points of interest with regard to social network reconstructions.
http://nag.com/numeric/fl/nagdoc_fl24/html/D02/d02qxf.html
# NAG Library Routine Document: D02QXF

Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

## 1  Purpose

D02QXF is a diagnostic routine which may be called after a call to either of the integration routines D02QFF and D02QGF.

## 2  Specification

SUBROUTINE D02QXF (NEQF, YP, TCURR, HLAST, HNEXT, ODLAST, ODNEXT, NSUCC, NFAIL, TOLFAC, BADCMP, RWORK, LRWORK, IWORK, LIWORK, IFAIL)
INTEGER NEQF, ODLAST, ODNEXT, NSUCC, NFAIL, BADCMP, LRWORK, IWORK(LIWORK), LIWORK, IFAIL
REAL (KIND=nag_wp) YP(NEQF), TCURR, HLAST, HNEXT, TOLFAC, RWORK(LRWORK)

## 3  Description

D02QXF permits you to extract information about the performance of D02QFF or D02QGF. It may only be called after a call to D02QFF or D02QGF.

## 4  References

None.

## 5  Parameters

1: NEQF – INTEGER, Input
On entry: the number of first-order ordinary differential equations solved by the integration routine. It must be the same parameter NEQF supplied to the setup routine D02QWF and the integration routines D02QFF or D02QGF.

2: YP(NEQF) – REAL (KIND=nag_wp) array, Output
On exit: the approximate derivative of the solution component $y_i$, as supplied in $y_i$ on output from the integration routine at the output value of T. These values are obtained by the evaluation of $y' = f(x, y)$ except when the output value of the parameter T in the call to the integration routine is TOUT and $\mathrm{TCURR} \ne \mathrm{TOUT}$, in which case they are obtained by interpolation.

3: TCURR – REAL (KIND=nag_wp), Output
On exit: the value of the independent variable which the integrator has actually reached. TCURR will always be at least as far as the output value of the argument T (from the integration routine) in the direction of integration, but may be further.
4: HLAST – REAL (KIND=nag_wp), Output
On exit: the last successful step size used by the integrator.

5: HNEXT – REAL (KIND=nag_wp), Output
On exit: the next step size which the integration routine would attempt.

6: ODLAST – INTEGER, Output
On exit: the order of the method last used (successfully) by the integration routine.

7: ODNEXT – INTEGER, Output
On exit: the order of the method which the integration routine would attempt on the next step.

8: NSUCC – INTEGER, Output
On exit: the number of steps attempted by the integration routine that have been successful since the start of the current problem.

9: NFAIL – INTEGER, Output
On exit: the number of steps attempted by the integration routine that have failed since the start of the current problem.

10: TOLFAC – REAL (KIND=nag_wp), Output
On exit: a tolerance scale factor, $\mathrm{TOLFAC} \ge 1.0$, returned when the integration routine exits with $\mathrm{IFAIL} = 3$. If RTOL and ATOL are uniformly scaled up by a factor of TOLFAC and D02QWF is called, the next call to the integration routine is deemed likely to succeed.

11: BADCMP – INTEGER, Output
On exit: if the integration routine returned with $\mathrm{IFAIL} = 4$, then BADCMP specifies the index of the component which forced the error exit. Otherwise BADCMP is $0$.

12: RWORK(LRWORK) – REAL (KIND=nag_wp) array, Communication Array
On entry: this must be the same parameter RWORK as supplied to D02QFF or D02QGF. It is used to pass information from the integration routine to D02QXF and therefore the contents of this array must not be changed before calling D02QXF.

13: LRWORK – INTEGER, Input
On entry: the dimension of the array RWORK as declared in the (sub)program from which D02QXF is called. This must be the same parameter LRWORK as supplied to D02QWF.

14: IWORK(LIWORK) – INTEGER array, Communication Array
On entry: this must be the same parameter IWORK as supplied to D02QFF or D02QGF.
It is used to pass information from the integration routine to D02QXF and therefore the contents of this array must not be changed before calling D02QXF.

15: LIWORK – INTEGER, Input
On entry: the dimension of the array IWORK as declared in the (sub)program from which D02QXF is called. This must be the same parameter LIWORK as supplied to D02QWF.

16: IFAIL – INTEGER, Input/Output
On entry: IFAIL must be set to $0$, $-1$ or $1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-1$ or $1$ is used it is essential to test the value of IFAIL on exit.
On exit: $\mathrm{IFAIL} = 0$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6  Error Indicators and Warnings

If on entry $\mathrm{IFAIL} = 0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF). Errors or warnings detected by the routine:

$\mathrm{IFAIL} = 1$
An integration routine (D02QFF or D02QGF) has not been called, or one or more of the parameters LRWORK, LIWORK and NEQF does not match the corresponding parameter supplied to D02QWF. This error exit may be caused by overwriting elements of RWORK.

## 7  Accuracy

Not applicable.
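The TOLFAC mechanism suggests a simple recovery loop: if the integrator reports IFAIL = 3, scale RTOL and ATOL up by TOLFAC, re-initialize via the setup routine, and retry. The sketch below is not NAG code; it mimics that documented control flow in Python with a stand-in integrator, purely to illustrate the retry logic.

```python
def solve_with_tolerance_recovery(integrate, setup, rtol, atol, max_retries=3):
    """Retry pattern for an integrator that reports a tolerance scale factor.

    `integrate(rtol, atol)` is a stand-in returning (ifail, tolfac, result);
    `setup` re-initializes the problem (as D02QWF would) after the
    tolerances are changed.
    """
    for _ in range(max_retries + 1):
        ifail, tolfac, result = integrate(rtol, atol)
        if ifail == 0:
            return result, rtol, atol
        if ifail == 3:  # tolerances too tight: scale up by tolfac and retry
            rtol, atol = rtol * tolfac, atol * tolfac
            setup(rtol, atol)
        else:
            raise RuntimeError(f"integration failed with IFAIL={ifail}")
    raise RuntimeError("tolerances could not be satisfied")

# Stand-in integrator: reports IFAIL=3 with TOLFAC=10 until rtol is loose enough.
def fake_integrate(rtol, atol):
    return (0, 1.0, "ok") if rtol > 5e-7 else (3, 10.0, None)

result, rtol, atol = solve_with_tolerance_recovery(fake_integrate, lambda r, a: None, 1e-8, 1e-8)
print(result, rtol)
```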
https://learn.careers360.com/engineering/question-two-masses-are-separated-at-a-distance-d-from-each-other-then-point-of-zero-gravitational-potential-is-given-by-64506/
# Two masses are separated by a distance d from each other; what is the point of zero gravitational potential?

Let $m_1$ and $m_2$ be separated by a distance d from each other as shown in the figure, and let P be the point where the net gravitational potential $V=V_{1}+V_{2} =0$. Then P is the point of zero gravitational potential, and the distance x from $m_1$ is given by $x=\frac{m_1d}{m_1-m_2}$
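The algebra behind the quoted result can be sketched as follows (a reconstruction, not part of the original answer; $G$ is the gravitational constant and $d-x$ is the signed separation of P from $m_2$):

```latex
\begin{align*}
V = V_1 + V_2 &= -\frac{G m_1}{x} - \frac{G m_2}{d - x} = 0 \\
\Rightarrow\quad \frac{m_1}{x} &= -\frac{m_2}{d - x} \\
\Rightarrow\quad m_1 (d - x) &= -m_2 x \\
\Rightarrow\quad m_1 d &= (m_1 - m_2)\, x \\
\Rightarrow\quad x &= \frac{m_1 d}{m_1 - m_2}
\end{align*}
```

Note that for $m_1 > m_2$ this gives $x > d$: the point lies beyond the lighter mass, where $d - x < 0$, which is what lets the two negative potentials cancel algebraically.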
https://nukephysik101.wordpress.com/2011/01/23/larmor-precession-quick/
Magnetic moment ($\mu$): this is the magnetism produced by the angular momentum of a charge or spin. Its value is

$\mu = \gamma J$

where $J$ is the angular momentum and $\gamma$ is the gyromagnetic ratio

$\gamma = g \mu_B$

Notice that we are using natural units. The g-factor is a dimensionless number which reflects the environment of the spin; for orbital angular momentum, g = 1.

$\mu_B$ is the Bohr magneton, which is equal to

$\mu_B = \frac {e} {2 m}$

for the positron. Since different particles have different masses, their Bohr magneton values are different. The electron is the lightest particle, so it has the largest value of the Bohr magneton.

Larmor frequency: when a magnetic field is applied to a magnetic moment, the field causes the moment to precess around the axis of the field. The precession frequency is called the Larmor frequency. The precession can be understood in a classical way or a QM way.

Classical way: the change of angular momentum is equal to the applied torque, and the torque is equal to the cross product of the magnetic moment with the magnetic field. In the classical frame, the angular momentum, magnetic moment, and magnetic field are ordinary vectors:

$\vec {\Gamma}= \frac { d \vec{J}}{dt} = \vec{\mu} \times \vec{B} = \gamma \vec {J} \times \vec{B}$

Solving this gives the precession frequency:

$\omega = - \gamma B$

The minus sign is very important: it indicates that $J$ precesses by the right-hand rule when $\omega > 0$.

QM way: the time-dependent Schrödinger equation (TDSE) is

$i \frac {d}{d t} \left| \Psi\right> = H \left|\Psi\right>$

where H is the Hamiltonian. For a magnetic field pointing along the z-axis,

$H = -\mu \cdot B = - \gamma J\cdot B = -\gamma B J_z = \omega J_z$

The solution is

$\left|\Psi(t) \right> = Exp( - i \omega t J_z) \left| \Psi(0) \right>$

Thus, from the QM point of view, the state does not “rotate” but only acquires a phase change.
However, the rotation operator about the z-axis is

$R_z ( \theta ) = Exp( - i \frac {\theta}{\hbar} J_z )$

Thus, the solution can be rewritten as

$\left|\Psi (t)\right> = R_z( \omega t) \left|\Psi(0)\right>$

which makes a nice analogy with the rotation of an ordinary vector.
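The classical solution can be spelled out explicitly (a derivation sketch; these intermediate steps are not in the original post). With $\vec{B} = B\hat{z}$, the components of $d\vec{J}/dt = \gamma \vec{J} \times \vec{B}$ read:

```latex
\begin{align*}
\frac{dJ_x}{dt} &= \gamma B J_y, \qquad
\frac{dJ_y}{dt} = -\gamma B J_x, \qquad
\frac{dJ_z}{dt} = 0 \\
\Rightarrow\quad \frac{d^2 J_x}{dt^2} &= -(\gamma B)^2 J_x
\end{align*}
```

so that, with a suitable choice of phase, $J_x = J_\perp \cos(\gamma B t)$ and $J_y = -J_\perp \sin(\gamma B t)$: the transverse part of $\vec{J}$ rotates about the z-axis with angular velocity $\omega = -\gamma B$, which is the Larmor frequency quoted above, sign included.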
https://math.stackexchange.com/questions/3123520/maximum-value-of-expression-abc
# Maximum value of expression $$a+b+c$$

If $$a,b,c$$ are non-negative integers such that $$2(a^3+b^3+c^3)=3(a+b+c)^2,$$ then what is the maximum value of $$a+b+c$$?

My try: using Jensen's inequality. Let $$f(x)=x^3$$. Then $$f''(x)>0$$ for $$x>0$$, so $$f$$ is a convex function, and $$\frac{f(a)+f(b)+f(c)}{3}\geq f\bigg(\frac{a+b+c}{3}\bigg)$$ $$\frac{a^3+b^3+c^3}{3}\geq \bigg(\frac{a+b+c}{3}\bigg)^3\cdots (1)$$ From the given condition $$\frac{a^3+b^3+c^3}{3}=\frac{(a+b+c)^2}{2}\cdots (2)$$ So we have $$\frac{(a+b+c)^2}{2}\geq \frac{(a+b+c)^3}{27}$$ $$a+b+c\leq \frac{27}{2}=13.5$$ Equality holds when $$a=b=c=4.5$$, but $$a,b,c$$ are non-negative integers. Could someone help me solve it? Thanks.

By your work $$a+b+c\leq13,$$ but since $$3(a+b+c)^2=2(a^3+b^3+c^3)=2(a^3+b^3+c^3-3abc+3abc)=$$ $$=2(a+b+c)(a^2+b^2+c^2-ab-ac-bc)+6abc,$$ we obtain that $$a+b+c$$ is divisible by $$3$$. Thus, we see that $$a+b+c\leq12.$$ But $$(a,b,c)=(3,4,5)$$ is valid, which says that $$12$$ is the maximal value. Actually, your inequality can also be obtained by Hölder: $$a^3+b^3+c^3=\frac{1}{9}(1+1+1)^2(a^3+b^3+c^3)\geq\frac{1}{9}(a+b+c)^3.$$

You can solve it also like this. By the Cauchy inequality we have $$(a+b+c)(a^3+b^3+c^3)\geq (a^2+b^2+c^2)^2$$ Further: $$a^2+b^2+c^2\geq {1\over 3}(a+b+c)^2$$ so $${3\over 2}(a+b+c)^3\geq {1\over 9}(a+b+c)^4$$ and thus $$a+b+c\leq {27\over 2}$$ Since $$2\mid a+b+c$$ we have $$a+b+c\leq 12$$. Also, since $$x^3\equiv x\pmod 3$$ we have $$3\mid a+b+c$$, so $$a+b+c\in\{0,6,12\}$$. Now if we try to find $$a,b,c$$ we can assume that $$a\leq b\leq c$$. So if $$a+b+c=12$$ then $$3a^3\leq a^3+b^3+c^3 = 216\implies a\leq 4$$ Now you can find by inspection whether 12 is achievable.

• Can you plz explain me why $$a+b+c$$ is divisible by $$2$$ – DXT Feb 23 at 10:34
• I did not understand that line – DXT Feb 23 at 10:36
• The left side of the starting equation is divisible by 2, so the right side must be also. – Aqua Feb 23 at 10:37
• How come you accept Rozenbergs solution? There is nothing instructive in his solution.
Neither how he got this triple $(3,4,5)$ nor how he gets this formula from which he deduces $3\mid a+b+c$... – Aqua Feb 24 at 12:40
• your solution is great. – DXT Feb 25 at 4:30

So you have the maximum possible sum and need to restrict to integers. What is the maximum possible sum for positive integers (hint: it has to be less than or equal to the sum for arbitrary reals)? Call this the target sum.

Check that the target sum is even (the given condition implies that). Then is there a solution to the equality with the target sum? If not, try the next one down. You have a finite search space and cubes are quite sparse.
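The "finite search space" remark in the last answer is easy to act on. A quick brute-force check (a sketch, not from the thread) over the derived bound confirms that 12 is the maximum and that (3, 4, 5) attains it:

```python
# Enumerate non-negative integer triples with a+b+c <= 13 (the bound from
# Jensen/Cauchy above) and keep those satisfying 2(a^3+b^3+c^3) = 3(a+b+c)^2.
solutions = [
    (a, b, c)
    for a in range(14) for b in range(14) for c in range(14)
    if a + b + c <= 13 and 2 * (a**3 + b**3 + c**3) == 3 * (a + b + c) ** 2
]
best = max(solutions, key=lambda t: sum(t))
print(sum(best), sorted(best))  # maximum sum and one attaining triple
```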
http://sparkandshine.net/split-long-equations-into-multiple-lines-in-latex/
# Split long equations into multiple lines in LaTeX

This article presents two ways to split long equations into multiple lines, with and without alignment.

# 1. Split equations

Use `multline` or `split` provided by the amsmath package.

• Use `multline` to split equations without alignment (first line left, last line right)
• Use `split` to split equations with alignment

Here are examples:

Fig. 1: Illustrated examples to split equations.

# 2. The source code

The corresponding source code is as follows:

```latex
% (i) A plain equation: \\ does not split the line here
\begin{equation}
  1+2+3+4+8x+7=1+2+3+4+4x+35 \\
  \Rightarrow x=7
\end{equation}

% (ii) Use multline to split equations without alignment
\begin{multline}
  1+2+3+4+8x+7=1+2+3+4+4x+35 \\
  \Rightarrow x=7
\end{multline}

% (iii) Use split to split equations with alignment
\begin{equation}
  \begin{split}
    1+2+3+4+8x+7 & =1+2+3+4+4x+35 \\
    & \Rightarrow x=7
  \end{split}
\end{equation}
```

References:
[1] StackExchange TeX-LaTeX: How can I split an equation over two lines
https://bartoszmilewski.com/category/functional-programming/
## The Free Theorem for Ends

In Haskell, the end of a profunctor `p` is defined as a product of all diagonal elements:

`forall c. p c c`

together with a family of projections:

```
pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e
```

In category theory, the end must also satisfy the wedge condition which, in (type-annotated) Haskell, could be written as:

`dimap f idb . pib = dimap ida f . pia`

for any `f :: a -> b`. Using a suitable formulation of parametricity, this equation can be shown to be a free theorem. Let’s first review the free theorem for functors before generalizing it to profunctors.

## Functor Characterization

You may think of a functor as a container that has a shape and contents. You can manipulate the contents without changing the shape using `fmap`. In general, when applying `fmap`, you not only change the values stored in the container, you change their type as well. To really capture the shape of the container, you have to consider not only all possible mappings, but also more general relations between different contents.

A function is directional, and so is `fmap`, but relations don’t favor either side. They can map multiple values to the same value, and they can map one value to multiple values. Any relation on values induces a relation on containers. For a given functor `F`, if there is a relation `a` between type `A` and type `A'`:

`A <=a=> A'`

then there is a relation between type `F A` and `F A'`:

`F A <=(F a)=> F A'`

We call this induced relation `F a`.

For instance, consider the relation between students and their grades. Each student may have multiple grades (if they take multiple courses) so this relation is not a function. Given a list of students and a list of grades, we would say that the lists are related if and only if they match at each position. It means that they have to be equal length, and the first grade on the list of grades must belong to the first student on the list of students, and so on.
Of course, a list is a very simple container, but this property can be generalized to any functor we can define in Haskell using algebraic data types.

The fact that `fmap` doesn’t change the shape of the container can be expressed as a “theorem for free” using relations. We start with two related containers:

```
xs :: F A
xs':: F A'
```

where `A` and `A'` are related through some relation `a`. We want related containers to be `fmap`ped to related containers. But we can’t use the same function to map both containers, because they contain different types. So we have to use two related functions instead. Related functions map related types to related types so, if we have:

```
f :: A -> B
f':: A'-> B'
```

and `A` is related to `A'` through `a`, we want `B` to be related to `B'` through some relation `b`. Also, we want the two functions to map related elements to related elements. So if `x` is related to `x'` through `a`, we want `f x` to be related to `f' x'` through `b`. In that case, we’ll say that `f` and `f'` are related through the relation that we call `a->b`:

`f <=(a->b)=> f'`

For instance, if `f` is mapping students’ SSNs to last names, and `f'` is mapping letter grades to numerical grades, the results will be related through the relation between students’ last names and their numerical grades.

To summarize, we require that for any two relations:

```
A <=a=> A'
B <=b=> B'
```

and any two functions:

```
f :: A -> B
f':: A'-> B'
```

such that:

`f <=(a->b)=> f'`

and any two containers:

```
xs :: F A
xs':: F A'
```

we have:

```
if xs <=(F a)=> xs'
then (fmap f) xs <=(F b)=> (fmap f') xs'
```

This characterization can be extended, with suitable changes, to contravariant functors.

## Profunctor Characterization

A profunctor is a functor of two variables. It is contravariant in the first variable and covariant in the second.
A profunctor can lift two functions simultaneously using `dimap`:

```
class Profunctor p where
    dimap :: (a -> b) -> (c -> d) -> p b c -> p a d
```

We want `dimap` to preserve relations between profunctor values. We start by picking any relations `a`, `b`, `c`, and `d` between types:

```
A <=a=> A'
B <=b=> B'
C <=c=> C'
D <=d=> D'
```

For any functions:

```
f :: A -> B
f' :: A'-> B'
g :: C -> D
g' :: C'-> D'
```

that are related through the following relations induced by function types:

```
f <=(a->b)=> f'
g <=(c->d)=> g'
```

we define:

```
xs :: p B C
xs':: p B'C'
```

The following condition must be satisfied:

```
if xs <=(p b c)=> xs'
then (p f g) xs <=(p a d)=> (p f' g') xs'
```

where `p f g` stands for the lifting of the two functions by the profunctor `p`.

Here’s a quick sanity check. If `b` and `c` are functions:

```
b :: B'-> B
c :: C -> C'
```

then the relation:

`xs <=(p b c)=> xs'`

becomes:

`xs' = dimap b c xs`

If `a` and `d` are functions:

```
a :: A'-> A
d :: D -> D'
```

then these relations:

```
f <=(a->b)=> f'
g <=(c->d)=> g'
```

become:

```
f . a = b . f'
d . g = g'. c
```

and this relation:

`(p f g) xs <=(p a d)=> (p f' g') xs'`

becomes:

`(p f' g') xs' = dimap a d ((p f g) xs)`

Substituting `xs'`, we get:

`dimap f' g' (dimap b c xs) = dimap a d (dimap f g xs)`

and using functoriality:

`dimap (b . f') (g'. c) = dimap (f . a) (d . g)`

which is identically true.

## Special Case of Profunctor Characterization

We are interested in the diagonal elements of a profunctor.
Let’s first specialize the general case to:

```
C = B
C'= B'
c = b
```

to get:

```
xs = p B B
xs'= p B'B'
```

and

```
if xs <=(p b b)=> xs'
then (p f g) xs <=(p a d)=> (p f' g') xs'
```

Choosing the following substitutions:

```
A = A'= B
D = D'= B'
a = id
d = id
f = id
g'= id
f'= g
```

we get:

```
if xs <=(p b b)=> xs'
then (p id g) xs <=(p id id)=> (p g id) xs'
```

Since `p id id` is the identity relation, we get:

`(p id g) xs = (p g id) xs'`

or

`dimap id g xs = dimap g id xs'`

## Free Theorem

We apply the free theorem to the term `xs`:

`xs :: forall c. p c c`

It must be related to itself through the relation that is induced by its type:

`xs <=(forall b. p b b)=> xs`

for any relation `b`:

`B <=b=> B'`

Universal quantification translates to a relation between different instantiations of the polymorphic value:

`xsB <=(p b b)=> xsB'`

Notice that we can write:

```
xsB = piB xs
xsB'= piB'xs
```

using the projections we defined earlier. We have just shown that this equation leads to:

`dimap id g xs = dimap g id xs'`

which shows that the wedge condition is indeed a free theorem.

## Natural Transformations

Here’s another quick application of the free theorem. The set of natural transformations may be represented as an end of the following profunctor:

`type NatP a b = F a -> G b`

```
instance Profunctor NatP where
    dimap f g alpha = fmap g . alpha . fmap f
```

The free theorem tells us that for any `mu :: NatP c c`:

`(dimap id g) mu = (dimap g id) mu`

which is the naturality condition:

`mu . fmap g = fmap g . mu`

It’s been known for some time that, in Haskell, naturality follows from parametricity, so this is not surprising.

## Acknowledgment

I’d like to thank Edward Kmett for reviewing the draft of this post.

## Bibliography

1. Bartosz Milewski, Ends and Coends
2. Edsko de Vries, Parametricity Tutorial, Part 1, Part 2, Contravariant Functions.
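A tiny concrete illustration of naturality-from-parametricity (a standard example, not from the post): any polymorphic function of type `[a] -> Maybe a` is automatically a natural transformation, so it commutes with `fmap`.

```haskell
-- A polymorphic function between the functors [] and Maybe.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- The naturality square: mapping and then taking the head
-- equals taking the head and then mapping.
lhs, rhs :: Maybe Int
lhs = fmap (+1) (safeHead [1, 2, 3])
rhs = safeHead (fmap (+1) [1, 2, 3])
```

Parametricity guarantees `lhs == rhs` for every choice of function and list, with no proof obligation on the implementation of `safeHead`.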
There are many intuitions that we may attach to morphisms in a category, but we can all agree that if there is a morphism from the object `a` to the object `b` then the two objects are in some way “related.” A morphism is, in a sense, the proof of this relation. This is clearly visible in any poset category, where a morphism is a relation. In general, there may be many “proofs” of the same relation between two objects. These proofs form a set that we call the hom-set. When we vary the objects, we get a mapping from pairs of objects to sets of “proofs.” This mapping is functorial — contravariant in the first argument and covariant in the second. We can look at it as establishing a global relationship between objects in the category. This relationship is described by the hom-functor:

`C(-, =) :: Cop × C -> Set`

In general, any functor like this may be interpreted as establishing a relation between objects in a category. A relation may also involve two different categories C and D. A functor, which describes such a relation, has the following signature and is called a profunctor:

`p :: Dop × C -> Set`

Mathematicians say that it’s a profunctor from `C` to `D` (notice the inversion), and use a slashed arrow as a symbol for it:

`C ↛ D`

You may think of a profunctor as a proof-relevant relation between objects of C and objects of D, where the elements of the set symbolize proofs of the relation. Whenever `p a b` is empty, there is no relation between `a` and `b`. Keep in mind that relations don’t have to be symmetric.

Another useful intuition is the generalization of the idea that an endofunctor is a container. A profunctor value of the type `p a b` could then be considered a container of `b`s that are keyed by elements of type `a`. In particular, an element of the hom-profunctor is a function from `a` to `b`.
In Haskell, a profunctor is defined as a two-argument type constructor `p` equipped with the method called `dimap`, which lifts a pair of functions, the first going in the “wrong” direction: ```class Profunctor p where dimap :: (c -> a) -> (b -> d) -> p a b -> p c d``` The functoriality of the profunctor tells us that if we have a proof that `a` is related to `b`, then we get the proof that `c` is related to `d`, as long as there is a morphism from `c` to `a` and another from `b` to `d`. Or, we can think of the first function as translating new keys to the old keys, and the second function as modifying the contents of the container. For profunctors acting within one category, we can extract quite a lot of information from diagonal elements of the type `p a a`. We can prove that `b` is related to `c` as long as we have a pair of morphisms `b->a` and `a->c`. Even better, we can use a single morphism to reach off-diagonal values. For instance, if we have a morphism `f::a->b`, we can lift the pair `<f, idb>` to go from `p b b` to `p a b`: `dimap f id pbb :: p a b` Or we can lift the pair `<ida, f>` to go from `p a a` to `p a b`: `dimap id f paa :: p a b` ## Dinatural Transformations Since profunctors are functors, we can define natural transformations between them in the standard way. In many cases, though, it’s enough to define the mapping between diagonal elements of two profunctors. Such a transformation is called a dinatural transformation, provided it satisfies the commuting conditions that reflect the two ways we can connect diagonal elements to non-diagonal ones. A dinatural transformation between two profunctors `p` and `q`, which are members of the functor category `[Cop × C, Set]`, is a family of morphisms: `αa :: p a a -> q a a` for which the following diagram commutes, for any `f::a->b`: Notice that this is strictly weaker than the naturality condition. 
If `α` were a natural transformation in `[Cop × C, Set]`, the above diagram could be constructed from two naturality squares and one functoriality condition (profunctor `q` preserving composition): Notice that a component of a natural transformation `α` in `[Cop × C, Set]` is indexed by a pair of objects `α a b`. A dinatural transformation, on the other hand, is indexed by one object, since it only maps diagonal elements of the respective profunctors. ## Ends We are now ready to advance from “algebra” to what could be considered the “calculus” of category theory. The calculus of ends (and coends) borrows ideas and even some notation from traditional calculus. In particular, the coend may be understood as an infinite sum or an integral, whereas the end is similar to an infinite product. There is even something that resembles the Dirac delta function. An end is a generalization of a limit, with the functor replaced by a profunctor. Instead of a cone, we have a wedge. The base of a wedge is formed by diagonal elements of a profunctor `p`. The apex of the wedge is an object (here, a set, since we are considering Set-valued profunctors), and the sides are a family of functions mapping the apex to the sets in the base. You may think of this family as one polymorphic function — a function that’s polymorphic in its return type: `α :: forall a . apex -> p a a` Unlike in cones, within a wedge we don’t have any functions that would connect vertices of the base. However, as we’ve seen earlier, given any morphism `f::a->b` in C, we can connect both `p a a` and `p b b` to the common set `p a b`. We therefore insist that the following diagram commute: This is called the wedge condition. It can be written as: `p ida f ∘ αa = p f idb ∘ αb` `dimap id f . alpha = dimap f id .
alpha` We can now proceed with the universal construction and define the end of `p` as the universal wedge — a set `e` together with a family of functions `π` such that for any other wedge with the apex `a` and a family `α` there is a unique function `h::a->e` that makes all triangles commute: `πa ∘ h = αa` The symbol for the end is the integral sign, with the “integration variable” in the subscript position: `∫c p c c` Components of `π` are called projection maps for the end: `πa :: ∫c p c c -> p a a` Note that if C is a discrete category (no morphisms other than the identities) the end is just a global product of all diagonal entries of `p` across the whole category C. Later I’ll show you that, in the more general case, there is a relationship between the end and this product through an equalizer. In Haskell, the end formula translates directly to the universal quantifier: `forall a. p a a` Strictly speaking, this is just a product of all diagonal elements of `p`, but the wedge condition is satisfied automatically due to parametricity (I’ll explain it in a separate blog post). For any function `f :: a -> b`, the wedge condition reads: `dimap f id . pi = dimap id f . pi` or, with type annotations: `dimap f idb . pib = dimap ida f . pia` where both sides of the equation have the type: `Profunctor p => (forall c. p c c) -> p a b` and `pi` is the polymorphic projection: ```pi :: Profunctor p => forall c. (forall a. p a a) -> p c c pi e = e``` Here, type inference automatically picks the right component of `e`. Just as we were able to express the whole set of commutation conditions for a cone as one natural transformation, likewise we can group all the wedge conditions into one dinatural transformation. For that we need the generalization of the constant functor `Δc` to a constant profunctor that maps all pairs of objects to a single object `c`, and all pairs of morphisms to the identity morphism for this object.
A wedge is a dinatural transformation from that functor to the profunctor `p`. Indeed, the dinaturality hexagon shrinks down to the wedge diamond when we realize that `Δc` lifts all morphisms to one identity function. Ends can also be defined for target categories other than Set, but here we’ll only consider Set-valued profunctors and their ends. ## Ends as Equalizers The commutation condition in the definition of the end can be written using an equalizer. First, let’s define two functions (I’m using Haskell notation, because mathematical notation seems to be less user-friendly in this case). These functions correspond to the two converging branches of the wedge condition: ```lambda :: Profunctor p => p a a -> (a -> b) -> p a b lambda paa f = dimap id f paa rho :: Profunctor p => p b b -> (a -> b) -> p a b rho pbb f = dimap f id pbb``` Both functions map diagonal elements of the profunctor `p` to polymorphic functions of the type: `type ProdP p = forall a b. (a -> b) -> p a b` These functions have different types. However, we can unify their types, if we form one big product type, gathering together all diagonal elements of `p`: `newtype DiaProd p = DiaProd (forall a. p a a)` The functions `lambda` and `rho` induce two mappings from this product type: ```lambdaP :: Profunctor p => DiaProd p -> ProdP p lambdaP (DiaProd paa) = lambda paa rhoP :: Profunctor p => DiaProd p -> ProdP p rhoP (DiaProd paa) = rho paa``` The end of `p` is the equalizer of these two functions. Remember that the equalizer picks the largest subset on which two functions are equal. In this case it picks the subset of the product of all diagonal elements for which the wedge diagrams commute. ## Natural Transformations as Ends The most important example of an end is the set of natural transformations. A natural transformation between two functors `F` and `G` is a family of morphisms picked from hom-sets of the form `C(F a, G a)`. 
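As a concrete example of such a family (my own, not from the text): with `F = Maybe` and `G = []`, the standard `maybeToList` is a diagonal section of the profunctor `<a, b> -> (Maybe a -> [b])`, and its naturality, `mu . fmap f = fmap f . mu`, can be checked directly.

```haskell
import Data.Maybe (maybeToList)

-- A family of morphisms picked from the hom-sets C(F a, G a),
-- with F = Maybe and G = []:
mu :: Maybe a -> [a]
mu = maybeToList

-- The two sides of the naturality condition (the wedge condition
-- for this profunctor); they agree on every input:
lhs, rhs :: Maybe Int -> [Int]
lhs = mu . fmap (+ 1)
rhs = fmap (+ 1) . mu
```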
If it weren’t for the naturality condition, the set of natural transformations would be just the product of all these hom-sets. In fact, in Haskell, it is: `forall a. f a -> g a` The reason it works in Haskell is because naturality follows from parametricity. Outside of Haskell, though, not all diagonal sections across such hom-sets will yield natural transformations. But notice that the mapping: `<a, b> -> C(F a, G b)` is a profunctor, so it makes sense to study its end. This is the wedge condition: Let’s just pick one element from the set `∫c C(F c, G c)`. The two projections will map this element to two components of a particular transformation, let’s call them: ```τa :: F a -> G a τb :: F b -> G b``` In the left branch, we lift a pair of morphisms `<ida, G f>` using the hom-functor. You may recall that such lifting is implemented as simultaneous pre- and post-composition. When acting on `τa` the lifted pair gives us: `G f ∘ τa ∘ ida` The other branch of the diagram gives us: `idb ∘ τb ∘ F f` Their equality, demanded by the wedge condition, is nothing but the naturality condition for `τ`. ## Coends As expected, the dual to an end is called a coend. It is constructed from a dual to a wedge called a cowedge (pronounced co-wedge, not cow-edge). An edgy cow? The symbol for a coend is the integral sign with the “integration variable” in the superscript position: `∫ c p c c` Just like the end is related to a product, the coend is related to a coproduct, or a sum (in this respect, it resembles an integral, which is a limit of a sum). Rather than having projections, we have injections going from the diagonal elements of the profunctor down to the coend. If it weren’t for the cowedge conditions, we could say that the coend of the profunctor `p` is either `p a a`, or `p b b`, or `p c c`, and so on. Or we could say that there exists such an `a` for which the coend is just the set `p a a`. 
The universal quantifier that we used in the definition of the end turns into an existential quantifier for the coend. This is why, in pseudo-Haskell, we would define the coend as: `exists a. p a a` The standard way of encoding existential quantifiers in Haskell is to use universally quantified data constructors. We can thus define: `data Coend p = forall a. Coend (p a a)` The logic behind this is that it should be possible to construct a coend using a value of any of the family of types `p a a`, no matter what `a` we choose. Just like an end can be defined using an equalizer, a coend can be described using a coequalizer. All the cowedge conditions can be summarized by taking one gigantic coproduct of `p a b` for all possible functions `b->a`. In Haskell, that would be expressed as an existential type: `data SumP p = forall a b. SumP (b -> a) (p a b)` There are two ways of evaluating this sum type, by lifting the function using `dimap` and applying it to the profunctor `p`: ```lambda, rho :: Profunctor p => SumP p -> DiagSum p lambda (SumP f pab) = DiagSum (dimap f id pab) rho (SumP f pab) = DiagSum (dimap id f pab)``` where `DiagSum` is the sum of diagonal elements of `p`: `data DiagSum p = forall a. DiagSum (p a a)` The coequalizer of these two functions is the coend. A coequalizer is obtained from `DiagSum p` by identifying values that are obtained by applying `lambda` or `rho` to the same argument. Here, the argument is a pair consisting of a function `b->a` and an element of `p a b`. The application of `lambda` and `rho` produces two potentially different values of the type `DiagSum p`. In the coend, these two values are identified, making the cowedge condition automatically satisfied. The process of identification of related elements in a set is formally known as taking a quotient.
To define a quotient we need an equivalence relation `~`, a relation that is reflexive, symmetric, and transitive: ```a ~ a if a ~ b then b ~ a if a ~ b and b ~ c then a ~ c``` Such a relation splits the set into equivalence classes. Each class consists of elements that are related to each other. We form a quotient set by picking one representative from each class. A classic example is the definition of rational numbers as pairs of whole numbers with the following equivalence relation: `(a, b) ~ (c, d) iff a * d = b * c` It’s easy to check that this is an equivalence relation. A pair `(a, b)` is interpreted as a fraction `a/b`, and fractions that have a common divisor are identified. A rational number is an equivalence class of such fractions. You might recall from our earlier discussion of limits and colimits that the hom-functor is continuous, that is, it preserves limits. Dually, the contravariant hom-functor turns colimits into limits. These properties can be generalized to ends and coends, which are a generalization of limits and colimits, respectively. In particular, we get a very useful identity for converting coends to ends: `Set(∫ x p x x, c) ≅ ∫x Set(p x x, c)` Let’s have a look at it in pseudo-Haskell: `(exists x. p x x) -> c ≅ forall x. p x x -> c` It tells us that a function that takes an existential type is equivalent to a polymorphic function. This makes perfect sense, because such a function must be prepared to handle any one of the types that may be encoded in the existential type. It’s the same principle that tells us that a function that accepts a sum type must be implemented as a case statement, with a tuple of handlers, one for every type present in the sum. Here, the sum type is replaced by a coend, and a family of handlers becomes an end, or a polymorphic function. 
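This equivalence between consuming an existential and providing a polymorphic function can be sketched directly in Haskell (the names `toEnd` and `fromEnd` are mine):

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

-- The pseudo-Haskell 'exists a. p a a', encoded with a
-- universally quantified data constructor:
data Coend p = forall a. Coend (p a a)

-- One direction of Set(coend of p, c) ≅ end of Set(p x x, c):
-- a polymorphic handler is prepared for whatever type is hidden
-- inside the existential...
fromEnd :: (forall x. p x x -> c) -> Coend p -> c
fromEnd f (Coend pxx) = f pxx

-- ...and conversely, a handler of the existential must already
-- work at every type, so it is polymorphic:
toEnd :: (Coend p -> c) -> (forall x. p x x -> c)
toEnd h pxx = h (Coend pxx)
```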
## Ninja Yoneda Lemma The set of natural transformations that appears in the Yoneda lemma may be encoded using an end, resulting in the following formulation: `∫z Set(C(a, z), F z) ≅ F a` There is also a dual formula: `∫ z C(a, z) × F z ≅ F a` This identity is strongly reminiscent of the formula for the Dirac delta function (a function `δ(a - z)`, or rather a distribution, that has an infinite peak at `a = z`). Here, the hom-functor plays the role of the delta function. Together these two identities are sometimes called the Ninja Yoneda lemma. To prove the second formula, we will use the consequence of the Yoneda embedding, which states that two objects are isomorphic if and only if their hom-functors are isomorphic. In other words `a ≅ b` if and only if there is a natural transformation of the type: `[C, Set](C(a, -), C(b, =))` that is an isomorphism. We start by inserting the left-hand side of the identity we want to prove inside a hom-functor that’s going to some arbitrary object `c`: `Set(∫ z C(a, z) × F z, c)` Using the continuity argument, we can replace the coend with the end: `∫z Set(C(a, z) × F z, c)` We can now take advantage of the adjunction between the product and the exponential: `∫z Set(C(a, z), c^(F z))` We can “perform the integration” by using the Yoneda lemma to get: `c^(F a)` This exponential object is isomorphic to the hom-set: `Set(F a, c)` Finally, we take advantage of the Yoneda embedding to arrive at the isomorphism: `∫ z C(a, z) × F z ≅ F a` ## Profunctor Composition Let’s explore further the idea that a profunctor describes a relation — more precisely, a proof-relevant relation, meaning that the set `p a b` represents the set of proofs that `a` is related to `b`. If we have two relations `p` and `q` we can try to compose them. We’ll say that `a` is related to `b` through the composition of `q` after `p` if there exists an intermediary object `c` such that both `q b c` and `p c a` are non-empty.
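Returning to the Ninja Yoneda lemma for a moment: the end formulation has a direct Haskell rendering — a sketch, with `toY` and `fromY` as names I made up:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The end form of the Yoneda lemma in Haskell:
-- (forall z. (a -> z) -> f z) ≅ f a

-- One direction: stash the functorial value and replay any
-- continuation over it with fmap.
toY :: Functor f => f a -> (forall z. (a -> z) -> f z)
toY fa h = fmap h fa

-- The other direction: "perform the integration" by instantiating
-- the polymorphic function at z = a with the identity.
fromY :: (forall z. (a -> z) -> f z) -> f a
fromY g = g id
```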
The proofs of this new relation are all pairs of proofs of individual relations. Therefore, with the understanding that the existential quantifier corresponds to a coend, and the cartesian product of two sets corresponds to “pairs of proofs,” we can define composition of profunctors using the following formula: `(q ∘ p) a b = ∫ c p c a × q b c` Here’s the equivalent Haskell definition from `Data.Profunctor.Composition`, after some renaming: ```data Procompose q p a b where Procompose :: q a c -> p c b -> Procompose q p a b ``` This uses generalized algebraic data type (GADT) syntax, in which a free type variable (here `c`) is automatically existentially quantified. The (uncurried) data constructor `Procompose` is thus equivalent to: `exists c. (q a c, p c b)` The unit of the composition so defined is the hom-functor — this immediately follows from the Ninja Yoneda lemma. It makes sense, therefore, to ask whether there is a category in which profunctors serve as morphisms. The answer is positive, with the caveat that both associativity and identity laws for profunctor composition hold only up to natural isomorphism. Such a category, where laws are valid up to isomorphism, is called a bicategory (which is more general than a 2-category). So we have a bicategory Prof, in which objects are categories, morphisms are profunctors, and morphisms between morphisms (a.k.a., two-cells) are natural transformations. In fact, one can go even further, because besides profunctors, we also have regular functors as morphisms between categories. A category which has two types of morphisms is called a double category. Profunctors play an important role in the Haskell lens library and in the arrow library. Next: Kan extensions. If there is one structure that permeates category theory and, by implication, the whole of mathematics, it’s the monoid.
To study the evolution of this concept is to study the power of abstraction and the idea of getting more for less, which is at the core of mathematics. When I say “evolution” I don’t necessarily mean chronological development. I’m looking at a monoid as if it were a life form evolving through various eons of abstraction. It’s an ambitious project and I’ll have to cover a lot of material. I’ll start slowly, with the definitions of magmas and monoids, but then I will accelerate. A lot of concepts will be introduced in one or two sentences, mainly to familiarize the reader with the notation. I’ll dwell a little on monoidal categories, then breeze through ends, coends, and profunctors. I’ll show you how monads, arrows, and applicative functors arise from monoids in various monoidal categories. ## The Magmas of the Hadean Eon Monoids evolved from more primitive life forms feeding on sets. So, before even touching upon monoids, let’s talk about cartesian products, relations, and functions. You take two sets `a` and `b` (or, in the simplest case, two copies of the same set `a`) and form pairs of elements. That gives you a set of pairs, a.k.a., the cartesian product `a×b`. Any subset of such a cartesian product is called a relation. Two elements `x` and `y` are in a relation if the pair `<x, y>` is a member of that subset. A function from `a` to `b` is a special kind of relation, in which every element `x` in the set `a` has one and only one element `y` in the set `b` that’s related to it. (Sometimes this is called a total function, since it’s defined for all elements of `a`). Even before there were monoids, there was magma. A magma is a set with a binary operation and nothing else. So, in particular, there is no assumption of associativity, and there is no unit. A binary operation is simply a function from the cartesian product of `a` with itself back to `a` `a × a -> a` It takes a pair of elements `<x, y>`, both coming from the set `a`, and maps it to an element of `a`. 
It’s tempting to quote the Haskell definition of a magma: ```class Magma a where (<>) :: a -> a -> a``` but this definition is already tainted with some higher concepts like currying. An alternative would be: ```class Magma a where (<>) :: (a, a) -> a``` Here, we at least see a pair of elements that are being “multiplied.” But the pair type `(a, a)` is also a higher-level concept. I’ll come back to it later. Lack of associativity means that we cannot identify `(x<>y)<>z` with `x<>(y<>z)`. You have to keep the parentheses. You might have heard of quaternions — their multiplication is associative. But not many people have heard of octonions, which are not associative. In fact, Hamilton, who discovered quaternions, invented the word associative to disassociate himself from octonions, which are not. If you’re familiar with continuous groups, you might know that Lie algebras are not associative. Closer to home — most operations on floating-point numbers are not associative on modern computers because of rounding errors. But, really, most interesting binary operations are associative. So out of the magma emerges a semigroup. In a semigroup you can drop parentheses. A non-trivial (that is, non-monoidal) example of a semigroup is the set of integers with the `max` binary operation. A maximum of three numbers is the same no matter in which order you pair them. But there is no integer that is less than or equal to every other integer, so this is not a monoid. ## Monoids of the Archean Eon But, really, most interesting binary operations are both associative and unital. There usually is a “do nothing” element with respect to most binary operations. So life as we know it begins with a monoid. A monoid is a set with a binary operation that is associative, and with a special element called the unit `e` that is neutral with respect to the binary operation.
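The integers-with-`max` example from the semigroup discussion can be spelled out directly (a sketch; `MaxI` is a name I chose):

```haskell
-- Integers under max: associative, hence a semigroup, but there
-- is no least integer to serve as the unit, so no lawful Monoid
-- instance exists.
newtype MaxI = MaxI Int deriving (Eq, Show)

instance Semigroup MaxI where
  MaxI x <> MaxI y = MaxI (max x y)
```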
To be precise, these are the three monoid laws: ```(x <> y) <> z = x <> (y <> z) e <> x = x x <> e = x``` In Haskell, the traditional definition of a monoid uses `mempty` for the unit and `mappend` for the binary operation: ```class Monoid a where mempty :: a mappend :: a -> a -> a``` As with the magma, the definition of `mappend` is curried. Equivalently, it could have been written as: `mappend :: (a, a) -> a` I’ll come back to this point later. There are plenty of examples of monoids. Non-negative integers with addition, or positive integers with multiplication are the obvious ones. Strings with concatenation are interesting too, because concatenation is not commutative. Just like pairs of elements from two sets `a` and `b` organize themselves into a set `a×b`, which is their cartesian product; functions between two sets organize themselves into a set — the set of functions from `a` to `b`, which we sometimes write as `a->b`. This organizing principle is characteristic of sets, where everything you can think of is a set. Except when it’s more than just a set — for instance when you try to organize all sets into one large collection. This collection, or “class,” is not itself a set. You can’t have a set of all sets, but you can have a category Set of “small” sets, which are sets that belong to a “universe.” In what follows, I will confine myself to a single universe in order to dodge questions from foundational mathematicians. Let’s now pop one level up and look at cartesian product as an operation on sets. For any two sets `a` and `b`, we can construct the set `a×b`. If we view this as “multiplication” of sets, we can say that sets form a magma. But do they form a monoid? Not exactly! To begin with, cartesian product is not associative. We can see it in Haskell: the type `((a, b), c)` is not the same as the type `(a, (b, c))`. They are, however, isomorphic. 
There is an invertible function called the associator, from one type to the other: ```alpha :: ((a, b), c) -> (a, (b, c)) alpha ((x, y), z) = (x, (y, z))``` It’s just a repackaging of containers (such repackaging is, by the way, called a natural transformation). For the unit of this “multiplication” we can pick the singleton set. In Haskell, this is the type called unit, denoted by an empty pair of parentheses `()`. Again, the unit laws are valid up to isomorphism. There are two such isomorphisms called left and right unitors: ```lambda :: ((), a) -> a lambda ((), x) = x rho :: (a, ()) -> a rho (x, ()) = x``` We have just exposed monoidal structure in the category Set. Set is not strictly a monoid because monoidal laws are satisfied only up to isomorphism. There is another monoidal structure in Set. Just like cartesian product resembles multiplication, there is an operation on sets that resembles addition. It’s called disjoint sum. In Haskell it’s embodied in the type `Either a b`. Just like cartesian product, disjoint sum is associative up to isomorphism. The unit (or the “zero”) of this sum type is the empty set or, in Haskell, the `Void` type — also up to isomorphism. ## The Cambrian Explosion of Categories The first rule of abstraction is, You do not talk about Fight Club. In the category Set, for instance, we are not supposed to admit that sets have elements. An object in Set is really a set, but you never talk about its elements. We still have functions between sets, but they become abstract morphisms, of which we only know how they compose. Composition of functions is associative, and there is an identity function for every set, which serves as a unit of composition. We can write these rules compactly as: ```(f ∘ g) ∘ h = f ∘ (g ∘ h) id ∘ f = f f ∘ id = f``` These look exactly like monoid laws. So do functions form a monoid with respect to composition? Not quite, because you can’t compose any two functions.
They must be composable, which means their endpoints have to match. In Haskell, we can compose `g` after `f`, or `g ∘ f`, only if: ```f :: a -> b g :: b -> c``` Also, there is no single identity function, but a whole family of functions `ida`, one for each set `a`. In Haskell, we call that a polymorphic function. But notice what happens if we restrict ourselves to just a single object `a` in Set. Every morphism from `a` back to `a` can be composed with any other such morphism (their endpoints always match). Moreover, we are guaranteed that among those so called endomorphisms there is one identity morphism `ida`, which acts as a unit of composition. Notice that I switched from the set/function nomenclature to the more general object/morphism naming convention of category theory. We can now forget about sets and functions and define an arbitrary category as a collection (a set in a given universe) of objects, and sets of morphisms that go between them. The only requirements are that any two composable morphisms compose, and that there is an identity morphism for every object. And that composition must be associative. We can now forget about sets and define a monoid as a category that has only one object. The binary operation is just the composition of (endo-)morphisms. It works! We have defined a monoid without a set. Or have we? No, we haven’t! We have just swept it under the rug — the rug being the set of morphisms. Yes, morphisms between any two objects form a set called the hom-set. In a category C, the hom-set between objects `a` and `b` is denoted by `C(a, b)`. So we haven’t completely eliminated sets from the picture. In the single object category M, we have only one hom-set `M(a, a)`. The elements of this set — and we are allowed to call them elements because it’s a set — are morphisms like `f` and `g`. We can compose them, and we can call this composition “multiplication,” thus recovering our previous definition of the monoid as a set. 
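Haskell packages exactly this monoid of endomorphisms as `Endo` in `Data.Monoid`; here is a hand-rolled sketch (primed names avoid the clash):

```haskell
-- The single hom-set M(a, a) of a one-object category: its
-- elements are endomorphisms, "multiplication" is composition,
-- and the identity morphism is the unit.
newtype Endo' a = Endo' { appEndo' :: a -> a }

instance Semigroup (Endo' a) where
  Endo' f <> Endo' g = Endo' (f . g)

instance Monoid (Endo' a) where
  mempty = Endo' id
```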
We get associativity for free, and we have the identity morphism `ida` serving as the unit. It might seem at first that we haven’t made progress and, in fact, we might have made some things more complicated by forgetting the internal structure of objects. For instance, in the category Set, it’s no longer obvious what an empty set is. You can’t say it’s a set with no elements because of the Fight Club rule. Similarly with the singleton set. Fortunately, it turns out that both these sets can be uniquely described in terms of their interactions with other sets. By that I mean the kind of functions/morphisms that connect them to other objects in Set. These object-opaque definitions are called universal constructions. For instance, the empty set is the only set that has a unique morphism going from it to every other set. The advantage of this characterization is that it can now be applied to any category. One may ask this question in any category: Is there an object that has this property? If there is, we call it the initial object. The empty set is the initial object in Set. Similarly, a singleton set is the terminal object in Set (and it’s unique up to unique isomorphism). A cartesian product of two sets can also be defined using a universal construction, one which doesn’t mention elements (or pairs of elements). And again, this construction may be used to define a (categorical) product in other categories. Of particular interest are categories where a product exists for every pair of objects (it does in Set). In such categories there is actually an even better way of defining a product using an adjunction. But before we can get to adjunctions, let me summarize a few millions of years of evolution in a few terse paragraphs. A functor is a mapping of categories that preserves their structure. It maps objects to objects and morphisms to morphisms. 
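In Haskell, the initial and terminal objects described above appear as `Void` and `()` (a sketch; `fromInitial` and `toTerminal` are my names):

```haskell
import Data.Void (Void, absurd)

-- Void is initial: there is a unique function from it to any
-- type (vacuously, since Void has no values).
fromInitial :: Void -> a
fromInitial = absurd

-- () is terminal: there is a unique function to it from any
-- type, the one that discards its argument.
toTerminal :: a -> ()
toTerminal _ = ()
```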
In Haskell we define a functor (really, an endofunctor) as a type constructor `f` (a mapping from types to types) that can be lifted to functions that go between these types: ```class Functor f where fmap :: (a -> b) -> (f a -> f b)``` The mapping of morphisms must also preserve composition and identity. Functors may collapse multiple objects into one, and multiple morphisms into one, but they never break connections. You may also think of functors as embedding one category inside another. Finally, functors can be composed in the obvious way, and there is an identity endofunctor that maps a category onto itself. It follows that categories (at least the small ones) form a category Cat in which functors serve as morphisms. There may be many ways of embedding one category inside another, and it’s extremely useful to be able to compare such embeddings by defining mappings between them. If we have two functors `F` and `G` between two categories C and D we define a natural transformation between these functors by picking a morphism between a pair `F a` and `G a`, for every `a`. In Haskell, a natural transformation between two functors `f` and `g` is a polymorphic function: `type Nat f g = forall a. f a -> g a` Natural transformations may be composed, and there is an identity natural transformation from any functor to itself. It follows that functors between any two categories C and D form a category denoted by `[C, D]`, where natural transformations play the role of morphisms. A hom-set in such a category is a set of natural transformations between two functors `F` and `G` denoted by `[C, D](F, G)`. An invertible natural transformation is called a natural isomorphism. If two functors are naturally isomorphic they are essentially the same. Using a pair of functors that are the inverse of each other we may define equivalence of categories, but there is an even more useful concept of adjoint functors that compare the structures of two non-equivalent categories.
The idea is that we have a “right” functor `R` going from category C to D and a “left” functor `L` going in the other direction, from D to C. There are two possible compositions of these functors, both resulting in round trips or endofunctors. The categories would be equivalent if those endofunctors were naturally isomorphic to identity endofunctors. But for an adjunction, we impose weaker conditions. We require that there be two natural transformations (not necessarily isomorphisms): ```η :: ID -> R ∘ L ε :: L ∘ R -> IC``` The first transformation η is called the unit; and the second ε, the counit of the adjunction. In a small category objects form sets, so it’s possible to form a cartesian product of two small categories C and D. Objects in such a category C×D are pairs of objects `<c, d>`, and morphisms are pairs of morphisms `<f, g>`. After these preliminaries, we are ready to define the categorical product in C using an adjunction. We choose C×C as the left category. The left functor is the diagonal functor Δ that maps any object `c` to a pair `<c, c>` and any morphism `f` to a pair of morphisms `<f, f>`. Its right adjoint, if it exists, maps a pair of objects `<a, b>` to their categorical product `a×b`. Interestingly, the terminal object can also be defined using an adjunction. This time we choose, as the left category, a singleton category with one object and one (identity) morphism. The left functor maps any object `c` to the singleton object. Its right adjoint, if it exists, maps the singleton object to the terminal object in C. A category with all products and the terminal object is called a cartesian category, or cartesian monoidal category. Why monoidal? Because the operation of taking the categorical product is monoidal. It’s associative, up to isomorphism; and its unit is the terminal object. Incidentally, this is the same monoidal structure that we’ve seen in Set, but now it’s generalized to the level of other categories.
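Specialized to Haskell types, the product adjunction says that a pair of functions out of `c` is the same thing as a single function into the pair type (a sketch; `fork` and `unfork` are my names):

```haskell
-- One direction of the product adjunction: a morphism out of the
-- diagonal, i.e. a pair (c -> a, c -> b), corresponds to a single
-- morphism into the product, c -> (a, b).
fork :: (c -> a) -> (c -> b) -> c -> (a, b)
fork f g x = (f x, g x)

-- The inverse direction, recovered via the two projections.
unfork :: (c -> (a, b)) -> (c -> a, c -> b)
unfork h = (fst . h, snd . h)
```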
There was another monoidal structure in Set induced by the disjoint sum. Its categorical generalization is given by the coproduct, with the initial object playing the role of the unit. But what about the set of morphisms? In Set, morphisms between two sets `a` and `b` form a hom-set, which is itself an object of the same category Set. In an arbitrary category C, a hom-set `C(a, b)` is still a set — but now it’s not an object of C. That’s why it’s called the external hom-set. However, there are categories in which each external hom-set has a corresponding object called the internal hom. This object is also called an exponential, `b^a`. It can be defined using an adjunction, but only if the category supports products. It’s an adjunction in which the left and right categories are the same. The left endofunctor takes an object `b` and maps it to a product `b×a`, where `a` is an arbitrary fixed object. Its right adjoint maps an object `b` to the exponential `b^a`. The counit of this adjunction: `ε :: b^a × a -> b` is the evaluation function. In Haskell it has the following signature: `eval :: (a -> b, a) -> b` The Haskell function type `a->b` is equivalent to the exponential `b^a`. A category that has all products and exponentials together with the terminal object is called cartesian closed. Cartesian closed categories, or CCCs, play an important role in the semantics of programming languages. ## Tensorosaurus Rex We have already seen two very similar monoidal structures induced by products and coproducts. In mathematics, two is a crowd, so let’s look for a pattern. Both product and coproduct act as bifunctors `C×C->C`. Let’s call such a bifunctor a tensor product and write it as an infix operator `a ⊗ b`. 
As a bifunctor, the tensor product can also lift pairs of morphisms: ```f :: a -> a' g :: b -> b' f ⊗ g :: a ⊗ b -> a' ⊗ b'``` To define a monoid on top of a tensor product, we will require that it be associative — up to isomorphism: `α :: (a ⊗ b) ⊗ c -> a ⊗ (b ⊗ c)` We also need a unit object, which we will call `i`. The two unit laws are: ```λ :: i ⊗ a -> a ρ :: a ⊗ i -> a``` A category with a tensor product that satisfies the above properties, plus some additional coherence conditions, is called a monoidal category. We can now specialize the tensor product to categorical product, in which case the unit object is the terminal object; or to coproduct, in which case we chose the initial object as the unit. But there is an even more interesting operation that has all the properties of the tensor product. I’m talking about functor composition. ## Functorosaurus Functors between any two categories C and D form a functor category `[C, D]` with natural transformations playing the role of morphisms. In general, these functors don’t compose (their endpoints don’t match) unless we pick the target category to be the same as the source category. ### Endofunctor Composition In the endofunctor category `[C, C]` any two functors can be composed. But in `[C, C]` functors are objects, so functor composition becomes an operation on objects. For any two endofunctors `F` and `G` it produces a new endofunctor `F∘G`. It’s a binary operation, so it’s a potential candidate for a tensor product. Indeed, it is a bifunctor: it can be lifted to natural transformations, which are morphisms in `[C, C]`. It’s associative — in fact it’s strictly associative, the associator α is the identity natural transformation. The unit with respect to endofunctor composition is the identity functor `I`. So the category of endofunctors is a monoidal category. Unlike product and coproduct, which are symmetric up to isomorphism, endofunctor composition is not symmetric. 
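Functor composition and its action on natural transformations can be sketched in Haskell (the `Compose` newtype and `hcomp` are hand-rolled here so the snippet stands alone; a similar `Compose` lives in `Data.Functor.Compose`):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Composition of two functors: an object of [C, C] built from two others.
newtype Compose f g a = Compose { getCompose :: f (g a) }

instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap h (Compose fga) = Compose (fmap (fmap h) fga)

-- The bifunctor action of ∘ on morphisms of [C, C]: it lifts a pair of
-- natural transformations to a natural transformation of the composites.
hcomp :: Functor f'
      => (forall x. f x -> f' x)   -- α :: F -> F'
      -> (forall x. g x -> g' x)   -- β :: G -> G'
      -> Compose f g a -> Compose f' g' a
hcomp alpha beta (Compose fga) = Compose (fmap beta (alpha fga))
```

This is (one formulation of) horizontal composition of natural transformations, the operation that makes ∘ a bifunctor on `[C, C]`.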
In general, there is no relation between `F∘G` and `G∘F`. ### Profunctor Composition Different species of functors came up with their own composition strategies. Take for instance the profunctors, which are functors `C^op×D->Set`. They generalize the idea of relations between objects in C and D. The sets they map to may be thought of as sets of proofs of the relationship. An empty set means that the two objects are not related. If you want to compose two relations, you have to find an element that’s common to the image of one relation and the source of the other (relations are not, in general, symmetric). The proofs of the new composite relation are pairs of proofs of individual relations. Symbolically, if `p` and `q` are such profunctors/relations, their composition can be written as: `exists x. (p a x, q x b)` Existential quantification in Haskell translates to polymorphic construction, so the actual definition is: `data PCompose p q a b = forall x . PCompose (p a x) (q x b)` In category theory, existential quantification is encoded as the coend, which is a generalization of a colimit for profunctors. The coend formula for the composition of two profunctors reads: `(p ⊗ q) a b = ∫^z p a z × q z b` The product here is the cartesian product of sets. Profunctors, being functors, form a category in which morphisms are natural transformations. As long as the two categories that they relate are the same, any two profunctors can be composed using a coend. So profunctor composition is a good candidate for a tensor product in such a category. It is indeed associative, up to isomorphism. But what’s the unit of profunctor composition? It turns out that the simplest profunctor — the hom-functor — because of the Yoneda lemma, is the unit of composition: ```∫^z C(a, z) × p z b ≅ p a b ∫^z p a z × C(z, b) ≅ p a b``` Thus profunctors `C^op×C->Set` form a monoidal category. ### Day Convolution Or consider Set-valued functors. They can be composed using Day convolution. 
For that, the category C must itself be monoidal. Day convolution of two functors `C->Set` is defined using a coend: `(f ★ g) a = ∫^{x y} f x × g y × C(x ⊗ y, a)` Here, the tensor product `x ⊗ y` comes from the monoidal category C, the other products are just cartesian products of sets (one of them being the hom-set). As before, in Haskell, the coend turns into an existential quantifier, which can be written symbolically: `Day f g a = exists x y. ((f x, g y), (x, y) -> a)` and encoded as a polymorphic constructor: `data Day f g a = forall x y. Day (f x) (g y) ((x, y) -> a)` We use the fact that the category of Haskell types is monoidal with respect to cartesian product. We can build a monoidal category based on Day convolution. The unit with respect to Day convolution is `C(i, -)`, the hom-functor applied to `i` — the unit in the monoidal category C. For instance, the left identity can be derived from: `(C(i, -) ★ g) a = ∫^{x y} C(i, x) × g y × C(x ⊗ y, a)` Applying the Yoneda lemma, or “integrating over `x`,” we get: `∫^y g y × C(i ⊗ y, a)` Considering that `i` is the unit of the tensor product, we can perform the second integration to get `g a`. ## The Monozoic Era Monoidal categories are important because they provide rich grazing grounds for monoids. In a monoidal category we can define a more general monoid. It’s an object `m` with some special properties. These properties replace the usual definitions of multiplication and unit. First, let’s reformulate the definition of a set-based monoid, taking into account the fact that Set is a monoidal category with respect to cartesian product. A monoid is a set, so it’s an object in Set — let’s call it `m`. Multiplication maps pairs of elements of `m` back to `m`. These pairs are just elements of the cartesian product `m × m`. So multiplication is defined as a function: `μ :: m × m -> m` The unit of multiplication is a special element of `m`. 
We can select this element by providing a special morphism from the singleton set to `m`: `η :: () -> m` We can now express associativity and unit laws as properties of these two functions. The beauty of this formulation is that it generalizes easily to any cartesian category — just replace functions with morphisms and the unit `()` with the terminal object. There’s no reason to stop there: we can lift this definition all the way up to a monoidal category. A monoid in a monoidal category is an object `m` together with two morphisms: ```μ :: m ⊗ m -> m η :: i -> m``` Here `i` is the unit object with respect to the tensor product ⊗. Monoidal laws can be expressed using the associator α and the two unitors, λ and ρ, of the monoidal category. Having previously defined several interesting monoidal categories, we can now go digging for new monoids. Let’s start with the category of endofunctors where the tensor product is functor composition. A monoid in the category of endofunctors is an endofunctor `m` and two morphisms. Remember that morphisms in a functor category are natural transformations. So we end up with two natural transformations: ```μ :: m ∘ m -> m η :: I -> m``` where `I` is the identity functor. Their components at an object `a` are: ```μ_a :: m (m a) -> m a η_a :: a -> m a``` This construct is easily recognizable as a monad. The associativity and unit laws are just monad laws. In Haskell, `μ_a` is called `join` and `η_a` is called `return`. ### Arrows Let’s switch to the category of profunctors `C^op×C->Set` with profunctor composition as the tensor product. A monoid in that category is a profunctor `ar`. Multiplication is defined by a natural transformation: `μ :: ar ⊗ ar -> ar` Its component at `a`, `b` is: `μ_{a b} :: (∫^z ar a z × ar z b) -> ar a b` To simplify this formula we need a very useful identity that relates coends to ends. 
A hom-set that starts at a coend is equivalent to an end of the hom-set: `C(∫^z p z z, y) ≅ ∫_z C(p z z, y)` Or, replacing external hom-sets with internal homs: `(∫^z p z z) -> y ≅ ∫_z (p z z -> y)` In Haskell, this formula is used to turn functions that take existential types into functions that are polymorphic: `(exists z. p z z) -> y ≅ forall z. (p z z -> y)` Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type. Using that identity, our multiplication formula can be rewritten as: `μ_{a b} :: ∫_z ((ar a z × ar z b) -> ar a b)` In Haskell, this derivation uses the existential quantifier: `mu a b = (exists z. (ar a z, ar z b)) -> ar a b` As we discussed, a function from an existential type is equivalent to a polymorphic function: `forall z. (ar a z, ar z b) -> ar a b` or, after currying and dropping the redundant quantification: `ar a z -> ar z b -> ar a b` This looks very much like a composition of morphisms in a category. In Haskell, this function is known in the infix-operator form as: `(>>>) :: ar a z -> ar z b -> ar a b` Let’s see what we get as the monoidal unit. Remember that the unit object in the profunctor category is the hom-functor `C(a, b)`. `η_{a b} :: C(a, b) -> ar a b` In Haskell, this polymorphic function is traditionally called `arr`: `arr :: (a -> b) -> ar a b` The whole construct is known in Haskell as a pre-arrow. The full arrow is defined as a monoid in the category of strong profunctors, with strength defined as a natural transformation: `st_{a b} :: p a b -> p (a, x) (b, x)` In Haskell, this function is called `first`. ### Applicatives There are several categorical formulations of what’s called in Haskell the applicative functor. To first approximation, Haskell’s type system is the category Set. To translate Haskell constructs to category theory, the safest approach is to just play with endofunctors in Set. 
But both Set and its endofunctors have a lot of extra structure, so I’d like to start in a slightly more general setting. Let’s have a look at the monoidal category of functors `[C, Set]`, with Day convolution as the tensor product, and `C(i, -)` as unit. A monoid in this category is a functor `f` with multiplication given by the natural transformation: `μ :: f ★ f -> f` and unit given by: `η :: C(i, -) -> f` It turns out that the existence of these two natural transformations is equivalent to the requirement that `f` be a lax monoidal functor, which is the basis of the definition of the applicative functor in Haskell. A monoidal functor is a functor that maps monoidal structure of one category to the monoidal structure of another category. It maps the tensor product, and it maps the unit object. In our case, the source category C has the monoidal structure given by the tensor product ⊗, and the target category Set is monoidal with respect to the cartesian product ×. A functor is monoidal if it doesn’t matter whether we first map two objects and then multiply them, or first multiply them and then map the result: `f x × f y ≅ f (x ⊗ y)` Also, the unit object in Set should be isomorphic to the result of mapping the unit object in C: `() ≅ f i` Here, `()` is the terminal object in Set and `i` is the unit object in C. These conditions are relaxed in the definition of a lax monoidal functor. A lax monoidal functor replaces isomorphisms with regular unidirectional morphisms: ```f x × f y -> f (x ⊗ y) () -> f i``` It can be shown that a monoid in the category `[C, Set]`, with Day convolution as the tensor product, is equivalent to a lax monoidal functor. 
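As a minimal Haskell sketch of the two lax monoidal morphisms (taking C = Set = Hask, tensor = pairing, unit = `()`; the names `multM` and `unitM` are illustrative), `Maybe` carries exactly this structure:

```haskell
-- The lax monoidal structure maps, f x × f y -> f (x ⊗ y) and () -> f i,
-- specialized to f = Maybe with the cartesian structure of Hask.
multM :: (Maybe a, Maybe b) -> Maybe (a, b)
multM (Just a, Just b) = Just (a, b)
multM _                = Nothing

unitM :: () -> Maybe ()
unitM () = Just ()
```

Note that `multM` is genuinely one-directional: from `Nothing :: Maybe (a, b)` you cannot recover a pair of `Maybe`s, which is why the lax version uses morphisms rather than isomorphisms.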
The Haskell definition of `Applicative` doesn’t look like Day convolution or like a lax monoidal functor: ```class Functor f => Applicative f where (<*>) :: f (a -> b) -> (f a -> f b) pure :: a -> f a``` You may recognize `pure` as a component of η, the natural transformation defining the monoid with respect to Day convolution. When you replace the category C with Set, the unit object `C(i, -)` turns into the identity functor. However, the operator `<*>` is lifted from the definition of yet another lax functor, the lax closed functor. It’s a functor that preserves the closed structure defined by the internal hom functor. In Set, the internal hom functor is just the arrow `(->)`, hence the definition: ```class Functor f => Closed f where (<*>) :: f (a -> b) -> (f a -> f b) unit :: f ()``` As long as the internal hom is defined through the adjunction with the product, a lax closed functor is equivalent to a lax monoidal functor. ## Conclusion It is pretty shocking to realize how many different animals share the same body plan — I’m talking here about the monoid as the skeleton of a myriad of different mathematical and programming constructs. And I haven’t even touched on the whole kingdom of enriched categories, where monoidal categories form the reservoir of hom-objects. Virtually all notions I’ve discussed here can be generalized to enriched categories, including functors, profunctors, the Yoneda lemma, Day convolution, and so on. ## Glossary • Hadean Eon: Began with the formation of the Earth about 4.6 billion years ago. It’s the period before the earliest-known rocks. • Archean Eon: During the Archean, the Earth’s crust had cooled enough to allow the formation of continents. • Cambrian explosion: Relatively short evolutionary event, during which most major animal phyla appeared. • Arthropods: from Greek ἄρθρωσις árthrosis, “joint” • Tensor, from Latin tendere “to stretch” • Functor: from Latin fungi, “perform” ## Bibliography 1. 
Moggi, Notions of Computation and Monads. 2. Rivas, Jaskelioff, Notions of Computation as Monoids. Unlike monads, which came into programming straight from category theory, applicative functors have their origins in programming. McBride and Paterson introduced applicative functors as a programming pearl in their paper Applicative programming with effects. They also provided a categorical interpretation of applicatives in terms of strong lax monoidal functors. It’s been accepted that, just like “a monad is a monoid in the category of endofunctors,” so “an applicative is a strong lax monoidal functor.” The so-called “tensorial strength” seems to be important in categorical semantics, and in his seminal paper Notions of computation and monads, Moggi argued that effects should be described using strong monads. It makes sense, considering that a computation is done in a context, and you should be able to make the global context available under the monad. The fact that we don’t talk much about strong monads in Haskell is due to the fact that all functors in the category Set, which underlies Haskell’s type system, have canonical strength. So why do we talk about strength when dealing with applicative functors? I have looked into this question and have come to the conclusion that there is no fundamental reason, and that it’s okay to just say: An applicative is a lax monoidal functor In this post I’ll discuss different equivalent categorical definitions of the applicative functor. I’ll start with a lax closed functor, then move to a lax monoidal functor, and show the equivalence of the two definitions. Then I’ll introduce the calculus of ends and show that the third definition of the applicative functor as a monoid in a suitable functor category equipped with Day convolution is equivalent to the previous ones. 
## Applicative as a Lax Closed Functor ```class Functor f => Applicative f where (<*>) :: f (a -> b) -> (f a -> f b) pure :: a -> f a``` At first sight it doesn’t seem to involve a monoidal structure. It looks more like preserving function arrows (I added some redundant parentheses to suggest this interpretation). Categorically, functors that “preserve arrows” are known as closed functors. Let’s look at a definition of a closed functor `f` between two categories C and D. We have to assume that both categories are closed, meaning that they have internal hom-objects for every pair of objects. Internal hom-objects are also called function objects or exponentials. They are normally defined through the right adjoint to the product functor: `C(z × a, b) ≅ C(z, a => b)` To distinguish between sets of morphisms and function objects (they are the same thing in Set), I will temporarily use double arrows for function objects. We can take a functor `f` and act with it on the function object `a=>b` in the category C. We get an object `f (a=>b)` in D. Or we can map the two objects `a` and `b` from C to D and then construct the function object in D: `f a => f b`. We call a functor closed if the two results are isomorphic (I have subscripted the two arrows with the categories where they are defined): `f (a =>_C b) ≅ (f a =>_D f b)` and if the functor preserves the unit object: `i_D ≅ f i_C` What’s the unit object? Normally, this is the unit with respect to the same product that was used to define the function object using the adjunction. I’m saying “normally,” because it’s possible to define a closed category without a product. Note: The two arrows and the two `i`s are defined with respect to two different products. The first isomorphism must be natural in both `a` and `b`. Also, to complete the picture, there are some diagrams that must commute. The two isomorphisms that define a closed functor can be relaxed and replaced by unidirectional morphisms. 
The result is a lax closed functor: ```f (a => b) -> (f a => f b) i -> f i``` This looks almost like the definition of `Applicative`, except for one problem: how can we recover the natural transformation we call `pure` from a single morphism `i -> f i`? One way to do it is from the position of strength. An endofunctor `f` has tensorial strength if there is a natural transformation: `st_{c a} :: c ⊗ f a -> f (c ⊗ a)` Think of `c` as the context in which the computation `f a` is performed. Strength means that we can use this external context inside the computation. In the category Set, with the tensor product replaced by cartesian product, all functors have canonical strength. In Haskell, we would define it as: `st (c, fa) = fmap ((,) c) fa` The morphism in the definition of the lax closed functor translates to: `unit :: () -> f ()` Notice that, up to isomorphism, the unit type `()` is the unit with respect to cartesian product. The relevant isomorphisms are: ```λ_a :: ((), a) -> a ρ_a :: (a, ()) -> a``` Here’s the derivation from Rivas and Jaskelioff’s Notions of Computation as Monoids: ``` a ≅ (a, ()) -- unit law, ρ⁻¹ -> (a, f ()) -- lax unit -> f (a, ()) -- strength ≅ f a -- lifted unit law, f ρ``` Strength is necessary if you’re starting with a lax closed (or monoidal — see the next section) endofunctor in an arbitrary closed (or monoidal) category and you want to derive `pure` within that category — not after you restrict it to Set. There is, however, an alternative derivation using the Yoneda lemma: ```f () ≅ forall a. (() -> a) -> f a -- Yoneda ≅ forall a. a -> f a -- because: (() -> a) ≅ a``` We recover the whole natural transformation from a single value. The advantage of this derivation is that it generalizes beyond endofunctors and it doesn’t require strength. As we’ll see later, it also ties nicely with the Day-convolution definition of applicative. 
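The Yoneda step can be run directly in Haskell: given only the value `unit :: f ()`, the whole of `pure` is recovered by mapping over it (a hedged sketch; `pureFromUnit` is an illustrative name):

```haskell
-- f () ≅ forall a. a -> f a: the forward direction of the Yoneda
-- isomorphism, recovering pure from the single value f ().
pureFromUnit :: Functor f => f () -> a -> f a
pureFromUnit u a = fmap (const a) u
```

For `f = Maybe` and `u = Just ()`, this yields the familiar `pure = Just`; for the list functor, the shape of `u` determines the shape of every `pure a`.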
The Yoneda lemma only works for Set-valued functors, but so does Day convolution (there are enriched versions of both Yoneda and Day convolution, but I’m not going to discuss them here). We can define the categorical version of Haskell’s applicative functor as a lax closed functor going from a closed category C to Set. It’s a functor equipped with a natural transformation: `f (a => b) -> (f a -> f b)` where `a=>b` is the internal hom-object in `C` (the second arrow is a function type in Set), and a function: `1 -> f i` where `1` is the singleton set and `i` is the unit object in `C`. The importance of a categorical definition is that it comes with additional identities or “axioms.” A lax closed functor must be compatible with the structure of both categories. I will not go into details here, because we are really only interested in closed categories that are monoidal, where these axioms are easier to express. The definition of a lax closed functor is easily translated to Haskell: ```class Functor f => Closed f where (<*>) :: f (a -> b) -> f a -> f b unit :: f ()``` ## Applicative as a Lax Monoidal Functor Even though it’s possible to define a closed category without a monoidal structure, in practice we usually work with monoidal categories. This is reflected in the equivalent definition of Haskell’s applicative functor as a lax monoidal functor. In Haskell, we would write: ```class Functor f => Monoidal f where (>*<) :: (f a, f b) -> f (a, b) unit :: f ()``` This definition is equivalent to our previous definition of a closed functor. That’s because, as we’ve seen, a function object in a monoidal category is defined in terms of a product. We can show the equivalence in a more general categorical setting. 
This time let’s start with a symmetric closed monoidal category C, in which the function object is defined through the right adjoint to the tensor product: `C(z ⊗ a, b) ≅ C(z, a => b)` As usual, the tensor product is associative and unital — with the unit object `i` — up to isomorphism. The symmetry is defined through natural isomorphism: `γ :: a ⊗ b -> b ⊗ a` A functor `f` between two monoidal categories is lax monoidal if there exist: (1) a natural transformation `f a ⊗ f b -> f (a ⊗ b)` and (2) a morphism `i -> f i` Notice that the products and units on either side of the two mappings are from different categories. A (lax-) monoidal functor must also preserve associativity and unit laws. For instance a triple product `f a ⊗ (f b ⊗ f c)` may be rearranged using an associator α to give `(f a ⊗ f b) ⊗ f c` then converted to `f (a ⊗ b) ⊗ f c` and then to `f ((a ⊗ b) ⊗ c)` Or it could be first converted to `f a ⊗ f (b ⊗ c)` and then to `f (a ⊗ (b ⊗ c))` These two should be equivalent under the associator in C. Similarly, `f a ⊗ i` can be simplified to `f a` using the right unitor ρ in D. Or it could be first converted to `f a ⊗ f i`, then to `f (a ⊗ i)`, and then to `f a`, using the right unitor in C. The two paths should be equivalent. (Similarly for the left identity.) We will now consider functors from C to Set, with Set equipped with the usual cartesian product, and the singleton set as unit. A lax monoidal functor is defined by: (1) a natural transformation: `(f a, f b) -> f (a ⊗ b)` and (2) a choice of an element of the set `f i` (a function from 1 to `f i` picks an element from that set). We need the target category to be Set because we want to be able to use the Yoneda lemma to show equivalence with the standard definition of applicative. I’ll come back to this point later. ## The Equivalence The definitions of a lax closed and a lax monoidal functors are equivalent when C is a closed symmetric monoidal category. 
The proof relies on the existence of the adjunction, in particular the unit and the counit of the adjunction: ```η_a :: a -> (b => (a ⊗ b)) ε_b :: (a => b) ⊗ a -> b``` For instance, let’s assume that `f` is lax-closed. We want to construct the mapping `(f a, f b) -> f (a ⊗ b)` First, we apply the lifted pair (unit, identity), `(f η, f id)`, where `f η :: f a -> f (b => a ⊗ b)`, to the left-hand side. We get: `(f (b => a ⊗ b), f b)` Now we can use (the uncurried version of) the lax-closed morphism: `(f (b => x), f b) -> f x` to get: `f (a ⊗ b)` Conversely, assuming the lax-monoidal property we can show that the functor is lax-closed, that is to say, implement the following function: `(f (a => b), f a) -> f b` First we use the lax monoidal morphism on the left-hand side: `f ((a => b) ⊗ a)` and then use the counit (a.k.a. the evaluation morphism) to get the desired result `f b` There is yet another presentation of applicatives using Day convolution. But before we get there, we need a little refresher on calculus. ## Calculus of Ends Ends and coends are very useful constructs generalizing limits and colimits. They are defined through universal constructions. They have a few fundamental properties that are used over and over in categorical calculations. I’ll just introduce the notation and a few important identities. We’ll be working in a symmetric monoidal category C with functors from C to Set and profunctors from C^op×C to Set. The end of a profunctor `p` is a set denoted by: `∫_a p a a` The most important thing about ends is that a set of natural transformations between two functors `f` and `g` can be represented as an end: `[C, Set](f, g) = ∫_a Set(f a, g a)` In Haskell, the end corresponds to universal quantification over a functor of mixed variance. For instance, the natural transformation formula takes the familiar form: `forall a. 
f a -> g a` The Yoneda lemma, which deals with natural transformations, can also be written using an end: `∫_z (C(a, z) -> f z) ≅ f a` In Haskell, we can write it as the equivalence: `forall z. ((a -> z) -> f z) ≅ f a` which is a generalization of the continuation passing transform. The dual notion of coend is similarly written using an integral sign, with the “integration variable” in the superscript position: `∫^a p a a` In pseudo-Haskell, a coend is represented by an existential quantifier. It’s possible to define existential data types in Haskell by converting existential quantification to universal quantification. The relevant identity in terms of coends and ends reads: `(∫^z p z z) -> y ≅ ∫_z (p z z -> y)` In Haskell, this formula is used to turn functions that take existential types into functions that are polymorphic: `(exists z. p z z) -> y ≅ forall z. (p z z -> y)` Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type. The equivalent of the Yoneda lemma for coends reads: `∫^z f z × C(z, a) ≅ f a` `exists z. (f z, z -> a) ≅ f a` (The intuition is that the only thing you can do with this pair is to `fmap` the function over the first component.) There is also a contravariant version of this identity: `∫^z C(a, z) × f z ≅ f a` where `f` is a contravariant functor (a.k.a. a presheaf). In pseudo-Haskell: `exists z. (a -> z, f z) ≅ f a` (The intuition is that the only thing you can do with this pair is to apply the `contramap` of the first component to the second component.) Using coends we can define a tensor product in the category of functors `[C, Set]`. This product is called Day convolution: `(f ★ g) a = ∫^{x y} f x × g y × C(x ⊗ y, a)` It is a bifunctor in that category (read, it can be used to lift natural transformations). It’s associative and symmetric up to isomorphism. It also has a unit — the hom-functor `C(i, -)`, where `i` is the monoidal unit in C. 
In other words, Day convolution imbues the category `[C, Set]` with monoidal structure. Let’s verify the unit laws. `(C(i, -) ★ g) a = ∫^{x y} C(i, x) × g y × C(x ⊗ y, a)` We can use the contravariant Yoneda to “integrate over x” to get: `∫^y g y × C(i ⊗ y, a)` Considering that `i` is the unit of the tensor product in C, we get: `∫^y g y × C(y, a)` Covariant Yoneda lets us “integrate over y” to get the desired `g a`. The same method works for the right unit law. ## Applicative as a Monoid Given a monoidal category, we can always define a monoid as an object `m` equipped with two morphisms: ```μ :: m ⊗ m -> m η :: i -> m``` satisfying the laws of associativity and unitality. We have shown that the functor category `[C, Set]` (with C a symmetric monoidal category) is monoidal under Day convolution. An object in this category is a functor `f`. The two morphisms that would make it a candidate for a monoid are natural transformations: ```μ :: f ★ f -> f η :: C(i, -) -> f``` The `a` component of the natural transformation μ can be rewritten as: `(∫^{x y} f x × f y × C(x ⊗ y, a)) -> f a` which is equivalent to: `∫_{x y} (f x × f y × C(x ⊗ y, a) -> f a)` or, upon currying: `∫_{x y} (f x, f y) -> C(x ⊗ y, a) -> f a` It turns out that a monoid so defined is equivalent to a lax monoidal functor. This was shown by Rivas and Jaskelioff. The following derivation is due to Bob Atkey. The trick is to start with the whole set of natural transformations from `f★f` to `f`. The multiplication μ is just one of them. 
We’ll express the set of natural transformations as an end: `∫_a ((f ★ f) a -> f a)` Plugging in the formula for the `a` component of μ, we get: `∫_{a x y} (f x, f y) -> C(x ⊗ y, a) -> f a` The end over `a` does not involve the first argument, so we can move the integral sign: `∫_{x y} (f x, f y) -> ∫_a (C(x ⊗ y, a) -> f a)` Then we use the Yoneda lemma to “perform the integration” over `a`: `∫_{x y} (f x, f y) -> f (x ⊗ y)` You may recognize this as a set of natural transformations that define a lax monoidal functor. We have established a one-to-one correspondence between these natural transformations and the ones defining monoidal multiplication using Day convolution. The remaining part is to show the equivalence between the unit with respect to Day convolution and the second part of the definition of the lax monoidal functor, the morphism: `1 -> f i` We start with the set of natural transformations that contains our η: `∫_a (C(i, a) -> f a)` By Yoneda, this is just `f i`. Picking an element from a set is equivalent to defining a morphism from the singleton set `1`, so for any choice of η we get: `1 -> f i` and vice versa. The two definitions are equivalent. Notice that the monoidal unit η under Day convolution becomes the definition of `pure` in the Haskell version of applicative. Indeed, when we replace the category C with Set, `f` becomes an endofunctor, and the unit of Day convolution `C(i, -)` becomes the identity functor `Id`. We get: `η :: Id -> f` or, in components: `pure :: a -> f a` So, strictly speaking, the Haskell definition of `Applicative` mixes the elements of the lax closed functor and the monoidal unit under Day convolution. ## Acknowledgments I’m grateful to Mauro Jaskelioff and Exequiel Rivas for correspondence and to Bob Atkey, Dimitri Chikhladze, and Mike Shulman for answering my questions on Math Overflow. 
Now that we have covered monads, we can reap the benefits of duality and get comonads for free simply by reversing the arrows and working in the opposite category. Recall that, at the most basic level, monads are about composing Kleisli arrows: `a -> m b` where `m` is a functor that is a monad. If we use the letter `w` (upside down `m`) for the comonad, we can define co-Kleisli arrows as morphisms of the type: `w a -> b` The analog of the fish operator for co-Kleisli arrows is defined as: `(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)` For co-Kleisli arrows to form a category we also have to have an identity co-Kleisli arrow, which is called `extract`: `extract :: w a -> a` This is the dual of `return`. We also have to impose the laws of associativity as well as left- and right-identity. Putting it all together, we could define a comonad in Haskell as: ```class Functor w => Comonad w where (=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c) extract :: w a -> a``` In practice, we use slightly different primitives, as we’ll see shortly. The question is, what’s the use for comonads in programming? Let’s compare the monad with the comonad. A monad provides a way of putting a value in a container using `return`. It doesn’t give you access to a value or values stored inside. Of course, data structures that implement monads might provide access to their contents, but that’s considered a bonus. There is no common interface for extracting values from a monad. And we’ve seen the example of the `IO` monad that prides itself on never exposing its contents. A comonad, on the other hand, provides the means of extracting a single value from it. It does not give the means to insert values. So if you want to think of a comonad as a container, it always comes pre-filled with contents, and it lets you peek at it. 
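A quick sanity check of the co-Kleisli interface, using the trivial `Identity` functor, which is a comonad in a degenerate way (everything here is an illustrative sketch; the standard `Identity` lives in `Data.Functor.Identity`):

```haskell
newtype Identity a = Identity { runIdentity :: a }

-- For Identity, a co-Kleisli arrow Identity a -> b is a plain function
-- in disguise, extract unwraps the value, and =>= is just composition.
extractI :: Identity a -> a
extractI = runIdentity

(=>=) :: (Identity a -> b) -> (Identity b -> c) -> (Identity a -> c)
f =>= g = g . Identity . f
```

With this instance, `extractI` is both the left and the right identity for `=>=`, matching the category laws stated above.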
Just as a Kleisli arrow takes a value and produces some embellished result — it embellishes it with context — a co-Kleisli arrow takes a value together with a whole context and produces a result. It’s an embodiment of contextual computation. Remember the reader monad? We introduced it to tackle the problem of implementing computations that need access to some read-only environment `e`. Such computations can be represented as pure functions of the form: `(a, e) -> b` We used currying to turn them into Kleisli arrows: `a -> (e -> b)` But notice that these functions already have the form of co-Kleisli arrows. Let’s massage their arguments into the more convenient functor form:

```
data Product e a = P e a
  deriving Functor
```

We can easily define the composition operator by making the same environment available to the arrows that we are composing:

```
(=>=) :: (Product e a -> b) -> (Product e b -> c) -> (Product e a -> c)
f =>= g = \(P e a) ->
  let b = f (P e a)
      c = g (P e b)
  in c
```

The implementation of `extract` simply ignores the environment: `extract (P e a) = a` Not surprisingly, the product comonad can be used to perform exactly the same computations as the reader monad. In a way, the comonadic implementation of the environment is more natural — it follows the spirit of “computation in context.” On the other hand, monads come with the convenient syntactic sugar of the `do` notation. The connection between the reader monad and the product comonad goes deeper, having to do with the fact that the reader functor is the right adjoint of the product functor. In general, though, comonads cover different notions of computation than monads. We’ll see more examples later. It’s easy to generalize the `Product` comonad to arbitrary product types including tuples and records.

## Dissecting the Composition

Continuing the process of dualization, we could go ahead and dualize monadic bind and join.
Alternatively, we can repeat the process we used with monads, where we studied the anatomy of the fish operator. This approach seems more enlightening. The starting point is the realization that the composition operator must produce a co-Kleisli arrow that takes `w a` and produces a `c`. The only way to produce a `c` is to apply the second function to an argument of the type `w b`:

```
(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
f =>= g = g ...
```

But how can we produce a value of type `w b` that could be fed to `g`? We have at our disposal the argument of type `w a` and the function `f :: w a -> b`. The solution is to define the dual of bind, which is called extend: `extend :: (w a -> b) -> w a -> w b` Using `extend` we can implement composition: `f =>= g = g . extend f` Can we next dissect `extend`? You might be tempted to say, why not just apply the function `w a -> b` to the argument `w a`, but then you quickly realize that you’d have no way of converting the resulting `b` to `w b`. Remember, the comonad provides no means of lifting values. At this point, in the analogous construction for monads, we used `fmap`. The only way we could use `fmap` here would be if we had something of the type `w (w a)` at our disposal. If we could only turn `w a` into `w (w a)`. And, conveniently, that would be exactly the dual of `join`. We call it `duplicate`: `duplicate :: w a -> w (w a)` So, just like with the definitions of the monad, we have three equivalent definitions of the comonad: using co-Kleisli arrows, `extend`, or `duplicate`. Here’s the Haskell definition taken directly from the `Control.Comonad` library:

```
class Functor w => Comonad w where
  extract :: w a -> a
  duplicate :: w a -> w (w a)
  duplicate = extend id
  extend :: (w a -> b) -> w a -> w b
  extend f = fmap f . duplicate
```

Default implementations of `extend` in terms of `duplicate` and vice versa are provided, so you only need to override one of them.
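For the product comonad from the previous section, `duplicate` and `extend` can be filled in following this recipe, and we can check that `g . extend f` agrees with the direct definition of `=>=` given earlier. This is a self-contained sketch; the two example arrows are hypothetical:

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Product e a = P e a deriving (Functor, Show, Eq)

extract :: Product e a -> a
extract (P _ a) = a

-- duplicate pairs the environment with the whole original container:
duplicate :: Product e a -> Product e (Product e a)
duplicate (P e a) = P e (P e a)

extend :: (Product e a -> b) -> Product e a -> Product e b
extend f = fmap f . duplicate

-- Composition derived from extend, as in the text:
(=>=) :: (Product e a -> b) -> (Product e b -> c) -> (Product e a -> c)
f =>= g = g . extend f

main :: IO ()
main = do
  let f (P e a) = e + a  -- add the environment
      g (P e b) = e * b  -- scale by the environment
  print (extend f (P 10 (5 :: Int)))   -- P 10 15
  print ((f =>= g) (P 10 (5 :: Int)))  -- 150
```

Unfolding by hand: `extend f (P 10 5)` is `fmap f (P 10 (P 10 5))`, which is `P 10 15`, and `g` then multiplies 15 by the environment 10.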
The intuition behind these functions is based on the idea that, in general, a comonad can be thought of as a container filled with values of type `a` (the product comonad was a special case of just one value). There is a notion of the “current” value, one that’s easily accessible through `extract`. A co-Kleisli arrow performs some computation that is focused on the current value, but it has access to all the surrounding values. Think of Conway’s Game of Life. Each cell contains a value (usually just `True` or `False`). A comonad corresponding to the Game of Life would be a grid of cells focused on the “current” cell. So what does `duplicate` do? It takes a comonadic container `w a` and produces a container of containers `w (w a)`. The idea is that each of these containers is focused on a different `a` inside `w a`. In the Game of Life, you would get a grid of grids, each cell of the outer grid containing an inner grid that’s focused on a different cell. Now look at `extend`. It takes a co-Kleisli arrow and a comonadic container `w a` filled with `a`s. It applies the computation to all of these `a`s, replacing them with `b`s. The result is a comonadic container filled with `b`s. `extend` does it by shifting the focus from one `a` to another and applying the co-Kleisli arrow to each of them in turn. In the Game of Life, the co-Kleisli arrow would calculate the new state of the current cell. To do that, it would look at its context — presumably its nearest neighbors. The default implementation of `extend` illustrates this process. First we call `duplicate` to produce all possible foci and then we apply `f` to each of them. This process of shifting the focus from one element of the container to another is best illustrated with the example of an infinite stream.
Such a stream is just like a list, except that it doesn’t have the empty constructor: `data Stream a = Cons a (Stream a)` It’s trivially a `Functor`:

```
instance Functor Stream where
  fmap f (Cons a as) = Cons (f a) (fmap f as)
```

The focus of a stream is its first element, so here’s the implementation of `extract`: `extract (Cons a _) = a` `duplicate` produces a stream of streams, each focused on a different element. `duplicate (Cons a as) = Cons (Cons a as) (duplicate as)` The first element is the original stream, the second element is the tail of the original stream, the third element is its tail, and so on, ad infinitum. Here’s the complete instance:

```
instance Comonad Stream where
  extract (Cons a _) = a
  duplicate (Cons a as) = Cons (Cons a as) (duplicate as)
```

This is a very functional way of looking at streams. In an imperative language, we would probably start with a method `advance` that shifts the stream by one position. Here, `duplicate` produces all shifted streams in one fell swoop. Haskell’s laziness makes this possible and even desirable. Of course, to make a `Stream` practical, we would also implement the analog of `advance`:

```
tail :: Stream a -> Stream a
tail (Cons a as) = as
```

but it’s never part of the comonadic interface. If you have any experience with digital signal processing, you’ll see immediately that a co-Kleisli arrow for a stream is just a digital filter, and `extend` produces a filtered stream. As a simple example, let’s implement the moving average filter.
Here’s a function that sums `n` elements of a stream:

```
sumS :: Num a => Int -> Stream a -> a
sumS n (Cons a as) = if n <= 0 then 0 else a + sumS (n - 1) as
```

Here’s the function that calculates the average of the first `n` elements of the stream:

```
average :: Fractional a => Int -> Stream a -> a
average n stm = (sumS n stm) / (fromIntegral n)
```

Partially applied `average n` is a co-Kleisli arrow, so we can `extend` it over the whole stream:

```
movingAvg :: Fractional a => Int -> Stream a -> Stream a
movingAvg n = extend (average n)
```

The result is the stream of running averages. A stream is an example of a unidirectional, one-dimensional comonad. It can be easily made bidirectional or extended to two or more dimensions. Defining a comonad in category theory is a straightforward exercise in duality. As with the monad, we start with an endofunctor `T`. The two natural transformations, η and μ, that define the monad are simply reversed for the comonad:

```
ε :: T -> I
δ :: T -> T2
```

The components of these transformations correspond to `extract` and `duplicate`. Comonad laws are the mirror image of monad laws. No big surprise here. Then there is the derivation of the monad from an adjunction. Duality reverses an adjunction: the left adjoint becomes the right adjoint and vice versa. And, since the composition `R ∘ L` defines a monad, `L ∘ R` must define a comonad. The counit of the adjunction: `ε :: L ∘ R -> I` is indeed the same ε that we see in the definition of the comonad — or, in components, as Haskell’s `extract`. We can also use the unit of the adjunction: `η :: I -> R ∘ L` to insert an `R ∘ L` in the middle of `L ∘ R` and produce `L ∘ R ∘ L ∘ R`. Making `T2` from `T` defines the δ, and that completes the definition of the comonad. We’ve also seen that the monad is a monoid. The dual of this statement would require the use of a comonoid, so what’s a comonoid?
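As a quick aside before answering, the comonad laws just mentioned can be spot-checked on the stream comonad. This is a sketch: `takeS` and `nats` are local helpers, needed because infinite streams cannot be compared whole:

```haskell
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons a as) = Cons (f a) (fmap f as)

extract :: Stream a -> a
extract (Cons a _) = a

duplicate :: Stream a -> Stream (Stream a)
duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

-- Materialize a finite prefix of an infinite stream for comparison:
takeS :: Int -> Stream a -> [a]
takeS n (Cons a as) = if n <= 0 then [] else a : takeS (n - 1) as

nats :: Stream Int
nats = go 0 where go n = Cons n (go (n + 1))

main :: IO ()
main = do
  -- extract . duplicate = id: the head of the stream of streams
  -- is the original stream.
  print (takeS 5 (extract (duplicate nats)) == takeS 5 nats)         -- True
  -- fmap extract . duplicate = id: taking the focus of every
  -- shifted stream reconstitutes the original.
  print (takeS 5 (fmap extract (duplicate nats)) == takeS 5 nats)    -- True
```

Both checks pass on any prefix length, which is as close as we can get to verifying the laws on infinite data.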
The original definition of a monoid as a single-object category doesn’t dualize to anything interesting. When you reverse the direction of all endomorphisms, you get another monoid. Recall, however, that in our approach to a monad, we used a more general definition of a monoid as an object in a monoidal category. The construction was based on two morphisms:

```
μ :: m ⊗ m -> m
η :: i -> m
```

The reversal of these morphisms produces a comonoid in a monoidal category:

```
δ :: m -> m ⊗ m
ε :: m -> i
```

One can write a definition of a comonoid in Haskell:

```
class Comonoid m where
  split :: m -> (m, m)
  destroy :: m -> ()
```

but it is rather trivial. Obviously `destroy` ignores its argument. `destroy _ = ()` `split` is just a pair of functions: `split x = (f x, g x)` Now consider comonoid laws that are dual to the monoid unit laws.

```
lambda . bimap destroy id . split = id
rho . bimap id destroy . split = id
```

Here, `lambda` and `rho` are the left and right unitors, respectively (see the definition of monoidal categories). Plugging in the definitions, we get:

```
lambda (bimap destroy id (split x))
  = lambda (bimap destroy id (f x, g x))
  = lambda ((), g x)
  = g x
```

which proves that `g = id`. Similarly, the second law expands to `f = id`. In conclusion: `split x = (x, x)` which shows that in Haskell (and, in general, in the category Set) every object is a trivial comonoid. Fortunately there are other more interesting monoidal categories in which to define comonoids. One of them is the category of endofunctors. And it turns out that, just like the monad is a monoid in the category of endofunctors, the comonad is a comonoid in the category of endofunctors. Another important example of a comonad is the dual of the state monad. It’s called the costate comonad or, alternatively, the store comonad.
We’ve seen before that the state monad is generated by the adjunction that defines the exponentials:

```
L z = z × s
R a = s ⇒ a
```

We’ll use the same adjunction to define the costate comonad. A comonad is defined by the composition `L ∘ R`: `L (R a) = (s ⇒ a) × s` Translating this to Haskell, we start with the adjunction between the `Prod` functor on the left and the `Reader` functor on the right. Composing `Prod` after `Reader` is equivalent to the following definition: `data Store s a = Store (s -> a) s` The counit of the adjunction taken at the object `a` is the morphism: `εa :: ((s ⇒ a) × s) -> a` or, in Haskell: `counit (Prod (Reader f, s)) = f s` This becomes our `extract`: `extract (Store f s) = f s` The unit of the adjunction: `unit a = Reader (\s -> Prod (a, s))` can be rewritten as a partially applied data constructor: `Store f :: s -> Store s a` We construct δ, or `duplicate`, as the horizontal composition:

```
δ :: L ∘ R -> L ∘ R ∘ L ∘ R
δ = L ∘ η ∘ R
```

We have to sneak η through the leftmost `L`, which is the `Prod` functor. It means acting with η, or `Store f`, on the left component of the pair (that’s what `fmap` for `Prod` would do). We get: `duplicate (Store f s) = Store (Store f) s` (Remember that, in the formula for δ, `L` and `R` stand for identity natural transformations whose components are identity morphisms.) Here’s the complete definition of the `Store` comonad:

```
instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s
```

You may think of the `Reader` part of `Store` as a generalized container of `a`s that are keyed using elements of the type `s`. For instance, if `s` is `Int`, `Reader Int a` is an infinite bidirectional stream of `a`s. `Store` pairs this container with a value of the key type. For instance, `Reader Int a` is paired with an `Int`. In this case, `extract` uses this integer to index into the infinite stream. You may think of the second component of `Store` as the current position.
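The behavior of `extract` and `duplicate` on `Store` can be checked on a small instance. This is a sketch: `peekAt` and `st` are our own helpers, not part of the comonadic interface:

```haskell
data Store s a = Store (s -> a) s

extract :: Store s a -> a
extract (Store f s) = f s

duplicate :: Store s a -> Store s (Store s a)
duplicate (Store f s) = Store (Store f) s

-- Look at the container at an arbitrary key, ignoring the position:
peekAt :: s -> Store s a -> a
peekAt s' (Store f _) = f s'

-- A container of strings keyed by Int, currently positioned at 3:
st :: Store Int String
st = Store (\n -> replicate n 'x') 3

main :: IO ()
main = do
  putStrLn (extract st)                        -- xxx
  putStrLn (peekAt 5 st)                       -- xxxxx
  -- extract . duplicate gives back the original store:
  putStrLn (extract (extract (duplicate st)))  -- xxx
```

Note that `extract (duplicate st)` is literally `Store f 3` again, which is the identity law `extract . duplicate = id` in action.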
Continuing with this example, `duplicate` creates a new infinite stream indexed by an `Int`. This stream contains streams as its elements. In particular, at the current position, it contains the original stream. But if you use some other `Int` (positive or negative) as the key, you’d obtain a shifted stream positioned at that new index. In general, you can convince yourself that when `extract` acts on the `duplicate`d `Store` it produces the original `Store` (in fact, the identity law for the comonad states that `extract . duplicate = id`). The `Store` comonad plays an important role as the theoretical basis for the `Lens` library. Conceptually, the `Store s a` comonad encapsulates the idea of “focusing” (like a lens) on a particular substructure of the data type `a` using the type `s` as an index. In particular, a function of the type: `a -> Store s a` is equivalent to a pair of functions:

```
set :: a -> s -> a
get :: a -> s
```

If `a` is a product type, `set` could be implemented as setting the field of type `s` inside of `a` while returning the modified version of `a`. Similarly, `get` could be implemented to read the value of the `s` field from `a`. We’ll explore these ideas more in the next section.

## Challenges

1. Implement Conway’s Game of Life using the `Store` comonad. Hint: What type do you pick for `s`?

## Acknowledgments

I’m grateful to Edward Kmett for reading the draft of this post and pointing out flaws in my reasoning.

Programmers have developed a whole mythology around monads. It’s supposed to be one of the most abstract and difficult concepts in programming. There are people who “get it” and those who don’t. For many, the moment when they understand the concept of the monad is like a mystical experience. The monad abstracts the essence of so many diverse constructions that we simply don’t have a good analogy for it in everyday life.
We are reduced to groping in the dark, like those blind men touching different parts of the elephant and exclaiming triumphantly: “It’s a rope,” “It’s a tree trunk,” or “It’s a burrito!” Let me set the record straight: The whole mysticism around the monad is the result of a misunderstanding. The monad is a very simple concept. It’s the diversity of applications of the monad that causes the confusion. As part of research for this post I looked up duct tape (a.k.a., duck tape) and its applications. Here’s a little sample of things that you can do with it:

• sealing ducts
• fixing CO2 scrubbers on board Apollo 13
• wart treatment
• fixing Apple’s iPhone 4 dropped call issue
• making a prom dress
• building a suspension bridge

Now imagine that you didn’t know what duct tape was and you were trying to figure it out based on this list. Good luck! So I’d like to add one more item to the collection of “the monad is like…” clichés: The monad is like duct tape. Its applications are widely diverse, but its principle is very simple: it glues things together. More precisely, it composes things. This partially explains the difficulties a lot of programmers, especially those coming from the imperative background, have with understanding the monad. The problem is that we are not used to thinking of programming in terms of function composition. This is understandable. We often give names to intermediate values rather than pass them directly from function to function. We also inline short segments of glue code rather than abstract them into helper functions. Here’s an imperative-style implementation of the vector-length function in C:

```
double vlen(double * v) {
  double d = 0.0;
  int n;
  for (n = 0; n < 3; ++n)
    d += v[n] * v[n];
  return sqrt(d);
}
```

Compare this with the (stylized) Haskell version that makes function composition explicit: `vlen = sqrt . sum . fmap (flip (^) 2)` (Here, to make things even more cryptic, I partially applied the exponentiation operator `(^)` by setting its second argument to `2`.) I’m not arguing that Haskell’s point-free style is always better, just that function composition is at the bottom of everything we do in programming. And even though we are effectively composing functions, Haskell does go to great lengths to provide imperative-style syntax called the `do` notation for monadic composition. We’ll see its use later. But first, let me explain why we need monadic composition in the first place.

## The Kleisli Category

We have previously arrived at the writer monad by embellishing regular functions. The particular embellishment was done by pairing their return values with strings or, more generally, with elements of a monoid. We can now recognize that such embellishment is a functor:

```
newtype Writer w a = Writer (a, w)

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)
```

We have subsequently found a way of composing embellished functions, or Kleisli arrows, which are functions of the form: `a -> Writer w b` It was inside the composition that we implemented the accumulation of the log. We are now ready for a more general definition of the Kleisli category. We start with a category C and an endofunctor `m`. The corresponding Kleisli category K has the same objects as C, but its morphisms are different. A morphism between two objects `a` and `b` in K is implemented as a morphism: `a -> m b` in the original category C. It’s important to keep in mind that we treat a Kleisli arrow in K as a morphism between `a` and `b`, and not between `a` and `m b`. In our example, `m` was specialized to `Writer w`, for some fixed monoid `w`. Kleisli arrows form a category only if we can define proper composition for them.
If there is a composition, which is associative and has an identity arrow for every object, then the functor `m` is called a monad, and the resulting category is called the Kleisli category. In Haskell, Kleisli composition is defined using the fish operator `>=>`, and the identity arrow is a polymorphic function called `return`. Here’s the definition of a monad using Kleisli composition:

```
class Monad m where
  (>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
  return :: a -> m a
```

Keep in mind that there are many equivalent ways of defining a monad, and that this is not the primary one in the Haskell ecosystem. I like it for its conceptual simplicity and the intuition it provides, but there are other definitions that are more convenient when programming. We’ll talk about them momentarily. In this formulation, monad laws are very easy to express. They cannot be enforced in Haskell, but they can be used for equational reasoning. They are simply the standard composition laws for the Kleisli category:

```
(f >=> g) >=> h = f >=> (g >=> h) -- associativity
return >=> f = f                  -- left unit
f >=> return = f                  -- right unit
```

This kind of a definition also expresses what a monad really is: it’s a way of composing embellished functions. It’s not about side effects or state. It’s about composition. As we’ll see later, embellished functions may be used to express a variety of effects or state, but that’s not what the monad is for. The monad is the sticky duct tape that ties one end of an embellished function to the other end of an embellished function.
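To see this definition at work with a different embellishment, here is Kleisli composition specialized to partial computations: `Maybe` plays the role of `m`, and the arrows guard against domain errors. A self-contained sketch, with the fish operator defined standalone to avoid clashing with the Prelude:

```haskell
-- Kleisli composition for the partiality embellishment:
(>=>) :: (a -> Maybe b) -> (b -> Maybe c) -> (a -> Maybe c)
f >=> g = \a -> case f a of
  Nothing -> Nothing
  Just b  -> g b

-- The identity Kleisli arrow (return):
ret :: a -> Maybe a
ret = Just

safeRoot :: Double -> Maybe Double
safeRoot x = if x >= 0 then Just (sqrt x) else Nothing

safeReciprocal :: Double -> Maybe Double
safeReciprocal x = if x /= 0 then Just (1 / x) else Nothing

-- Composing embellished functions without any explicit plumbing:
safeRootReciprocal :: Double -> Maybe Double
safeRootReciprocal = safeReciprocal >=> safeRoot

main :: IO ()
main = do
  print (safeRootReciprocal 4)     -- Just 0.5
  print (safeRootReciprocal 0)     -- Nothing
  print (safeRootReciprocal (-4))  -- Nothing
```

The glue code that checks for `Nothing` lives entirely inside `>=>`; the arrows themselves stay oblivious to each other's failure modes.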
Going back to our `Writer` example: The logging functions (the Kleisli arrows for the `Writer` functor) form a category because `Writer` is a monad:

```
instance Monoid w => Monad (Writer w) where
  f >=> g = \a ->
    let Writer (b, s)  = f a
        Writer (c, s') = g b
    in Writer (c, s `mappend` s')
  return a = Writer (a, mempty)
```

Monad laws for `Writer w` are satisfied as long as monoid laws for `w` are satisfied (they can’t be enforced in Haskell either). There’s a useful Kleisli arrow defined for the `Writer` monad called `tell`. Its sole purpose is to add its argument to the log:

```
tell :: w -> Writer w ()
tell s = Writer ((), s)
```

We’ll use it later as a building block for other monadic functions.

## Fish Anatomy

When implementing the fish operator for different monads you quickly realize that a lot of code is repeated and can be easily factored out. To begin with, the Kleisli composition of two functions must return a function, so its implementation may as well start with a lambda taking an argument of type `a`:

```
(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a -> ...
```

The only thing we can do with this argument is to pass it to `f`:

```
f >=> g = \a -> let mb = f a
                in ...
```

At this point we have to produce the result of type `m c`, having at our disposal an object of type `m b` and a function `g :: b -> m c`. Let’s define a function that does that for us. This function is called bind and is usually written in the form of an infix operator: `(>>=) :: m a -> (a -> m b) -> m b` Bind gives us an alternative definition of the monad:

```
class Monad m where
  (>>=) :: m a -> (a -> m b) -> m b
  return :: a -> m a
```

Here’s the definition of bind for the `Writer` monad:

```
(Writer (a, w)) >>= f =
  let Writer (b, w') = f a
  in Writer (b, w `mappend` w')
```

It is indeed shorter than the definition of the fish operator. It’s possible to further dissect bind, taking advantage of the fact that `m` is a functor. We can use `fmap` to apply the function `a -> m b` to the contents of `m a`. This will turn `a` into `m b`.
The result of the application is therefore of type `m (m b)`. This is not exactly what we want — we need the result of type `m b` — but we’re close. All we need is a function that collapses or flattens the double application of `m`. Such a function is called `join`: `join :: m (m a) -> m a` Using `join`, we can rewrite bind as: `ma >>= f = join (fmap f ma)` This gives us a third definition of the monad:

```
class Functor m => Monad m where
  join :: m (m a) -> m a
  return :: a -> m a
```

Here we have explicitly requested that `m` be a `Functor`. We didn’t have to do that in the previous two definitions of the monad. That’s because any type constructor `m` that either supports the fish or bind operator is automatically a functor. For instance, it’s possible to define `fmap` in terms of bind and `return`: `fmap f ma = ma >>= \a -> return (f a)` For completeness, here’s `join` for the `Writer` monad:

```
join :: Monoid w => Writer w (Writer w a) -> Writer w a
join (Writer ((Writer (a, w')), w)) = Writer (a, w `mappend` w')
```

## The `do` Notation

One way of writing code using monads is to work with Kleisli arrows — composing them using the fish operator. This mode of programming is the generalization of the point-free style. Point-free code is compact and often quite elegant. In general, though, it can be hard to understand, bordering on cryptic. That’s why most programmers prefer to give names to function arguments and intermediate values. When dealing with monads it means favoring the bind operator over the fish operator. Bind takes a monadic value and returns a monadic value. The programmer may choose to give names to those values. But that’s hardly an improvement. What we really want is to pretend that we are dealing with regular values, not the monadic containers that encapsulate them. That’s how imperative code works — side effects, such as updating a global log, are mostly hidden from view. And that’s what the `do` notation emulates in Haskell. You might be wondering then, why use monads at all?
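Before answering, the equivalence of these formulations can be spot-checked on the `Writer` example. A self-contained sketch: `ret`, `bind`, and `join'` are standalone stand-ins for `return`, `>>=`, and `join`, renamed to avoid clashing with the Prelude:

```haskell
newtype Writer w a = Writer (a, w) deriving (Show, Eq)

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

ret :: Monoid w => a -> Writer w a
ret a = Writer (a, mempty)

-- Bind defined directly, as in the text:
bind :: Monoid w => Writer w a -> (a -> Writer w b) -> Writer w b
bind (Writer (a, w)) f =
  let Writer (b, w') = f a
  in Writer (b, w `mappend` w')

-- Join, flattening the doubled Writer:
join' :: Monoid w => Writer w (Writer w a) -> Writer w a
join' (Writer (Writer (a, w'), w)) = Writer (a, w `mappend` w')

ma :: Writer String Int
ma = Writer (3, "three ")

inc :: Int -> Writer String Int
inc n = Writer (n + 1, "inc ")

main :: IO ()
main = do
  print (bind ma inc)                             -- Writer (4,"three inc ")
  -- bind agrees with join . fmap:
  print (bind ma inc == join' (fmap inc ma))      -- True
  -- fmap recovered from bind and return:
  print (bind ma (ret . (* 2)) == fmap (* 2) ma)  -- True
```

All three checks come out `True`, mirroring the equational argument in the text.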
If we want to make side effects invisible, why not stick to an imperative language? The answer is that the monad gives us much better control over side effects. For instance, the log in the `Writer` monad is passed from function to function and is never exposed globally. There is no possibility of garbling the log or creating a data race. Also, monadic code is clearly demarcated and cordoned off from the rest of the program. The `do` notation is just syntactic sugar for monadic composition. On the surface, it looks a lot like imperative code, but it translates directly to a sequence of binds and lambda expressions. For instance, take the example we used previously to illustrate the composition of Kleisli arrows in the `Writer` monad. Using our current definitions, it could be rewritten as:

```
process :: String -> Writer String [String]
process = upCase >=> toWords
```

This function turns all characters in the input string to upper case and splits it into words, all the while producing a log of its actions. In the `do` notation it would look like this:

```
process s = do
  upStr <- upCase s
  toWords upStr
```

Here, `upStr` is just a `String`, even though `upCase` produces a `Writer`:

```
upCase :: String -> Writer String String
upCase s = Writer (map toUpper s, "upCase ")
```

This is because the `do` block is desugared by the compiler to:

```
process s =
  upCase s >>= \upStr ->
    toWords upStr
```

The monadic result of `upCase` is bound to a lambda that takes a `String`. It’s the name of this string that shows up in the `do` block. When reading the line: `upStr <- upCase s` we say that `upStr` gets the result of `upCase s`. The pseudo-imperative style is even more pronounced when we inline `toWords`. We replace it with the call to `tell`, which logs the string `"toWords "`, followed by the call to `return` with the result of splitting the string `upStr` using `words`. Notice that `words` is a regular function working on strings.
```
process s = do
  upStr <- upCase s
  tell "toWords "
  return (words upStr)
```

Here, each line in the do block introduces a new nested bind in the desugared code:

```
process s =
  upCase s >>= \upStr ->
    tell "toWords " >>= \() ->
      return (words upStr)
```

Notice that `tell` produces a unit value, so it doesn’t have to be passed to the following lambda. Ignoring the contents of a monadic result (but not its effect — here, the contribution to the log) is quite common, so there is a special operator to replace bind in that case:

```
(>>) :: m a -> m b -> m b
m >> k = m >>= (\_ -> k)
```

The actual desugaring of our code looks like this:

```
process s =
  upCase s >>= \upStr ->
    tell "toWords " >>
      return (words upStr)
```

In general, `do` blocks consist of lines (or sub-blocks) that either use the left arrow to introduce new names that are then available in the rest of the code, or are executed purely for side-effects. Bind operators are implicit between the lines of code. Incidentally, it is possible, in Haskell, to replace the formatting in the `do` blocks with braces and semicolons. This provides the justification for describing the monad as a way of overloading the semicolon. Notice that the nesting of lambdas and bind operators when desugaring the `do` notation has the effect of influencing the execution of the rest of the `do` block based on the result of each line. This property can be used to introduce complex control structures, for instance to simulate exceptions. Interestingly, the equivalent of the `do` notation has found its application in imperative languages, C++ in particular. I’m talking about resumable functions or coroutines. It’s not a secret that C++ futures form a monad. It’s an example of the continuation monad, which we’ll discuss shortly. The problem with continuations is that they are very hard to compose.
In Haskell, we use the `do` notation to turn the spaghetti of “my handler will call your handler” into something that looks very much like sequential code. Resumable functions make the same transformation possible in C++. And the same mechanism can be applied to turn the spaghetti of nested loops into list comprehensions or “generators,” which are essentially the `do` notation for the list monad. Without the unifying abstraction of the monad, each of these problems is typically addressed by providing custom extensions to the language. In Haskell, this is all dealt with through libraries. In the previous blog post we talked about relations. I gave an example of a thin category as a kind of relation that’s compatible with categorical structure. In a thin category, the hom-set is either an empty set or a singleton set. It so happens that these two sets form a sub-category of Set. It’s a very interesting category. It consists of the two objects — let’s give them new names `o` and `i`. Besides the mandatory identity morphisms, we also have a single morphism going from `o` to `i`, corresponding to the function we call `absurd` in Haskell:

```
absurd :: Void -> a
absurd v = case v of {}
```

This tiny category is sometimes called the interval category. I’ll call it o->i. The object o is initial, and the object i is terminal — just as the empty set and the singleton set were in Set. Moreover, the cartesian product from Set can be used to define a tensor product in o->i. We’ll use this tensor product to build a monoidal category.

## Monoidal Categories

A tensor product is a bifunctor ⊗ with some additional properties. Here, in the interval category, we’ll define it through the following multiplication table:

```
o ⊗ o = o
o ⊗ i = o
i ⊗ o = o
i ⊗ i = i
```

Its action on pairs of morphisms (what we call `bimap` in Haskell) is also easy to define. For instance, what’s the action of `⊗` on the pair `<absurd, idi>`? This pair takes the pair `<o, i>` to `<i, i>`.
Under the bifunctor `⊗`, the first pair produces `o`, and the second `i`. There is only one morphism from `o` to `i`, so we have: `absurd ⊗ idi = absurd` If we designate the (terminal) object `i` as the unit of the tensor product, we get a (symmetric) monoidal category. A monoidal category is a category with a tensor product that’s associative and unital (usually, up to isomorphism — but here, strictly). Now imagine that we replace hom-sets in our original thin category with objects from the monoidal category o->i (we’ll call them hom-objects). After all, we were only using two sets from Set. We can replace the empty hom-set with the object `o`, and the singleton hom-set with the object `i`. We get what’s called an enriched category (although, in this case, it’s more of an impoverished category). An example of a thin category (a total order with objects 1, 2, and 3) with hom-sets replaced by hom-objects from the interval category. Think of `i` as corresponding to less-than-or-equal, and `o` as greater.

## Enriched Categories

An enriched category has hom-objects instead of hom-sets. These are objects from some monoidal category V called the base category. The base category has to be monoidal because we want to define something that would replace the usual composition of morphisms. Morphisms are elements of hom-sets. However, hom-objects, in general, have no elements. We don’t know what an element of o or i is. So to fully define an enriched category we have to come up with a sensible substitute for composition. To do that, we need to rethink composition — first in terms of hom-sets, then in terms of hom-objects.
We can think of composition as a function from a cartesian product of two hom-sets to a third hom-set: `composea b c :: C(b, c) × C(a, b) -> C(a, c)` Generalizing it, we can replace hom-sets with hom-objects (here, either o or i), the cartesian product with the tensor product, and a function with a morphism (notice: it’s a morphism in our monoidal category o->i). These composition-defining morphisms form a “composition table” for hom-objects. As an example, take the composition of two `i`s. Their product `i ⊗ i` is `i` again, and there is only one morphism out of `i`, the identity morphism. In terms of original hom-sets it would mean that the composition of two morphisms always exists. In general, we have to impose this condition when we’re defining a category, enriched or not — here it just happens automatically. For instance (see illustration), `compose0 1 2 = idi`:

```
compose0 1 2 (C(1, 2) ⊗ C(0, 1))
  = compose0 1 2 (i ⊗ i)
  = compose0 1 2 i
  = i
  = C(0, 2)
```

In every category we must also have identity morphisms. These are special elements in the hom-sets of the form `C(a, a)`. We have to find a way to define their equivalent in the enriched setting. We’ll use the standard trick of defining generalized elements. It’s based on the observation that selecting an element from a set `s` is the same as selecting a morphism that goes from the singleton set (the terminal object in Set) to `s`. In a monoidal category, we replace the terminal object with the monoidal unit. So, instead of picking an identity morphism in `C(a, a)`, we use a morphism from the monoidal unit `i`: `ja :: i -> C(a, a)` Again, in the case of a thin category, there is only one morphism leaving `i`, and that’s the identity morphism. That’s why we are automatically guaranteed that, in a thin category, all hom-objects of the form `C(a, a)` are equal to `i`. Composition in a category must also satisfy associativity and identity conditions.
Associativity in the enriched setting translates straightforwardly to a commuting diagram, but identity is a little trickier. We have to use `ja` to “select” the identity from the hom-object `C(a, a)` while composing it with some other hom-object `C(b, a)`.

We start with the product:

`i ⊗ C(b, a)`

Because `i` is the monoidal unit, this is equal to `C(b, a)`. On the other hand, we can tensor together two morphisms in o->i — remember, a tensor product is a bifunctor, so it also acts on morphisms. Here we’ll tensor `ja` and the identity at `C(b, a)`:

`ja ⊗ idC(b, a)`

We act with this product on the product object `i ⊗ C(b, a)` to get `C(a, a) ⊗ C(b, a)`. Then we use composition to get:

`C(a, a) ⊗ C(b, a) -> C(b, a)`

These two ways of getting to `C(b, a)` must coincide, leading to the identity condition for enriched categories.

Now that we’ve seen how the enrichment works for thin categories, we can apply the same mechanism to define categories enriched over any monoidal category V. The important part is that V defines a (bifunctor) tensor product `⊗` and a unit object `i`. Associativity and unitality may be either strict or up to isomorphism (notice that a regular cartesian product is associative only up to isomorphism — `(a, (b, c))` is not equal to `((a, b), c)`).

Instead of sets of morphisms, an enriched category has hom-objects that are objects in V. We use the same notation as for hom-sets: `C(a, b)` is the hom-object that connects object `a` to object `b`. Composition is replaced by morphisms in V:

`composea b c :: C(b, c) ⊗ C(a, b) -> C(a, c)`

Instead of identity morphisms, we have the morphisms in V:

`ja :: i -> C(a, a)`

Finally, associativity and unitality of composition are imposed in the form of a few commuting diagrams.

# Impoverished Yoneda

The Yoneda Lemma talks about functors from an arbitrary category to Set. To generalize the Yoneda lemma to enriched categories we first have to generalize functors.
Their action on objects is not a problem; it’s the action on morphisms that needs our attention.

## Enriched Functors

Since in an enriched category we no longer have access to individual morphisms, we have to define the action of functors on hom-objects wholesale. This is only possible if the hom-objects in the target category come from the same base category V as the hom-objects in the source category. In other words, both categories must be enriched over the same monoidal category. We can then use regular morphisms in V to map hom-objects.

Between any two objects `a` and `b` in C we have the hom-object `C(a, b)`. The two objects are mapped by the functor `f` to `f a` and `f b`, and there is a hom-object between them, `D(f a, f b)`. The action of `f` on `C(a, b)` is defined as a morphism in V:

`C(a, b) -> D(f a, f b)`

Let’s see what this means in our impoverished thin category. First of all, a functor will always map related objects to related objects. That’s because there is no morphism from `i` to `o`. A bond between two objects cannot be broken by an impoverished functor. If the relation is a partial order, for instance less-than-or-equal, then it follows that a functor between posets preserves the ordering — it’s monotone.

A functor must also preserve composition and identity. The former can be easily expressed as a commuting diagram. Identity preservation in the enriched setting involves the use of `ja`. Starting from `i` we can use `ja` to get to `C(a, a)`, which the functor maps to `D(f a, f a)`. Or we can use `jf a` to get there directly. We insist that both paths be the same. In our impoverished category, this just works because `ja` is the identity morphism and all `C(a, a)`s and `D(a, a)`s are equal to `i`.
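The claim that an impoverished functor between posets is a monotone map can be spot-checked in code. A sketch with a hypothetical functor, doubling on the integers (my own example, not from the text):

```haskell
-- Sketch (hypothetical example): an enriched functor between
-- Bool-enriched total orders is exactly a monotone map.
-- Here the functor f is doubling on (Int, <=).
f :: Int -> Int
f = (* 2)

homC, homD :: Int -> Int -> Bool
homC a b = a <= b   -- hom-objects of the source order C
homD a b = a <= b   -- hom-objects of the target order D

-- The action on hom-objects must be a morphism C(a, b) -> D(f a, f b)
-- in Bool, i.e. homC a b must imply homD (f a) (f b); an i (True)
-- hom-object can never be mapped to an o (False) one:
functorOK :: Bool
functorOK = and [ homC a b <= homD (f a) (f b)
                | a <- [-3..3], b <- [-3..3] ]
```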
Back to Yoneda: You might remember that we start the Yoneda construction by fixing one object `a` in C, and then varying another object `x` to define the functor:

`x -> C(a, x)`

This functor maps C to Set, because `x`s are objects in C, and hom-sets are sets — objects of Set. In the enriched environment, the same construction results in a mapping from C to V, because hom-objects are objects of the base category V.

But is this mapping a functor? This is far from obvious, considering that C is an enriched category, and we have just said that enriched functors can only go between categories that are enriched over the same base category. The target of our functor, the category V, is not enriched. It turns out that, as long as V is closed, we can turn it into an enriched category.

## Self Enrichment

Let’s first see how we would enrich our tiny category o->i. First of all, let’s check if it’s closed. Closedness means that hom-sets can be objectified — for every hom-set there is an object called the exponential object that objectifies it. The exponential object in a (symmetric) monoidal category is defined through the adjunction:

`V(a ⊗ b, c) ≅ V(b, c^a)`

This is the standard adjunction for defining exponentials, except that we are using the tensor product instead of the regular product. The hom-sets are sets of morphisms between objects in V (here, in o->i).

Let’s check, for instance, if there’s an object that corresponds to the hom-set `V(o, i)`, which we would call `i^o`. We have:

`V(o ⊗ b, i) ≅ V(b, i^o)`

Whatever `b` we choose, when multiplied by `o` it will yield `o`, so the left hand side is `V(o, i)`, a singleton set. Therefore `V(b, i^o)` must be a singleton set too, for any choice of `b`. In particular, if `b` is `i`, we see that the only choice for `i^o` is:

`i^o = i`

You can check that all exponentiation rules in o->i can be obtained from simple algebra by replacing `o` with zero and `i` with one.
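The exponential algebra of o->i can also be checked mechanically. In the `Bool` model (my own modeling, not from the text), the exponential `c^a` is Boolean implication, and the defining adjunction reduces to a three-variable tautology:

```haskell
-- Sketch (my own modeling): in Bool the exponential c^a is implication,
-- "not a or c". In particular i^o = expo False True = True, matching
-- the text's conclusion that i^o = i.
expo :: Bool -> Bool -> Bool
expo a c = not a || c

-- A morphism x -> y exists in Bool exactly when x <= y, so the defining
-- adjunction V(a ⊗ b, c) ≅ V(b, c^a) becomes an iff between two
-- implications, checked exhaustively over all eight cases:
adjunctionOK :: Bool
adjunctionOK = and [ ((a && b) <= c) == (b <= expo a c)
                   | a <- bs, b <- bs, c <- bs ]
  where bs = [False, True]

ioIsI :: Bool
ioIsI = expo False True == True
```

This matches the text's arithmetic analogy: reading `o` as zero and `i` as one, `expo` computes the familiar exponentiation table (for example, zero to the zeroth power is one).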
Every closed symmetric monoidal category can be enriched in itself by replacing hom-sets with the corresponding exponentials. For instance, in our case, we end up replacing all empty hom-sets in the category o->i with `o`, and all singleton hom-sets with `i`. You can easily convince yourself that it works, and the result is the category o->i enriched in itself.

We can now take a category C that’s enriched over a closed symmetric monoidal category V, and show that the mapping:

`x -> C(a, x)`

is indeed an enriched functor. It maps objects of C to objects of V and hom-objects of C to hom-objects (exponentials) of V.

An example of a functor from a total order enriched over the interval category to the interval category. This particular functor is equal to the hom-functor `C(a, x)` for `a` equal to 3.

Let’s see what this functor looks like in a poset. Given some `a`, the hom-object `C(a, x)` is equal to `i` if `a <= x`. So an `x` is mapped to `i` if it’s greater-or-equal to `a`; otherwise it’s mapped to `o`. If you think of the objects mapped to `o` as colored black and the ones mapped to `i` as colored red, you’ll see the object `a` and the whole graph below it must be painted red.

## Enriched Natural Transformations

Now that we know what enriched functors are, we have to define natural transformations between them. This is a bit tricky, since a regular natural transformation is defined as a family of morphisms. But again, instead of picking individual morphisms from hom-sets we can work with the closest equivalent: generalized elements — morphisms going from the unit object `i` to hom-objects. So an enriched natural transformation between two enriched functors `f` and `g` is defined as a family of morphisms in V:

`αa :: i -> D(f a, g a)`

Natural transformations are very limited in our impoverished category. Let’s see what morphisms from `i` are at our disposal. We have one morphism from `i` to `i`: the identity morphism `idi`.
This makes sense — we think of `i` as having a single element. There is no morphism from `i` back to `o`; and that makes sense too — we think of `o` as having no elements.

The only possible generalized components of an impoverished natural transformation between two functors `f` and `g` correspond to `D(f a, g a)` equal to `i`; which means that, for every `a`, `f a` must be less-than-or-equal to `g a`. A natural transformation can only push a functor uphill.

When the target category is o->i, as in the impoverished Yoneda lemma, a natural transformation may never connect red to black. So once the first functor switches to red, the other must follow.

## Naturality Condition

There is, of course, a naturality condition that goes with this definition of a natural transformation. The essence of it is that it shouldn’t matter if we first apply a functor and then the natural transformation `α`, or the other way around. In the enriched context, there are two ways of getting from `C(a, b)` to `D(f a, g b)`.

One is to multiply `C(a, b)` by `i` on the right:

`C(a, b) ⊗ i`

apply the product `g ⊗ αa` to get:

`D(g a, g b) ⊗ D(f a, g a)`

and then apply composition to get:

`D(f a, g b)`

The other way is to multiply `C(a, b)` by `i` on the left:

`i ⊗ C(a, b)`

apply `αb ⊗ f` to get:

`D(f b, g b) ⊗ D(f a, f b)`

and compose the two to get:

`D(f a, g b)`

The naturality condition requires that this diagram commute.

## Enriched Yoneda

The enriched version of the Yoneda lemma talks about enriched natural transformations from the functor `x -> C(a, x)` to any enriched functor `f` that goes from C to V.

Consider for a moment a functor from a poset to our tiny category o->i (which, by the way, is also a poset). It will map some objects to `o` (black) and others to `i` (red). As we’ve seen, a functor must preserve the less-than-or-equal relation, so once we get into the red territory, there is no going back to black.
And a natural transformation may only repaint black to red, not the other way around.

Now we would like to say that natural transformations from `x -> C(a, x)` to `f` are in one-to-one correspondence with the elements of `f a`, except that `f a` is not a set, so it doesn’t have elements. It’s an object in V. So instead of talking about elements of `f a`, we’ll talk about generalized elements — morphisms from the unit object `i` to `f a`. And that’s how the enriched Yoneda lemma is formulated — as a natural bijection between the set of natural transformations and the set of morphisms from the unit object to `f a`:

`Nat(C(a, -), f) ≅ i -> f a`

In our running example, there are only two possible values for `f a`.

1. If the value is `o` then there is no morphism from `i` to it. The Yoneda lemma tells us that there is no natural transformation in that case. That makes sense, because the value of the functor `x -> C(a, x)` at `x = a` is `i`, and there is no morphism from `i` to `o`.
2. If the value is `i` then there is exactly one morphism from `i` to it — the identity. The Yoneda lemma tells us that there is just one natural transformation in that case. It’s the natural transformation whose generalized component at any object `x` is `i -> i`.

## Strong Enriched Yoneda

There is something unsatisfactory in the fact that the enriched Yoneda lemma ends up using a mapping between sets. First we try to get away from sets as far as possible, then we go back to sets of morphisms. It feels like cheating. Not to worry! There is a stronger version of the Yoneda lemma that deals with this problem. What we need is to replace the set of natural transformations with an object in V that would represent them — just like we replaced the set of morphisms with the exponential object.
Such an object is defined as an end:

`∫_x V(f x, g x)`

The strong version of the Yoneda lemma establishes the natural isomorphism:

`∫_x V(C(a, x), f x) ≅ f a`

## Enriched Profunctors

We’ve seen that a profunctor is a functor from a product category C^op × D to Set. The enriched version of a profunctor requires the notion of a product of enriched categories. We would like the product of enriched categories to also be an enriched category. In fact, we would like it to be enriched over the same base category V as the component categories.

We’ll define objects in such a category as pairs of objects from the component categories, but the hom-objects will be defined as tensor products of the component hom-objects. In the enriched product category, the hom-object between two pairs, `<c, d>` and `<c', d'>`, is:

`(C^op ⊗ D)(<c, d>, <c', d'>) = C(c, c') ⊗ D(d, d')`

You can convince yourself that composition of such hom-objects requires the tensor product to be symmetric (at least up to isomorphism). That’s because you have to be able to rearrange the hom-objects in a tensor product of tensor products.

An enriched profunctor is defined as an enriched functor from the tensor product of two categories to the (self-enriched) base category:

`C^op ⊗ D -> V`

Just like regular profunctors, enriched profunctors can be composed using the coend formula. The only difference is that the cartesian product is replaced by the tensor product in V. They form a bicategory called V-Prof.

Enriched profunctors are the basis of the definition of Tambara modules, which are relevant in the application to Haskell lenses.

## Conclusion

One of the reasons for using category theory is to get away from set theory. In general, objects in a category don’t have to form sets. The morphisms, however, are elements of sets — the hom-sets. Enriched categories go a step further and replace even those sets with categorical objects.
However, it’s not categories all the way down — the base category that’s used for enrichment is still a regular old category with hom-sets.

## Acknowledgments

I’m grateful to Gershom Bazerman for useful comments and to André van Meulebrouck for checking the grammar and spelling.