Characteristic functions
April 7th 2010, 08:42 AM #1
Junior Member
Nov 2009
Characteristic functions
If I know that $\phi(t)$ is the characteristic function of a random variable $X$, how do I go about showing that variants on $\phi$ are characteristic functions of some random variables?
I've got a bunch of these to do. Here's an example: show that $|\phi(t)|^2$ is the characteristic function of some random variable.
Why not read this: Characteristic function (probability theory) - Wikipedia, the free encyclopedia?
I'd actually seen that, but we haven't covered any of those results in my probability course, so I figured I should find another way. I've got it now, though, along with the rest, by using the definition of a characteristic function (as the expectation of e^(itX), which I never grasped very well in the context of my course), some super-basic properties of complex numbers (which I'm not very familiar with, which is why it hadn't occurred to me previously), trig functions, and Euler's formula.
I got a little prodding from my instructor in his office hours, but the apparently remedial nature of my difficulties may have damaged my "reputation" more than the extra points raised my grade (as evidenced by the incredulous single-eyebrow-lifting going on in that office whenever I'm there).
The best way, when available, is to guess which random variables they are the characteristic functions of...
For instance, prove from the definition that $\phi(-t)$ is the characteristic function of $-X$, that $\phi(-t)=\overline{\phi(t)}$ (complex conjugate), and that $\phi(t)^2$ is the characteristic function of $X+Y$, where $Y$ is independent of $X$ and has the same distribution.
Then give another thought to your initial question: which random variable has characteristic function $|\phi(t)|^2$?
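Putting those three facts together answers the question (a standard argument, spelled out): if $Y$ is independent of $X$ with the same distribution as $X$, then $\displaystyle |\phi(t)|^2 = \phi(t)\overline{\phi(t)} = \phi(t)\phi(-t) = E\left[e^{itX}\right]E\left[e^{-itY}\right] = E\left[e^{it(X-Y)}\right]$, so $|\phi(t)|^2$ is the characteristic function of $X-Y$.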
April 7th 2010, 01:02 PM #2
April 8th 2010, 08:45 AM #3
Junior Member
Nov 2009
April 8th 2010, 10:55 AM #4
MHF Contributor
Aug 2008
Paris, France
{"url":"http://mathhelpforum.com/advanced-statistics/137747-characteristic-functions.html","timestamp":"2014-04-18T07:54:33Z","content_type":null,"content_length":"42277","record_id":"<urn:uuid:9cbc5c54-9136-47d4-9036-5caaa067a5f6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: replace loop over var w/ if
From "Joseph Coveney" <jcoveney@bigplanet.com>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: replace loop over var w/ if
Date Sun, 22 Feb 2009 13:06:50 +0900
John Bunge wrote:
i have 3 variables called v1d, v2d, v3d, which are all equal to .
moreover i have 3 variables a1, a2, a3, which are equal to . or some numerical value
i want
replace v1d = 1 if a1 ~=.
replace v2d = 1 if a2 ~=.
replace v3d = 1 if a3 ~=.
how can i do this in 1 command? with forvalue?
Not sure about a single line of code, but the following should work.
forvalues i = 1/3 {
    replace v`i'd = 1 if !missing(a`i')
}
If you're using the v?d variables as flags for nonmissing values in the
corresponding a? variables, then be aware that v?d will always test True,
because they'll be either . or 1, both of which are nonzero. If you want the
v?d variables to be 0/1 nonmissing-value indicators for a?, then the
following would be better.
forvalues i = 1/3 {
    replace v`i'd = !missing(a`i')
}
In this case, v?d will be 1 (True) if a? is not missing, and 0 (False) otherwise.
Joseph Coveney
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-02/msg00985.html","timestamp":"2014-04-21T15:14:35Z","content_type":null,"content_length":"6317","record_id":"<urn:uuid:8b24e7d3-a5ff-4c8e-8784-ca90c5826b58>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Volo, IL Statistics Tutor
Find a Volo, IL Statistics Tutor
...I employ both traditional and modern tutoring techniques. Often I use web-based software for skill building. This is very effective for students preparing for standardized exams.
67 Subjects: including statistics, English, Spanish, chemistry
I graduated magna cum laude with honors from Illinois State University with a bachelor's degree in secondary mathematics education in 2011. I taught trigonometry and algebra 2 to high school
juniors in the far north suburbs of Chicago for the past two years. I am currently attending DePaul University to pursue my master's degree in applied statistics.
19 Subjects: including statistics, calculus, geometry, algebra 1
...I can't learn it for them, but I can ask the right questions, pick the right examples, and offer the right hints to help them unlock their potential. Learning is personal, so my goal is to
connect with each and every student in whatever way is most helpful to them. I look forward to working wit...
12 Subjects: including statistics, geometry, algebra 1, GED
...I can tutor in a range of subjects including ACT prep, English, Math, Science, and History. Education is something that I am really passionate about, and I am confident in my ability to deliver quality instruction to my students. Tutoring young students requires two skills: creativity and patience...
40 Subjects: including statistics, English, reading, Spanish
...The summer of my sophomore year, I took the ACT and scored a 36 on the math section. In addition, I took three science classes: biology, chemistry, and physics. My junior year, I finished up
calculus, which I had begun as a sophomore.
16 Subjects: including statistics, chemistry, calculus, algebra 2
Related Volo, IL Tutors
Volo, IL Accounting Tutors
Volo, IL ACT Tutors
Volo, IL Algebra Tutors
Volo, IL Algebra 2 Tutors
Volo, IL Calculus Tutors
Volo, IL Geometry Tutors
Volo, IL Math Tutors
Volo, IL Prealgebra Tutors
Volo, IL Precalculus Tutors
Volo, IL SAT Tutors
Volo, IL SAT Math Tutors
Volo, IL Science Tutors
Volo, IL Statistics Tutors
Volo, IL Trigonometry Tutors
Nearby Cities With statistics Tutor
Buffalo Grove statistics Tutors
Bull Valley, IL statistics Tutors
Crystal Lake, IL statistics Tutors
Fox Lake, IL statistics Tutors
Gurnee statistics Tutors
Holiday Hills, IL statistics Tutors
Island Lake statistics Tutors
Johnsburg, IL statistics Tutors
Lakemoor, IL statistics Tutors
Mchenry, IL statistics Tutors
Port Barrington, IL statistics Tutors
Round Lake Beach, IL statistics Tutors
Round Lake Park, IL statistics Tutors
Round Lake, IL statistics Tutors
Wheeling, IL statistics Tutors
{"url":"http://www.purplemath.com/Volo_IL_statistics_tutors.php","timestamp":"2014-04-18T08:16:51Z","content_type":null,"content_length":"24032","record_id":"<urn:uuid:620de309-0dff-4938-9e98-d4029a76fc97>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
MASS YAMAHA/CCRMA Joint Project - Technical Documentation
Matlab/Yamaha Toolbox Documentation
Here you can find all the Matlab functions written for the development of the project. Everything is documented through Matlab's help system.
An example of generating one band of FM noise
This is a simple example that shows the algorithm used to generate one band of FM-modulated noise. All the documentation of the functions used can be found in the toolbox documentation.
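As a rough illustration of what such an algorithm can look like, here is a hypothetical Python sketch (not the project's actual Matlab code); the carrier frequency, modulation depth, and length are assumed values:

```python
import math
import random

# Hypothetical sketch: generate one band of FM-modulated noise by
# frequency-modulating a sine carrier with random modulator samples.
fs = 4000           # sampling rate in Hz (see the sampling-rate note below)
fc = 500.0          # assumed band centre frequency in Hz
depth = 100.0       # assumed frequency deviation in Hz
n = 1000            # number of output samples

random.seed(0)
phase = 0.0
band = []
for _ in range(n):
    m = random.uniform(-1.0, 1.0)            # noise sample driving the modulator
    inst_freq = fc + depth * m               # instantaneous frequency
    phase += 2.0 * math.pi * inst_freq / fs  # integrate phase
    band.append(math.sin(phase))
```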
Analysis of voice and FM masking prototype
This is a simple example that shows the analysis of voice and the FM masking prototype. All the documentation of the functions used can be found in the toolbox documentation.
Tokyo Impulse Responses - Comparison of the delay times
This document shows a plot where the delay of arrival of the signal at each of the PZM microphones can be compared. It's a plot of the impulse responses.
A note on the Sampling Rate selection for generating the FM Noise
Explanation of the best sampling-rate selection to use in the FM noise algorithm.
Power Spectral Densities and Spectrogram of noise bands at FS = 4000 Hz
Explanation of the best sampling-rate selection to use in the FM noise algorithm.
Beta Test - Noise Conditions Strategies
Explanation and generation of the set of masking noises to be used in the Beta Test.
Experiment 02 - Analysis
Space Study
{"url":"https://ccrma.stanford.edu/~jcaceres/yamaha/documentation/","timestamp":"2014-04-17T11:26:14Z","content_type":null,"content_length":"9544","record_id":"<urn:uuid:cfcac461-7d2a-4a8d-a29a-abf43f46896b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Derivative question
October 18th 2010, 03:17 PM #1
Oct 2010
Derivative question
I don't know how to find where the derivative exists or where it fails to exist, or how to find all the points where it exists, for these functions.
(By the way, it's supposed to be one big brace holding both pieces.)
f(x) = { (x-1)^3, x ≥ 1; x^3 - 3x + 2, x < 1 }
1. show f(x) is continuous at x = 1 using the three-part definition of continuity at a point. then ...
2. show f(x) is differentiable at x = 1 using the definition of a derivative at a specific point ... you'll need to show
$\displaystyle \lim_{x \to 1^-} \frac{f(x) - f(1)}{x-1} = \lim_{x \to 1^+} \frac{f(x) - f(1)}{x-1}$
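Working those limits out for this particular $f$ (note $f(1) = (1-1)^3 = 0$): for $x > 1$, $\displaystyle \frac{f(x)-f(1)}{x-1} = \frac{(x-1)^3}{x-1} = (x-1)^2 \to 0$, and since $x^3 - 3x + 2 = (x-1)^2(x+2)$, for $x < 1$, $\displaystyle \frac{f(x)-f(1)}{x-1} = (x-1)(x+2) \to 0$. Both one-sided limits equal $0$, so $f$ is differentiable at $x = 1$ with $f'(1) = 0$.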
October 18th 2010, 03:48 PM #2
{"url":"http://mathhelpforum.com/calculus/160147-derivative-question.html","timestamp":"2014-04-20T05:58:16Z","content_type":null,"content_length":"33566","record_id":"<urn:uuid:8d26e13c-000b-4fd8-8566-abf1624cd3c1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts Categorized as 'Words & Linguistics'—Wolfram|Alpha Blog
CATEGORY: Words & Linguistics
Whether you’re trying to find the perfect word in Scrabble or study the languages of the world, Wolfram|Alpha has always provided computational insights into how we communicate. Now we’re taking that
a step further—with data from the American Community Survey, we can take a closer look at where different languages are spoken in the United States. More »
If you’re a big music fan (and who isn’t?), you’ve probably been carrying around a lot of the lyrics to your favorite songs and albums in your head. It’s unlikely that you get the chance to show off that expertise very often, though. But now’s your chance! Cash in your music knowledge for cool Wolfram prizes in the Alpha Albums contest. More »
I’ll admit that I am an awful singer, who also happens to sing a lot. But, oddly enough, there aren’t many songs that I know all the words to by heart. When I sing along to one of my favorite songs,
I tend to fill in the blanks with plenty of hmms, hums, and other nonsensical gibberish. The end result can be both hilarious and excruciating to hear. More »
A lot of cool things happened this summer on Wolfram|Alpha and the Wolfram|Alpha Blog. And just wait—we have even better stuff planned for the coming months! But in case you missed it, here’s a quick
recap of some of our best posts from this summer. More »
We here at Wolfram are, by and large, a bunch of nerds. This shouldn’t be that surprising, especially after looking at the people and fictional characters we’ve turned into mathematical curves on
Wolfram|Alpha. Our curves of internet memes, cartoon and video game characters, celebrities, and mathematical formulas have been incredibly popular. As many of our fellow nerds get ready to go back
to school, we’re celebrating nerddom and showing our appreciation for Wolfram’s users—by letting one of you decide the next curve to be featured in Wolfram|Alpha. More »
Even if you’re a pretty big word nut, you may not think of Wolfram|Alpha as your go-to source for learning more about the English lexicon. You might be surprised, then, by the discoveries and
computations that you can make with Wolfram|Alpha’s word data. More »
Superlatives, like hyperbole, are my favorite thing. So it is with the greatest excitement that I am devoting this blog post to superlatives and range searching, as Wolfram|Alpha has again expanded
its functionality in these areas.
I once heard from an actor pretending to be a scientist that the denser an element is, the better that element is for fighting terrible monsters. I cannot speak on the accuracy of that statement, as
I am not an actor pretending to be a scientist, but if you wanted to apply superlatives to chemistry, Wolfram|Alpha can do that. More »
When I was younger, I held the naive and incorrect view that mathematics was divorced from the arts. As I’ve gotten older, I’ve become more aware of not only how mathematics is the foundation for any
of the hard sciences, but also how it is intrinsically linked to essentially any form of creativity. Certainly users of our Wolfram Music Theory Course Assistant could have told me that, but I’m not
just referring to music. In truth, I’m not even trying to make some highbrow appeal to abstract art, either, although I happen to rather like that sort of thing. What I’m trying to say is that
mathematical equations can make pretty pictures.
Wolfram|Alpha is different from most of the tools out there on the web that you might use to get answers. Rather than inundate you with lists of links to web pages that may or may not be useful,
Wolfram|Alpha works to understand your query.
What really sets these different approaches apart is how they deal with complexity in queries. Whether there are many concurrent factors to your question or you have a unique math computation with an
answer that simply does not exist on some web page, Wolfram|Alpha is your best bet for a web service that actually understands what you are asking.
One of the ways that complexity can appear in queries is in depth, when there are multiple steps to a question. To understand what we mean by “depth,” think of the beautiful Matryoshka dolls that all
fit inside of each other.
To answer a query like “elevation of Steven Spielberg’s birthplace“, the first step is to recognize that “Steven Spielberg” and “birthplace” can be combined to form a location.
(elevation) of ((Steven Spielberg’s) (birthplace))
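A toy sketch makes the inside-out evaluation concrete. The lookup table below is invented for illustration; it is not Wolfram|Alpha's actual data or machinery:

```python
# Toy illustration of evaluating a "deep" query from the inside out:
# resolve the innermost entity first, then apply each property outward.
PROPERTIES = {
    ("Steven Spielberg", "birthplace"): "Cincinnati",
    ("Cincinnati", "elevation"): "482 ft",
}

def resolve(entity, properties):
    # Apply the innermost property first, then work outward.
    for prop in properties:
        entity = PROPERTIES[(entity, prop)]
    return entity

print(resolve("Steven Spielberg", ["birthplace", "elevation"]))  # -> 482 ft
```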
Who says data doesn’t have a lighter side to it?
Here at Wolfram|Alpha we are constantly adding data from the critical domains of science and socioeconomics and making all of it computable in order to provide new insights as well as novel ways of
looking at the world we live in.
But once in a while we like to throw in something fun and exciting, and one such new area that we have added is detailed information on over 150 types of keyboards from all over the world. Ever wondered how many keys are on the third row of a US keyboard, or which fingers you would use when typing the first six words of the Gettysburg Address, “Four score and seven years ago”, on your keyboard?
Today marks an important milestone for Wolfram|Alpha, and for computational knowledge in general: for the first time, Wolfram|Alpha is now on average giving complete, successful responses to more
than 90% of the queries entered on its website (and with “nearby” interpretations included, the fraction is closer to 95%).
I consider this an impressive achievement—the hard-won result of many years of progressively filling out the knowledge and linguistic capabilities of the system.
The picture below shows how the fraction of successful queries (in green) has increased relative to unsuccessful ones (red) since Wolfram|Alpha was launched in 2009. And from the log scale in the
right-hand panel, we can see that there’s been a roughly exponential decrease in the failure rate, with a half-life of around 18 months. It seems to be a kind of Moore’s law for computational
knowledge: the net effect of innumerable individual engineering achievements and new ideas is to give exponential improvement.
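That decay model is easy to state concretely. The starting failure rate below is an assumed illustrative value, not actual Wolfram|Alpha data:

```python
# An illustrative form of the exponential decay described above: a failure
# rate that halves every 18 months.
def failure_rate(f0, months, half_life=18.0):
    return f0 * 2 ** (-months / half_life)

print(failure_rate(0.40, 36))  # two half-lives later: 0.40 -> 0.10
```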
For hundreds of years, scholars have carefully studied the plays of Shakespeare, breaking down the language and carefully dissecting every act and scene. We thought it would be interesting to see
what sorts of computational insights Wolfram|Alpha could provide, so we uploaded the complete catalog of Shakespeare’s plays into our database. This allows our users to examine Romeo and Juliet,
Macbeth, Othello, and the rest of the Bard’s plays in an entirely new way. More »
With Halloween around the corner, everyone’s thinking about costumes, trick-or-treating, and jack-o’-lantern carving and figuring out what to do with a 1,818 pound pumpkin. While the latter might
only be true for the owners of this year’s largest pumpkin, Wolfram|Alpha has something for everyone this Halloween. The nearly one-ton squash belongs to a farmer from Quebec, Canada. Besides carving
it into a giant jack-o’-lantern, the next best thing to do with that much pumpkin is make enough pumpkin pie for a small town. A common recipe for a pumpkin pie calls for two cups of pumpkin. Using
Wolfram|Alpha, we find that 1,818 pounds of pumpkin will allow us to make 3,550 pumpkin pies.
Hopefully you are in a giving mood, so you can cut each pie into eight slices to come up with just enough to share with the entire town of Allen Park, Michigan. With 28,210 people in Allen Park and
28,400 slices of pie, you’re still left with 190 slices to put in the freezer for later. More »
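The slice arithmetic above checks out in a few lines:

```python
# A quick check of the pumpkin-pie arithmetic.
pies = 3550                  # pies from 1,818 lb of pumpkin at 2 cups each
slices = pies * 8            # cutting each pie into eight slices
population = 28210           # Allen Park, Michigan
leftover = slices - population
print(slices, leftover)      # 28400 slices, 190 left for the freezer
```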
This Thursday, we’ll celebrate the Thanksgiving holiday here in the United States. The first U.S. National Thanksgiving was celebrated on November 26, 1789. The holiday was originally established to
show gratitude for a plentiful harvest and to give thanks for relationships with family and friends. A customary U.S. Thanksgiving celebration is centered on sharing a great feast that includes
turkey, stuffing, cranberries, and more with loved ones. (Of course, in recent years, we’ve also tossed in football and holiday shopping.)
A cornucopia is a traditional centerpiece that symbolizes abundance and is often found on a Thanksgiving meal table. Wolfram|Alpha is a cornucopia of sorts—a horn filled with many trillions of pieces
of data that produce an abundance of facts. In the spirit of the holiday, we thought we’d share some fun Thanksgiving-themed facts we discovered from Wolfram|Alpha.
Fact: A typical turkey bats its wings 3 times per second.
Fact: If you’re in Champaign, Illinois, set your alarm to 6:51am on Thanksgiving Day if you’re planning to rise with the sun to start cooking your holiday bird. Click here for sunrise information for
your location.
Fact: The chill point of cranberries is 2 degrees Celsius.
Fact: There are 5.8 grams of fiber in one serving of cornbread stuffing.
Fact: The first known English use of the word “cornucopia” was in 1508.
Fact: Need to burn off holiday calories? Six hours of Black Friday shopping will burn 1050 calories, or you can knock off 457 calories by staying in and watching football.
Dig into Wolfram|Alpha to find interesting facts of your own. (You might need them in the near future—hint, hint.) Here at Wolfram|Alpha, we’re thankful for all of our dedicated blog readers and
Wolfram|Alpha users.
When we talk on this blog about “making knowledge computable”, the knowledge in question is often mathematical or statistical in nature. But that’s not the only knowledge Wolfram|Alpha can compute.
We’ve always had a solid backbone of dictionary-style information about words, but we’ve been steadily adding new features to that traditional output. Some of it should be quite useful, some of it is
just for fun, and much of it takes advantage of Wolfram|Alpha’s ability to mash up algorithms and data from a wide variety of knowledge domains.
To celebrate National Dictionary Day (October 16)—which honors Noah Webster, often regarded as “the father of the modern dictionary”—you might like to take advantage of this classic word widget,
which provides quick access to some of the more traditional areas of Wolfram|Alpha’s lexicographical data: definitions, pronunciations, synonyms, and more for most English words.
Or grab the next widget if you want to play around with a few of the “fun” features we’ve added, including the ability to compute anagrams and convert words to telephone keypad digits. More »
This week BBC News ran a story on how taxi drivers in Japan are hearing the unexpected sounds of cooing babies on their CB radios. The cause: U.S.-purchased baby monitors from nearby U.S. military
bases that are interfering with communication frequencies. Why would this happen? It’s likely that the baby monitors were manufactured to work on Region 2 communication frequencies, and while being used in Japan, they’re interfering with communication frequencies allocated in Region 3.
The International Telecommunication Union (ITU) divides the world into three regions. Each region has its own frequency-band allocations; that is, in each region, each frequency band is allocated to
a specific use. Sometimes a local authority like the Federal Communications Commission (FCC) in the United States regulates the use of frequency bands.
Say you want to find out how a specific frequency like 2GHz is allocated. Type “frequency allocation 2GHz” into Wolfram|Alpha.
Here at Wolfram|Alpha we’re always asking questions and seeking answers in an effort to make all of the world’s knowledge computable and understandable by everyone (big or small).
We’ve put together a short list of common questions asked by preschool- and kindergarten-aged children that can be answered with Wolfram|Alpha. We hope these examples inspire your child to dream up questions of their own.
Is the Moon bigger than the Earth? Ask Wolfram|Alpha to compare “size of earth, size of moon”, and you’ll discover numerical and graphic size comparisons showing that the Earth is indeed larger than
the Moon.
Chances are your little artists will discover the answer to this question on their own, but they can try asking Wolfram|Alpha what color they get when they “mix red and blue”.
Whether it’s because they’re excited about the party or just turning a year older, the birthday countdown is always on! Simply ask Wolfram|Alpha about the date of the child’s upcoming birthday, such
as “October 8 2010”, to learn the number of days, weeks, or months until the big day.
More »
Hello, fellow readers of the Wolfram|Alpha Blog—my name’s Justin. In just a few short weeks, I’ll be graduating from the University of Illinois at Urbana-Champaign. Over the years I’ve found my own
way of getting things done in regards to homework and studying routines. But this semester I realized there were tools available that would make studying and completing assignments easier and help me
understand better. One tool that has become increasingly valuable in my routine and those of other students on my campus is Wolfram|Alpha. Recently, I was invited to share how Wolfram|Alpha is being
used by students like myself.
Being a marketing major, I had to take some finance and accounting courses, but I was a bit rusty with the required formulas and the overall understanding of cash flow concepts, such as future cash flows and the net present value of a future investment. A friend recommended I check out Wolfram|Alpha’s finance tools, and they’ve become indispensable in my group’s casework for the semester. For each proposed future investment we were met with, we would go directly to Wolfram|Alpha to compute the cash flows. We even went as far as to show screenshots, such as the one below, of inputs and outputs in our final case presentation last week.
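As a sketch of the kind of computation involved (the 10% discount rate and cash flows below are illustrative values, not the actual case data):

```python
# A net-present-value computation of the sort such finance pods perform.
def npv(rate, cashflows):
    # cashflows[0] occurs now; cashflows[t] arrives after t periods
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Invest 1000 now, receive 500 at the end of each of three years, at 10%:
print(round(npv(0.10, [-1000, 500, 500, 500]), 2))  # -> 243.43
```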
I’ve met other students on my campus who have found Wolfram|Alpha to be helpful in their courses. A few months ago while studying in the library, I walked by a table of freshman students all using
Wolfram|Alpha on their laptops. I decided to stop and chat with them because I knew one of the girls. They explained how they were using Wolfram|Alpha to model functions and check portions of their
math homework. All three girls are enrolled in Calculus III, and not exactly overjoyed at the prospect of future, and most likely harder, math classes. More »
There’s no better time than the rainy spring season to bring friends and family together for a game night. If you happen to find yourself at a table full of ruthless Scrabble players, you might find
your new best friend in Wolfram|Alpha.
The Scrabble functionality in Wolfram|Alpha has useful tools, such as a points calculator and a dictionary word verifier. Wolfram|Alpha can also suggest other possible words. Currently, Wolfram|Alpha supports the American English, International English, and French versions of Scrabble. Each of these versions has its own scoring system. Wolfram|Alpha’s GeoIP capabilities will return results based on the default version for the user’s location. However, you can specify a version in your input, like “International English Scrabble umbrella”.
Here are a few examples:
You can find the results for words such as umbrella by entering “Scrabble umbrella” into Wolfram|Alpha:
The results pod tells us that the score is 12 and that this word is in the regional Scrabble dictionary (American in this case). It also gives other words that can be made with these letters, listed
in score order. More »
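The basic scoring is simple to sketch. This uses the standard American English tile values and is an illustration, not Wolfram|Alpha's code:

```python
# Score a word with standard American English Scrabble tile values
# (no board multipliers or blanks).
TILE_POINTS = {
    **dict.fromkeys("aeilnorstu", 1),
    **dict.fromkeys("dg", 2),
    **dict.fromkeys("bcmp", 3),
    **dict.fromkeys("fhvwy", 4),
    "k": 5,
    **dict.fromkeys("jx", 8),
    **dict.fromkeys("qz", 10),
}

def scrabble_score(word):
    return sum(TILE_POINTS[ch] for ch in word.lower())

print(scrabble_score("umbrella"))  # -> 12, matching the pod above
```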
During the holidays we posted “New Features in Wolfram|Alpha: Year-End Update” highlighting some of the most notable datasets and enhancements added to Wolfram|Alpha since its launch this past May.
We are thrilled by the questions and feedback many of you posted in the comments section. Your feedback is incredibly valuable to the development of Wolfram|Alpha. Many of the additions presented in
the post were the result of previous suggestions from Wolfram|Alpha users.
We hope to continue this dialogue as we update Wolfram|Alpha’s ever-growing knowledge base in 2010. You wrote 170-plus comments to the “Year-End Update” post, and we’ve sent questions from those
comments to Wolfram|Alpha’s developers and domain experts for answers. We’ll be reporting their responses in a series of blog posts.
So without further ado…
Zach Q: Wonderful to hear about, yet my regular challenge raises its head again. I type in “plasma physics” and get a definition—but nothing more. I type in “plasma temperatures”, “gas plasma”,
“ionized gas” and get nothing. I applaud the notion of making sure Wolfram|Alpha has information relevant to the public interest (ecology, environment, employment, salaries, cost of living, and all
that), but you’re missing an entire branch of physics and an entire state of matter. I’d love to compute, for example, the temperature of a certain firework as it explodes, and then relate that to
whether the chemicals within have been heated to plasma or are simply burning brightly, and which additives burn the longest (and thus have more chance of landing on the audience while still hot).
Pure exploration of data based on something cool and pretty.
On the other hand, the more you add, the more holes you’ll find as people search and then become frustrated when specific things they want aren’t available. Please keep tracking your “cannot find” queries!
A: Although we haven’t yet covered every possible domain of knowledge, that’s certainly our goal—and feedback like yours is definitely considered and added to our “to-do” list. Each time a query
produces one of those “Wolfram|Alpha isn’t sure how to compute an answer from your input” messages, it shows up in our logs. Sometimes we have the data, but need to tweak Wolfram|Alpha’s linguistic
code so it recognizes more types of questions. If we don’t have the data, someone looks closely at your question and at sources that might be able to answer such questions, and more often than not
those sources are incorporated into our planning. Many of the features mentioned in our year-end review were direct responses to user requests, and many more are in the works.
Jim Clough Q: I have just downloaded W/A for iPhone, but haven’t had much chance to try it yet. Two questions:
1. My first query to W|A, about Olympic marathon winners, failed with “Could not connect to a W|A server” or something like that. I thought the point of the downloaded version was to free you from Wi-Fi.
2. Given the ever changing nature of knowledge and your impressive programme of developments, can iPhone customers expect updates in the future?
A: As we’ve noted before, the iPhone and iPod touch are terrific platforms, but they simply aren’t powerful enough to solve many queries in a reasonable amount of time, if at all; the Wolfram|Alpha
App for the iPhone does require an internet connection. Users of the app will therefore benefit from all the same data and algorithm updates that are added weekly to the main Wolfram|Alpha website,
as well as ongoing bug fixes and enhancements to the app itself. More »
Prior to releasing Wolfram|Alpha into the world this past May, we launched the Wolfram|Alpha Blog. Since our welcome message on April 28, we’ve made 133 additional posts covering Wolfram|Alpha news,
team member introductions, and “how-to’s” in a wide variety of areas, including finance, nutrition, chemistry, astronomy, math, travel, and even solving crossword puzzles.
As 2009 draws to a close we thought we’d reach into the archives to share with you some of this year’s most popular blog posts.
Rack ’n’ Roll
Take a peek at our system administration team hard at work on one of the
many pre-launch projects. Continue reading…
The Secret Behind the Computational Engine in Wolfram|Alpha
Although it’s tempting to think of Wolfram|Alpha as a place to look up facts, that’s only part of the story. The thing that truly sets Wolfram|Alpha apart is that it is able to do sophisticated
computations for you, both pure computations involving numbers or formulas you enter, and computations applied automatically to data called up from its repositories.
Why does computation matter? Because computation is what turns generic information into specific answers. Continue reading…
Live, from Champaign!
Wolfram|Alpha just went live for the very first time, running all clusters.
This first run at testing Wolfram|Alpha in the real world is off to an auspicious start, although not surprisingly, we’re still working on some kinks, especially around logging.
While we’re still in the early stages of this long-term project, it is really gratifying to finally have the opportunity to invite you to participate in this project with us. Continue reading…
Wolfram|Alpha Q&A Webcast
Stephen Wolfram shared the latest news and updates about Wolfram|Alpha and answered several users’ questions in a live webcast yesterday.
If you missed it, you can watch the recording here. Continue reading… More »
If you’re writing an essay for history or a speech for debate class, Wolfram|Alpha is a great resource. It has an enormous words and linguistics database that you can use for such things as word definitions, word origins, synonyms, and hyphenation. Wolfram|Alpha can even compute the number of pages a given text might produce based on the number of words it contains, such as “500 words in French”. Wolfram|Alpha also has the ability to compute details such as how long it should take you to type, read, and deliver that 500-word speech you’ve been preparing.
Type “word contest”, and Wolfram|Alpha will retrieve the word data for the English word “contest”. The results tell you many definitions of the word, that its first known recorded use was in 1603,
that it rhymes with “conquest”, and a wealth of other data on just that word. More »
Wolfram|Alpha is a great resource for writers. It has an enormous words and linguistics database that writers can use for such things as word definitions, origins, synonyms, hyphenation, and Soundex.
Many of our world’s advancements can be attributed to the evolution of communication mediums and styles. Today we can tweet a message in 140 characters or less around the world in a matter of
seconds. But long before the days of the radio, the telephone, the fax machine, and email, there was the original text message—Morse code. And Wolfram|Alpha can translate a string of characters to and
from Morse code.
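A translator of the sort described can be sketched in a few lines. This is an illustration only, not Wolfram|Alpha's implementation, and the table covers just the handful of letters needed for the demo:

```python
# A small Morse code translator sketch (illustrative only, not
# Wolfram|Alpha's implementation); the table covers only the
# letters used in the demo below.
MORSE = {
    "D": "-..", "E": ".", "H": "....", "L": ".-..",
    "O": "---", "R": ".-.", "S": "...", "W": ".--",
}
REVERSE = {code: letter for letter, code in MORSE.items()}

def to_morse(text):
    # letters -> dot/dash codes separated by spaces; " / " between words
    return " / ".join(
        " ".join(MORSE[ch] for ch in word) for word in text.upper().split()
    )

def from_morse(signal):
    return " ".join(
        "".join(REVERSE[code] for code in word.split())
        for word in signal.split(" / ")
    )

print(to_morse("SOS"))            # ... --- ...
print(from_morse("... --- ..."))  # SOS
```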
Morse code was introduced to the world over 160 years ago, when Samuel Morse and Alfred Vail invented a telegraph that when triggered by electrical pulses made indentions in a paper tape with a
stylus. They also developed a code of short dots and long dashes to represent letters, numbers, and special characters, allowing messages to be sent via those indentions. The sounds produced when the
telegraph processed the electrical pulses became so familiar that adept users could translate the code by sounds, and the code would eventually be adapted for broadcast across the radio airwaves.
This system would go on to become a major form of international communication, especially for those working and traveling in the air or out at sea.
Wolfram|Alpha introduces many new methods for understanding linguistic inputs. Those methods allow you and others around the world to ask it questions in natural ways. In this video, a developer
working on Wolfram|Alpha’s linguistics shares a bit about her role in building and improving the system’s understanding to help you get the answers you’re looking for.
You can watch more interviews with Wolfram|Alpha team members here.
We’ve updated another entry thanks to feedback sent to Wolfram|Alpha. We’ve now changed linguistic priority settings so that “blog” is no longer interpreted as the math expression b log(x) by default.
There’s new data flowing into Wolfram|Alpha every second. And we’re always working very hard to develop the core code and data for the system. In fact, internally, we have a complete new version of
the system that’s built every day. But before we release this version for general use, we do extensive validation and testing.
In addition to real-time data updates, we’ve made a few changes to Wolfram|Alpha since its launch three weeks ago. But today, as one step in our ongoing, long-term development process, we’ve just
made live the first broad updates to the core code and data of Wolfram|Alpha.
Today if you give input to Wolfram|Alpha in a language other than English, you’ll most likely see something like:
But in making Wolfram|Alpha accessible to as many people around the world as possible, our goal is eventually to have it understand every one of these languages.
A certain amount of Wolfram|Alpha input is actually quite language independent—because it’s really in math, or chemistry, or some other international notation, or because it’s asking about something
(like a place) that’s always referred to by the same name.
But inevitably many inputs do depend on human language—and in fact even now about 5% of all inputs that are given try to use a language other than English.
Wolfram|Alpha knows quite a bit about the general properties of essentially every language (Spanish, Swahili, …). But it doesn’t yet know how to interpret input in any language other than English.
(Comments to a blog post this morning on Gizmodo.)
Type in the word “dumb” to Wolfram|Alpha and you’ll get all sorts of interesting information. Like that “dumb” was first recorded in use in 1323—686 years ago.
Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more.
Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies…
Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes!
Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon?
Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step.
In Code: A Mathematical Journey
At last! Here is a multifaceted book that is truly inspiring — inspiring to the parent in us, to the teacher in us, to the student in us, and to we who know and love the problem-solving obsession
mathematics can often provide. The book is about the development of mathematical passion and excellence. The exposition, a joint collaboration of Sarah Flannery, then a secondary school girl, and her
father, David, is clearly written and is exceptional for its bare and realistic honesty.
In this review, you will find a short summary of the narrative story, Sarah's mathematical development and maturation, the book's mathematics, and a little about her projects' evolution.
At its simplest level, this book is the story of Sarah Flannery, an Irish girl, who at the age of 15 was talked by one of her teachers into entering the annual national science fair to be held in
Dublin a few months later, in January. Her father suggested she enter a mathematics project that investigated cryptography. This was her first science fair, and her initial entry into the January
1998 Irish Young Scientist Exhibition received a first place in the Individual Intermediate Mathematics, Physics and Chemistry division. Along with this prize, she was chosen to represent Ireland at
the Intel International Science and Engineering Fair (ISEF), which took place in Fort Worth that May. Sarah continued her independent studies and her new ISEF entry won a third-place Karl Menger
Memorial Award from the American Mathematical Society, a fourth-place Grand Award in Mathematics, and the prestigious $2000 Intel Fellows Achievement Award.
Following considerable additional study, and now 16 years old, Sarah produced a new project and won the IR£1000 first place in the 1999 Irish Young Scientist Exhibition for her entry in the
Mathematics, Physics and Chemistry competition, and thence in September to Thessalonìki where she became a first-place winner of the 1999 European Union Young Scientist Award. Along with her European
Young Scientist Award, Sarah was given an invitation to attend the Nobel Prize ceremonies in Stockholm. Fame, trophies, and media attention came following her 1999 Irish Young Scientist award, and
Sarah was invited to lecture internationally on mathematics, puzzle solving, and on her projects.
Sarah Flannery is the eldest of five children. She lives rurally in a farmhouse near Blarney, near Cork, Ireland. However, it is not a typical farm household. Sarah and her four brothers live with
their parents, David and Elaine, who respectively lecture in mathematics and microbiology at the Cork Institute of Technology (CIT). There is a blackboard in their kitchen, and it is on this
blackboard that David has posed mathematical puzzles for his children to solve. The puzzles were clearly chosen to bring the children into enjoying abstract reasoning. They included magic square
construction, crossing the river with lion, goat and cabbage, liquid-measuring with various containers, etc. And take note: Sarah does not fit the nose-always-in-a-book nerd stereotype. She rides horses, plays basketball and hurling, goes boating, and enjoys other outdoor and team sports.
On Tuesday nights, David conducts a non-credit course at CIT called Mathematical Excursions. The course has minimal prerequisites and explores popular and recreational mathematical themes. David has
allowed Sarah to participate in the three-hour sessions. It is here that she became familiar with modular arithmetic, elementary number theory and monoalphabetic-substitution cryptography, and the
RSA public-key algorithm.
When she was 15, Sarah entered her "transition year" in the Irish secondary school system. This is an optional year during which students can investigate a large number of subjects prior to going on
to the final two years of more concentrated study. David's suggestion for Sarah's first Young Scientist project was that she program several cryptographic algorithms and demonstrate encryption and
decryption using two laptops. Sarah did this using Mathematica. Using handmade cardboard coding disks and other props, she was able to explain her project to the judges and to answer their questions
about how everything worked. By no means did she simply make props and code the algorithms: she learned the underlying mathematics. For example, to help her understand the workings of implementing
the RSA algorithm, David found relevant papers for her to read. As a 15-year old, it took her many weeks of diligent study to master A. R. Meijer's "Groups, Factoring and Cryptography," Math. Mag. 69
(1996), pp. 103-109.
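The RSA scheme Sarah studied and implemented can be sketched at toy scale. The parameters below are purely didactic and offer no real security; real RSA moduli are hundreds of digits long:

```python
# A toy-scale RSA sketch (illustration only; these tiny parameters
# are didactic and insecure). Requires Python 3.8+ for pow(e, -1, phi).
p, q = 61, 53
n = p * q                # public modulus, 3233
phi = (p - 1) * (q - 1)  # Euler's totient of n, 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: e*d = 1 (mod phi)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

message = 42
assert decrypt(encrypt(message)) == message
```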
As part of her transition year, Sarah was given the opportunity to work in industry for two weeks. After winning the '98 Young Scientist award, Sarah worked at Baltimore Technologies' Dublin office
where she was assigned to read a paper describing an algorithm by one of their cryptologists, vacationing Michael Purser. This algorithm, designed to speed-up computationally the RSA algorithm,
substituted noncommutative multiplications over the quaternions for the RSA's more costly exponentiations. Sarah was asked to implement a prototype of Purser's algorithm, which she did in Mathematica.
Sarah asked and received permission from Baltimore Technologies to modify this algorithm and use it as part of her May entry in the ISEF. David showed Sarah the noncommutative multiplicative
properties of matrices, and she implemented a demonstration of the modified algorithm using 2x2 matrices for Fort Worth. David had her read several additional papers and mathematical biographies, and
she learned of Cayley's early work. Sarah continued evolving the project after the ISEF, and matured the coding, the algorithm and the presentation for her entry for the 1999 Young Scientist
Exhibition. At this point she asked permission and named her new algorithm Cayley-Purser, or CP.
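The algebraic ingredient behind this substitution, that 2x2 matrix multiplication modulo n does not commute, is easy to demonstrate. The matrices and modulus below are arbitrary illustrations; this is not the Cayley-Purser algorithm itself:

```python
# 2x2 matrix multiplication modulo n -- the noncommutative operation
# used in place of RSA's commutative modular exponentiation. The
# matrices here are arbitrary; this is not the CP algorithm itself.
def matmul(A, B, n):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % n
             for j in range(2)] for i in range(2)]

n = 101
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
print(matmul(A, B, n))  # [[2, 3], [4, 7]]
print(matmul(B, A, n))  # [[3, 4], [4, 6]] -- so AB != BA in general
```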
The CP algorithm's speed was compared to that of RSA in her project, and her conclusion was that CP runs approximately 22 times faster than RSA. Sarah's paper presents the mathematics underlying both
algorithms as well as empirical results. While Sarah's paper conjectured that the algorithm was secure, it left open the possibility that it might be flawed. After winning Young Scientist '99, the CP
algorithm came under close scrutiny and a crucial flaw was identified by a mathematician at University College Dublin and by Purser and William Whyte of Baltimore Technologies. A consequence of the
Cayley-Hamilton Theorem, the flaw is intrinsic to the algorithm, and it was a painful experience for Sarah to realize that her algorithm was not secure after all. This left Sarah with the problem of
how to present her work in Thessalonìki in June. Sarah's decision was to keep the discovered flaw secret prior to the European Union Young Scientist Exhibition, but to include a full disclosure and
explanation of the flaw as an appendix to her entry's paper. She did, and the entry won. Sarah called her winning project "Cryptography-A New Algorithm Versus the RSA." Its text can be found in one
of the book's appendices and also at http://www.cayley-purser.ie/.
As mentioned above, Sarah's achievements were greeted with a great deal of media attention. Even the London Times waxed poetic in their promotion of Sarah to the stature of mathematical genius and
her work to a revolutionary and patentable technology. As detailed in the book, the attention and such gushingly imprecise reporting caused Sarah, David and her mentors at Baltimore Technologies some
embarrassments and difficulties. It was here that I believe the book really shines. Flannery père et fille make no bones about stating that Sarah is a bright young woman, but not a genius. Thanks to
David, Sarah has read Men of Mathematics and the biographies of many mathematicians. She knows what Gauss and Euler achieved in their youth, and she understands that what she has been doing is not of
that calibre. She also knows about the important history of counterexamples in mathematics as well as in cryptology. There is no false modesty here. Sarah's frank voice rings very true and maturely
on these points, and I believe these aspects are the best reasons for giving this book to bright young students.
The book was originally published in the UK by Profile Books, who did not flinch at the prospect of producing a book about a young celebrity that included a lot of equations. The initial Flannery
collaboration included a selection of the puzzles that had intrigued Sarah along with their solutions (many of these are developed in the text while others are teasingly presented in the back of the book).
The US publisher, Workman, responded to criticism of the Profile edition and asked for even more mathematics to be included. The result is a narrative that is interrupted by several long sections
that explain: the Euclidean algorithm, sieve of Eratosthenes, complexity of factoring, distribution of primes, the Chinese Remainder Theorem, Fermat's little theorem, pseudoprimes, casting out nines,
matrix arithmetic and inversion, Sarah's conjecture on the order of GL(2, Z[p]), etc. With few exceptions, these expositions are complete and should be within the grasp of US high school students.
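Several of the listed topics lend themselves to short programs. For instance, the Euclidean algorithm in its extended form, which also underlies RSA key generation, can be sketched as:

```python
# Extended Euclidean algorithm: returns (g, x, y) with
# a*x + b*y = g = gcd(a, b).
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)
print(g, x, y)  # 2 -9 47  ->  240*(-9) + 46*47 = 2
```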
The revision for the US edition was written while Sarah was particularly busy in school, and so David was largely responsible for much of the newer mathematical exposition. This decision showed
considerable courage on Workman's part — a cursory scan of Workman's publications catalogue shows no mathematics books and only very light popularisations of science writing. There is one obvious
error in the text (on page 136, the calculated 2860 should be 103) and the publisher's typesetter introduced several syntactical errors in the Mathematica code.
Now 20, Sarah has entered her second year of studies at Cambridge, where she is enrolled in a theoretical computing science curriculum.
Marvin Schaefer (bwapast@erols.com) is a computer security expert and was chief scientist at the National Computer Security Center at the NSA, and at Arca Systems. He has been a member of the MAA for
39 years and now operates an antiquarian book store called Books With a Past.
Nilpotent Elements of Residuated Lattices
International Journal of Mathematics and Mathematical Sciences
Volume 2012 (2012), Article ID 763428, 9 pages
Research Article
^1Department of Mathematics, Bam Higher Education Complexes, Kerman, Iran
^2Department of Mathematics, Islamic Azad University, Kerman Branch, Kerman, Iran
Received 28 March 2012; Revised 13 June 2012; Accepted 27 June 2012
Academic Editor: Siegfried Gottwald
Copyright © 2012 Shokoofeh Ghorbani and Lida Torkzadeh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Some properties of the nilpotent elements of a residuated lattice are studied. The concept of cyclic residuated lattices is introduced, and some related results are obtained. The relation between
finite cyclic residuated lattices and simple MV-algebras is obtained. Finally, the notion of nilpotent elements is used to define the radical of a residuated lattice.
1. Introduction
Ward and Dilworth [1] introduced the concept of residuated lattices as a generalization of ideal lattices of rings. The residuated lattice plays the role of semantics for a multiple-valued logic called
residuated logic. Residuated logic is a generalization of intuitionistic logic. Therefore it is weaker than classical logic. Important examples of residuated lattices related to logic are Boolean
algebras corresponding to classical logic, BL-algebras corresponding to Hajek’s basic logic, and MV-algebras corresponding to Lukasiewicz many-valued logic. Residuated lattices have been widely
studied (see [2–8]).
In this paper, we study the properties of nilpotent elements of residuated lattices. In Section 2, we recall some definitions and theorems which will be needed in this paper. In Section 3, we study
the nilpotent elements of a residuated lattice and study its properties. In Section 4, we define the notion of cyclic residuated lattice and we obtain some related results. In particular, we will
prove that a finite residuated lattice is cyclic if and only if it is a simple MV-algebra. In Section 5, we investigate the relation between nilpotent elements and the radical of a residuated lattice.
2. Preliminaries
In this section, we review some basic concepts and results which are needed in the later sections.
A residuated lattice is an algebraic structure such that(1) is a bounded lattice with the least element 0 and the greatest element 1,(2) is a commutative monoid where is a unit element,(3) if and
only if , for all . We denote the residuated lattice by . We use the notation for the bounded lattice .
Proposition 2.1 (see [5, 9]). Let be a residuated lattice. Then one has the following properties: for all ,(1) if and only if ,(2), ,(3), ,(4),(5),(6),(7)if , then and .
An MV-algebra is an algebra with one binary operation , one unary operation , and one constant 0 such that is a commutative monoid and, for all , , , . If is an MV-algebra, then the binary operations
, , , and the constant 1 are defined by the following relations: for all , , , , , .
Remark 2.2. A residuated lattice is an MV-algebra if it satisfies the additional condition: , for any .
Definition 2.3. A nonempty subset of is called a filter of if and only if it satisfies the following conditions:(i)for all and all , if then ,(ii)for all , .
Let be a filter of . For all , we denote and say that and are congruent if and only if and . is a congruence relation on . The quotient residuated lattice with respect to the congruence relation is
denoted by and its elements are denoted by , for .
For all elements of a residuated lattice , define and for all . The order of , in symbols , is the smallest positive integer such that . If such does not exist, we say has infinite order.
Definition 2.4. The residuated lattice is called simple if the only filters of are and .
Proposition 2.5 (see [10]). A residuated lattice is simple if and only if , for every such that .
A filter of is called a maximal filter if and only if it is a maximal element of the set of all proper filters of . The set of all maximal filters of is called the maximal spectrum of and is denoted
by . For any , we will denote . For any , will be denoted by .
Proposition 2.6 (see [10]). Let be a residuated lattice and a proper filter of . Then the followings are equivalent:(i) is a simple residuated lattice,(ii) is a maximal filter, (iii)for any , if and
only if , for some .
Definition 2.7. A residuated lattice is said to be local if and only if it has exactly one maximal filter.
Proposition 2.8 (see [10]). Any simple residuated lattice is local.
We denote by the Boolean center of , that is the set of all complemented elements of the lattice . The complements of the elements in the Boolean center of a residuated lattice are unique.
Theorem 2.9 (see [10]). If is a local residuated lattice, then .
3. Nilpotent Elements of Residuated Lattices
We recall that an element is called nilpotent if and only if is finite. We denote by the set of the nilpotent elements of .
Example 3.1 (see [5]). Consider the residuated lattice with the universe . Lattice ordering is such that , , and elements from and are pairwise incomparable. The operations of and are given by the
following: Then , , , and . Hence .
Remark 3.2. Let be a residuated lattice. Then has order if and only if , for some .
Example 3.3. Let , for and for all . Also, we have and . Define Then becomes a residuated lattice. Let . Then there exists such that . We get that , because . Hence is a simple residuated lattice.
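The symbols of Example 3.3 were lost in extraction; one standard structure fitting its description (a finite chain in which every element below the top is nilpotent, making the lattice simple) is the finite Łukasiewicz chain. Whether this matches the paper's example exactly is an assumption. A sketch under that reading:

```python
# Hedged reconstruction: the finite Lukasiewicz chain {0, 1, ..., N}
# with product x*y = max(0, x + y - N). Whether this matches the
# paper's Example 3.3 exactly is an assumption (its symbols were lost).
def order(x, N):
    """Smallest k with x^k = 0 under the chain product, else None."""
    if x == N:                    # the top element is never nilpotent
        return None
    power, k = x, 1
    while power > 0:
        power = max(0, power + x - N)
        k += 1
    return k

N = 4
orders = {x: order(x, N) for x in range(N + 1)}
print(orders)  # {0: 1, 1: 2, 2: 2, 3: 4, 4: None}
```

Every element below the top has finite order, which is exactly the condition used to conclude that the lattice is simple.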
Theorem 3.4. is a lattice ideal of the residuated lattice .
Proof. It is clear that . Suppose that and . There exists such that . We have . Therefore .
Suppose that . Then there exists such that . By Proposition 2.1(4), we have Hence , and then is a lattice ideal of .
Remark 3.5. An element of a residuated lattice is nilpotent if and only if there is no proper filter of such that .
Theorem 3.6. Let be a residuated lattice and a family of residuated lattices. Then(1),(2).
Proof. (1) Let . Since , then . Hence we get that for all . Also, we have . So there exists such that . Therefore, we obtain that .
(2) Suppose that . Then where is nilpotent in . Thus, for each , there exists such that . Put . Then , that is . The proof of reverse inclusion is straightforward.
For a nonempty subset , the smallest filter of which contains is said to be the filter of generated by and will be denoted by . If with , we denote by the filter generated by . Also, we have for some
Theorem 3.7. Let be an element of a residuated lattice . Then is nilpotent if and only if .
Proof. Let be a finite order. Then there exists integer such that , that is . Therefore .
Conversely, if , then . So there exists an integer such that . Hence has finite order.
Theorem 3.8. Let be an element of order of a residuated lattice . Then the elements , , , of are pairwise distinct.
Proof. Suppose that , for some . Then . But which is a contradiction with the order of . Hence .
4. Cyclic Residuated Lattices
The order of a residuated lattice is the cardinality of and denoted by .
Definition 4.1. Let be a finite residuated lattice. is called cyclic, if there exists such that . is called a generator of .
Theorem 4.2. Let be a cyclic residuated lattice of order . Then there exists an element of order such that where , .
Proof. Since is cyclic of order , then there exists an element of order . By Theorem 3.8, the elements , and , of are pairwise distinct. Hence the cardinality of is . Since and , we get that .
An element of a residuated lattice is called a coatom if it is maximal among elements in .
Theorem 4.3. Let be a cyclic residuated lattice of order . Then is linearly ordered. Moreover the generator of is a unique coatom.
Proof. Since is a cyclic residuated lattice of order , then there exists an element such that by Theorem 4.2. If , then . By Proposition 2.1(7),. Hence is linearly ordered.
It is clear that is a coatom of . We will show that . Since , then there exists such that . We have Therefore . By definition order of , we get that . Hence . Therefore .
In the following example, we will show that the converse of the above theorem may not be true in general.
Example 4.4. Consider the residuated lattice with the universe . Lattice ordering is such that . The operations of and are given by the following: is a finite linearly ordered residuated lattice of order but it is not cyclic because we have , , , and .
In the following theorems, we characterize cyclic residuated lattices.
Theorem 4.5. Let be a cyclic residuated lattice of order . Then is isomorphic to the simple residuated lattice of Example 3.3.
Proof. By Theorems 4.2 and 4.3, there exists an element such that , where . We denote , , and for . By Theorem 4.3, is the generator of and it is a coatom. We will show that It is clear that , if .
We will prove that , if . Suppose that . Then Hence . Since is a coatom, then or . If , then . Since , then which is a contradiction by Theorem 3.8. Hence .
Now, suppose that where . Since , then there exists such that . We have Therefore . We get that . Thus, . On the other hand, since and for all , then . Therefore . We obtain that .
Now, we will show that Suppose that . Then . Since , we get that . Thus .
Suppose that . Since , we have . Therefore . We get that . Let . Then . Consider the following cases.(1)If , then which is a contradiction.(2)If , then which is a contradiction.(3), then . We get
that . Therefore which is a contradiction.We obtain that .
Hence is an isomorphism between and . Since is a simple residuated lattice, then is a simple residuated lattice.
Remark 4.6. Consider the residuated lattice in Example 3.3. Since for all , , then is an MV-algebra.
Corollary 4.7. Every cyclic residuated lattice is a finite simple MV-algebra.
Proof. It follows from Theorem 4.5 and Remark 4.6.
Corollary 4.8. Every cyclic residuated lattice is local and .
Proof. It follows from Theorem 4.5, Proposition 2.8, and Theorem 2.9.
Theorem 4.9. Every finite simple MV-algebra is a cyclic residuated lattice.
Proof. Any simple MV-algebra is isomorphic to a subalgebra of , and also , () is the only subalgebra of with elements [11]. Since , then . Therefore it is cyclic.
Corollary 4.10. A finite residuated lattice is cyclic if and only if it is a simple MV-algebra.
Proof. It follows from Corollary 4.7 and Theorem 4.9.
Theorem 4.11. Every nonzero subalgebra of a cyclic residuated lattice is cyclic.
Proof. Suppose that is a nonzero subalgebra of a cyclic residuated lattice . Then is a simple MV-algebra and is isomorphic to a subalgebra of . Moreover , () is the only subalgebra of with elements.
Hence is a simple MV-algebra. By Theorem 4.9, is a cyclic MV-algebra.
Theorem 4.12. Every finite MV-algebra is a direct product of cyclic residuated lattices.
Proof. Every finite MV-algebra is isomorphic to a finite direct product of finite subalgebras of the standard MV-algebra [11]. Theorem 4.11 yields the theorem.
5. Semisimple Residuated Lattices
The intersection of the maximal filters of a residuated lattice is called the radical of and will be denoted by .
Theorem 5.1. Let be a residuated lattice. Then for all .
Proof. See [12].
Theorem 5.2. Let be a residuated lattice. Then(1),(2) and are homeomorphic topological spaces.
Proof. (1) It is easily seen that , thus
(2) Define , for all , . This function is well defined and surjective. For any and any , we have if and only if . We get that is injective.
Now, we will prove that is continuous and open. Let . By using the above, we get Thus is open. Since is injective and open, then . So is continuous.
Definition 5.3. A residuated lattice is called semisimple if the intersection of all congruences of is the congruence (where, for all , if and only if ).
Remark 5.4 (see [5]). A residuated lattice is semisimple if and only if .
Lemma 5.5. Let be a residuated lattice, , and . Then is a filter of .
Proof. (1) Suppose that . Then . We have Therefore .
(2) Let , where and . We have . Since , then . Therefore and .
Hence is a filter of .
Theorem 5.6. Let be a residuated lattice such that for all there exists such that . Then is semisimple.
Proof. We will show that . Suppose that . By Theorem 5.1, we have that is nilpotent. By assumption, there exists such that . We have . Since is a filter of by Lemma 5.5 and is nilpotent, we get that
. Thus . We obtain that and then is semisimple by Remark 5.4.
1. M. Ward and R. P. Dilworth, “Residuated lattices,” Transactions of the American Mathematical Society, vol. 45, no. 3, pp. 335–354, 1939.
2. H. Freytes, “Injectives in residuated algebras,” Algebra Universalis, vol. 51, no. 4, pp. 373–393, 2004.
3. G. Georgescu, L. Leuştean, and C. Mureşan, “Maximal residuated lattices with lifting Boolean center,” Algebra Universalis, vol. 63, no. 1, pp. 83–99, 2010.
4. P. Hájek, Metamathematics of Fuzzy Logic, vol. 4, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998.
5. T. Kowalski and H. Ono, “Residuated lattices: an algebraic glimpse at logics without contraction,” In press.
6. C. Mureşan, “Dense elements and classes of residuated lattices,” Bulletin Mathématique de la Société des Sciences Mathématiques de Roumanie. Nouvelle Série, vol. 53, no. 1, pp. 11–24, 2010.
7. D. Piciu, Algebras of Fuzzy Logic, Universitaria Craiova Publishing House, Craiova, Romania, 2007.
8. E. Turunen, Mathematics Behind Fuzzy Logic, Physica, Heidelberg, Germany, 1999.
9. E. Turunen and S. Sessa, “Local BL-algebras,” Multiple-Valued Logic, vol. 6, no. 1-2, pp. 229–249, 2001.
10. L. C. Ciungu, “Local pseudo-BCK algebras with pseudo-product,” Mathematica Slovaca, vol. 61, no. 2, pp. 127–154, 2011.
11. R. L. O. Cignoli, I. M. L. D'Ottaviano, and D. Mundici, Algebraic Foundations of Many-Valued Reasoning, vol. 7 of Trends in Logic-Studia Logica Library, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000.
12. U. Höhle, “Commutative, residuated $l$-monoids,” in Non-Classical Logics and Their Applications to Fuzzy Subsets, vol. 32, pp. 53–106, Kluwer Academic Publishers, Dordrecht, The Netherlands,
Why Proofs? Definitions and Axioms
Date: 09/16/2001 at 18:49:30
From: Allie
Subject: Geometry, proofs
Why is a proof needed? Why do you think proofs are important in the
development of a mathematical system such as geometry?
Thank you,
Date: 09/18/2001 at 17:30:49
From: Doctor Achilles
Subject: Re: Geometry, proofs
Hi Allie,
Thanks for writing to Dr. Math.
Proofs are the foundation of all mathematical systems. Geometry is
perhaps the paradigm (or clearest) case of how to build a mathematical
system on proofs.
To do this, you start with definitions and axioms. (I should note that
the example definitions and axioms I give below are by no means
official, they are just samples that I am using to illustrate what a
definition and an axiom are.)
Definitions are things like "A triangle is a set of three line
segments joined at the tips" and the like. Definitions are just a way
to create vocabulary. They don't actually say anything meaningful,
they just give a word meaning. Basically, it's very cumbersome and
difficult to write "a set of three line segments joined at the tips"
over and over again every time you want to refer to such an object.
So instead you make up a word "triangle" for such an object and save
yourself ink.
Axioms are things that you claim are true about the world. For
example, "Given a line and a point not on the line, there is one and
only one parallel line that crosses that point." This is not simply a
meaning of a term, it is a substantive sentence about the world: it's
a claim that something is true. Axioms are usually chosen to be
obvious things that people won't dispute.
Once you have a set of definitions and axioms you see what "follows"
from it; i.e. you assume that your axioms are true, and then use logic
to see what else MUST be true. Basically what mathematics does is show
that if we accept certain _basic_ facts, then we must also accept
certain other _derived_ facts. So if you accept the axioms of
geometry, then you have to accept all the theorems (for example that
the area of a triangle is one half its base times its height).
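The step from axioms to derived facts can be made fully explicit in a proof assistant. A minimal Lean 4 sketch (the hypothesis names `axiom1` and `axiom2` are illustrative, standing in for geometric axioms):

```lean
-- A minimal sketch: once the two hypotheses (our "axioms") are
-- accepted -- that P holds, and that P implies Q -- logic forces
-- us to accept the derived fact Q.
theorem derived_fact (P Q : Prop) (axiom1 : P) (axiom2 : P → Q) : Q :=
  axiom2 axiom1
```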
Proofs are not just for historic significance. Math grows through new
proofs. Mathematicians are constantly using proofs to find out new
things. In a way, math is about pushing the limits of what we MUST
accept if we choose to accept a few things that seem obvious at first.
Without proofs there really is no math.
I hope this helps. If you have other questions about this or other
topics, please write back.
- Doctor Achilles, The Math Forum
Cryptology ePrint Archive: Report 2007/301
On Asymptotic Behavior of the Ratio Between the Numbers of Binary Primitive and Irreducible Polynomials
Yuri Borissov, Moon Ho Lee, and Svetla Nikova
Abstract: In this paper we study the ratio $\theta(n) = \frac{\lambda_2(n)}{\psi_2(n)}$, where $\lambda_2(n)$ is the number of primitive polynomials and $\psi_2(n)$ is the number of irreducible polynomials in $GF(2)[x]$ of degree $n$. Let $n = \prod_{i=1}^{\ell} p_i^{r_i}$ be the prime factorization of $n$, where the $p_i$ are odd primes. We show that $\theta(n)$ tends to 1 and $\theta(2n)$ is asymptotically not less than 2/3 when the $r_i$ are fixed and the $p_i$ tend to infinity. We also describe an infinite series of values $n_{s}$ such that $\theta(n_{s})$ is strictly less than $\frac{1}{2}$.
Publication Info: Extended abstract of a talk at Finite Fields and applications (FQ8), Melbourne, Australia, July 2007
Date: received 2 Aug 2007, last revised 15 Aug 2007
Contact author: svetla nikova at esat kuleuven be
Available format(s): PDF | BibTeX Citation
Version: 20070815:073802
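Both counts in the abstract are directly computable for small degrees from the standard formulas $\psi_2(n) = \frac{1}{n}\sum_{d \mid n} \mu(d)\,2^{n/d}$ and $\lambda_2(n) = \varphi(2^n - 1)/n$. The sketch below is not code from the paper, just a quick way to evaluate $\theta(n)$ for a few values:

```python
def mobius(n):
    """Mobius function mu(n) via trial factorization (fine for small n)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def euler_phi(n):
    """Euler's totient phi(n) via trial factorization."""
    result, d = n, 2
    while d * d <= n:
        if n % d == 0:
            while n % d == 0:
                n //= d
            result -= result // d
        d += 1
    if n > 1:
        result -= result // n
    return result

def num_irreducible(n):
    """psi_2(n): monic irreducible polynomials of degree n over GF(2)."""
    return sum(mobius(d) * 2 ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def num_primitive(n):
    """lambda_2(n) = phi(2^n - 1) / n."""
    return euler_phi(2 ** n - 1) // n

for n in (2, 3, 4, 5):
    # theta(n) = lambda_2(n) / psi_2(n); e.g. theta(4) = 2/3.
    print(n, num_primitive(n), num_irreducible(n),
          num_primitive(n) / num_irreducible(n))
```

Note that factoring $2^n - 1$ by trial division limits this to small $n$; the paper's asymptotic statements are about much larger degrees.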
The Online Median Problem
Results 1 - 10 of 59
, 2006
Cited by 369 (18 self)
Abstract. The PASCAL Visual Object Classes Challenge ran from February to March 2005. The goal of the challenge was to recognize objects from a number of visual object classes in realistic scenes
(i.e. not pre-segmented objects). Four object classes were selected: motorbikes, bicycles, cars and people. Twelve teams entered the challenge. In this chapter we provide details of the datasets,
algorithms used by the teams, evaluation criteria, and results achieved. 1
- Journal of the ACM , 1999
- In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science , 1999
Cited by 209 (14 self)
We present improved combinatorial approximation algorithms for the uncapacitated facility location and k-median problems. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of 2.414 + ε in Õ(n²/ε) time. This also yields a bicriteria approximation tradeoff of (1 + ε, 1 + 2/ε) for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of 1.853 in Õ(n³) time. This is already very close to the approximation guarantee of the best known algorithm which is LP-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving 1.728....
- Journal of the ACM , 2001
Cited by 100 (13 self)
We present a natural greedy algorithm for the metric uncapacitated facility location problem and use the method of dual fitting to analyze its approximation ratio, which turns out to be 1.861. The
running time of our algorithm is O(m log m), where m is the total number of edges in the underlying complete bipartite graph between cities and facilities. We use our algorithm to improve recent
results for some variants of the problem, such as the fault tolerant and outlier versions. In addition, we introduce a new variant which can be seen as a special case of the concave cost version of
this problem.
- IN: 36TH STOC , 2004
Cited by 90 (22 self)
Several combinatorial optimization problems choose elements to minimize the total cost of constructing a feasible solution that satisfies requirements of clients. In the STEINER TREE problem, for
example, edges must be chosen to connect terminals (clients); in VERTEX COVER, vertices must be chosen to cover edges (clients); in FACILITY LOCATION, facilities must be chosen and demand vertices
(clients) connected to these chosen facilities. We consider a stochastic version of such a problem where the solution is constructed in two stages: Before the actual requirements materialize, we can
choose elements in a first stage. The actual requirements are then revealed, drawn from a pre-specified probability distribution π; thereupon, some more elements may be chosen to obtain a feasible
solution for the actual requirements. However, in this second (recourse) stage, choosing an element is costlier by a factor of σ> 1. The goal is to minimize the first stage cost plus the expected
second stage cost. We give a general yet simple technique to adapt approximation algorithms for several deterministic problems to their stochastic versions via the following method. • First stage:
Draw σ independent sets of clients from the distribution π and apply the approximation algorithm to construct a feasible solution for the union of these sets. • Second stage: Since the actual
requirements have now been revealed, augment the first-stage solution to be feasible for these requirements.
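The two-stage recipe above can be sketched schematically. Everything below (the function names, the toy "solver") is illustrative plumbing, not code from the paper:

```python
import random

def boosted_sampling(sigma, sample_clients, approx_solve, augment, actual_clients):
    """Two-stage stochastic sketch of the boosted-sampling recipe.

    sigma          -- cost-inflation factor of the second (recourse) stage
    sample_clients -- draws one client set from the distribution pi
    approx_solve   -- approximation algorithm for the deterministic problem
    augment        -- extends the first-stage solution to the real demands
    """
    # First stage: draw sigma independent client sets from pi and build a
    # feasible solution for their union.
    union = set()
    for _ in range(int(sigma)):
        union |= set(sample_clients())
    first_stage = approx_solve(union)
    # Second stage: the actual requirements are revealed; augment the
    # first-stage solution (at inflated cost) to cover them.
    return augment(first_stage, set(actual_clients))

# Toy instance: "solving" just means covering a set of client ids.
random.seed(0)
solution = boosted_sampling(
    sigma=2,
    sample_clients=lambda: random.sample(range(10), 3),
    approx_solve=lambda clients: set(clients),
    augment=lambda sol, actual: sol | actual,
    actual_clients=[1, 2, 3],
)
print(sorted(solution))
```

The paper's analysis concerns when this preserves the approximation guarantee; the skeleton only shows the control flow of the two stages.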
- In Proc. of 35th ACM Symposium on Theory of Computing (STOC , 2003
Cited by 70 (1 self)
We study clustering problems in the streaming model, where the goal is to cluster a set of points by making one pass (or a few passes) over the data using a small amount of storage space. Our main result is a randomized algorithm for the k-Median problem which produces a constant factor approximation in one pass using storage space O(k polylog n). This is a significant improvement of the previous best algorithm, which yielded a 2^{O(1/ε)} approximation using O(n^ε) space.
Cited by 53 (4 self)
We consider the online variant of facility location, in which demand points arrive one at a time and we must maintain a set of facilities to service these points. We provide a randomized online O(1)
-competitive algorithm in the case where points arrive in random order. If points are ordered adversarially, we show that no algorithm can be constant-competitive, and provide an O(log n)-competitive
algorithm. Our algorithms are randomized and the analysis depends heavily on the concept of expected waiting time. We also combine our techniques with those of Charikar and Guha to provide a
linear-time constant approximation for the offline facility location problem.
- in Proc. ACM Symposium on Principles of Distributed Computing (ACM PODC , 2004
Cited by 47 (2 self)
We analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach. We refer to this as the selfish caching problem. In our model, nodes incur either cost for
replicating resources or cost for access to a remote replica. We show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of
coordination. The price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds. With a payment scheme the game can always implement the social
optimum in the best case by giving servers incentive to replicate.
, 2002
Cited by 32 (2 self)
Clustering is a fundamental problem in unsupervised learning, and has been studied widely both as a problem of learning mixture models and as an optimization problem. In this paper, we study clustering with respect to the k-median objective function, a natural formulation of clustering in which we attempt to minimize the average distance to cluster centers. One of the main contributions of this paper is a simple but powerful sampling technique that we call successive sampling that could be of independent interest. We show that our sampling procedure can rapidly identify a small set of points (of size just O(k log n/k)) that summarize the input points for the purpose of clustering. Using successive sampling, we develop an algorithm for the k-median problem that runs in O(nk) time for a wide range of values of k and is guaranteed, with high probability, to return a solution with cost at most a constant factor times optimal. We also establish a lower bound of Ω(nk) on any randomized constant-factor approximation algorithm for the k-median problem that succeeds with even a negligible (say
- in APPROX , 2005
Cited by 32 (8 self)
Abstract. The field of stochastic optimization studies decision making under uncertainty, when only probabilistic information about the future is available. Finding approximate solutions to
well-studied optimization problems (such as Steiner tree, Vertex Cover, and Facility Location, to name but a few) presents new challenges when investigated in this framework, which has promoted much
research in approximation algorithms. There has been much interest in optimization problems in the setting of two-stage stochastic optimization with recourse, which can be paraphrased as follows: On
the first day (Monday), we know a probability distribution π from which client demands will be drawn on Tuesday, and are allowed to make preliminary investments (e.g., installing links, opening
facilities) towards meeting this future demand. On Tuesday, the actual requirements are revealed (drawn from the same distribution π) and we must purchase enough additional equipment to satisfy these
demands; however, these purchases are now made at an inflated cost. In a recent paper [8], we proposed the Boosted Sampling framework which converted an approximation algorithm A for an optimization
problem Π into one for the stochastic version of Π (provided A satisfied certain technical conditions). In this paper, we give two generalizations of this Boosted Sampling framework: Firstly, we show
that a natural extension of the framework works in a general k-stage setting, where information about the future is gradually revealed in several stages and we are allowed to take (increasingly
expensive) corrective actions in each stage. We use these to give approximation
Expected Value Question
April 12th 2011, 03:35 PM #1
Junior Member
Oct 2009
Expected Value Question
This problem was presented in a philosophy class. I don't have much prob/stat background, so I want to make sure I understand it correctly.
A game at a casino consists of the following:
□ You roll a die.
□ If it lands on a 6, then you win $60.
□ Otherwise, you lose $12.
What is the expected utility of agreeing to play this game?
I did some googling and found this resource on Expected Utility:
If there is a $p\%$ chance of $X$ and a $q\%$ chance of $Y$, then $EV = pX + qY$
Based on that Theorem, here is what I have done:
$X = \text{Gain } \$60$
$p = \frac{1}{6} \times 100$
$Y = \text{Lose } \$12$
$q = \frac{5}{6} \times 100$
Plugging these values into the formula, we have
$EV=\frac{1}{6} \times 100 \times 60 + \frac{5}{6} \times 100 \times -12$
$EV=1000 - 1000$
So, based on that formula the expected value would be zero. So you don't stand to gain anything by playing.
Is that correct?
You can make it simpler by saying
$\displaystyle EV=\frac{1}{6} \times 60 + \frac{5}{6} \times -12$
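For anyone who likes to double-check numerically, here is a short sketch (not part of the original thread) that computes the same expected value exactly and then sanity-checks it by simulation:

```python
from fractions import Fraction
import random

# Exact expected value: win $60 with probability 1/6, lose $12 otherwise.
ev = Fraction(1, 6) * 60 + Fraction(5, 6) * (-12)
print(ev)  # 0

# A quick Monte Carlo check of the same game.
random.seed(1)
trials = 100_000
total = sum(60 if random.randint(1, 6) == 6 else -12 for _ in range(trials))
print(total / trials)  # close to 0
```

Using `Fraction` avoids floating-point round-off, so the exact answer comes out as 0 rather than something like 1e-15.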
That is absolutely correct.
But I have a question: "Why is this a question in a philosophy class?"
No question that has a definite answer is a philosophical question.
Oh yes, now I quite well understand.
It is often said “Pascal could have been a great mathematician but religion got in the way”.
Others say “Pascal could have been a great theologian/philosopher but mathematics got in the way”.
April 12th 2011, 03:38 PM #2
April 12th 2011, 03:48 PM #3
April 12th 2011, 04:13 PM #4
Junior Member
Oct 2009
April 12th 2011, 04:22 PM #5
Gyromagnetic Ratio
Gluons have no magnetic moment, so the gyromagnetic ratio is zero.
The gyromagnetic ratio of quarks depends on the model of the quark, which is still open to speculation. For point-like, Dirac quarks the gyromagnetic ratio would equal 2, with the same type of small correction as for an electron or muon. If quarks have a different anomalous moment (as protons do), then the gyromagnetic ratio depends on the particular model.
Efficient and accurate solver of the three-dimensional screened and unscreened Poisson's equation with generic boundary conditions
FIG. 1.
Accuracy of the approximation of the function e^−x/x with 136 Gaussians used in the solution of the screened Poisson's equation for the case of free BC. The range of the independent variable x is [10^−9, 33]. We plot both the absolute and the relative error because the latter is a better indicator close to the origin (where the fitted function takes on very large values), while the former is a reliable signature of the goodness of the fit towards the opposite end. Note that at x = 33, the function e^−x/x is already smaller than the machine precision.
FIG. 2.
Accuracy of the approximation of the Green's function with 144 Gaussians as used in the solution of the screened Poisson's equation for the case of wire-like BC. The range of the independent variable
x is [10^−9, 30] for the function and [10^−9, 1] for log(x).
FIG. 3.
Accuracy test for the case of free/isolated boundary conditions in the absence of screening (m is the order of the ISF, h the grid spacing).
FIG. 4.
Accuracy test for the case of free/isolated boundary conditions in the presence of screening (m = 16 is the order of the ISF, h the grid spacing).
FIG. 5.
Influence of the (cubic) simulation box size on the accuracy of the Hartree energy. L stands for the box size, whereas L [ref.] stands for the box size for which ρ(r = L) = 2.21 × 10^−12bohr^−3.
FIG. 6.
Accuracy test for the case of surface-like boundary conditions in the absence of screening (m is the order of the ISF, h the grid spacing).
FIG. 7.
Accuracy test for the case of surface-like boundary conditions in the presence of screening (m = 16 is the order of the ISF, h the grid spacing).
FIG. 8.
Accuracy test for the case of wire-like boundary conditions in the absence of screening (m is the order of the ISF, h the grid spacing).
FIG. 9.
Accuracy test for the case of wire-like boundary conditions in the presence of screening (m = 16 is the order of the ISF, h the grid spacing).
FIG. 10.
Accuracy test for the case of wire-like boundary conditions (WBC) with monopolar charge density distribution (m is the order of the ISF, h the grid spacing).
FIG. 11.
Accuracy in the computation of the Hartree linear energy density—see Eq. (37)—in the case of wire-like boundary conditions (WBC) with monopolar charge density distribution (m is the order of the ISF,
h the grid spacing). In our setup, .
FIG. 12.
Gaussian density charge distribution (red, dashed mesh) as a function of the isolated directions (x, y in our notation) and the corresponding electrostatic potential (black, solid mesh) evaluated for
different values of the screening, namely upon increasing boundary thickness. The charge density is implicitly periodic along z (wire-like BC). The amplitude of the plotted density charge
distribution is multiplied by a factor 10 with respect to the actual value in order to improve the readability of the picture. Only half of the solution is drawn to highlight its profile.
FIG. 13.
Electrostatic potential generated by a pair of planar charge distributions of opposite sign (positive in red/solid; negative in black/dashed) modelling a planar capacitor unlimited in the periodic
directions. The piecewise planar behavior corresponds to the case with no screening (μ[0] = 0), whereas the other solutions are obtained with upon increasing the boundary thickness. The potential is
more and more localized around the capacitor's plates and falls rapidly to zero as μ[0] is increased. Each curve is normalised to one to improve readability.
FIG. 14.
Electrostatic potential generated by a cylindrical capacitor, periodic in the vertical direction. The different solutions correspond to upon increasing the boundary thickness. Each curve is
normalised to one for sake of readability. Only half of the solution is drawn so as to highlight the potential profile.
How Do You Know When to Stop Factorizing?
Date: 03/09/2010 at 16:32:51
From: Marianne
Subject: Factorize
How do you know when to stop factorizing?
For example,
6pq - 4qr - 8pr = 2(3pq - 2qr - 4pr)
Can 3pq - 2qr - 4pr still be factored?
10s^2t + 15st^2 + 5s^2 = 5s(2st + 3t^2 + s)
Can (2st + 3t^2 + s) still be factored?
Is there a rule? Thanks.
Date: 03/09/2010 at 23:20:30
From: Doctor Vogler
Subject: Re: Factorize
Hi Marianne,
Thanks for writing to Dr Math.
That's a good question. The short answer is that there is a way to tell
for certain if you can't factor it anymore, but in practice it's more
complicated than it's worth. Generally, the right answer is that you try
to factor it, and if there isn't an obvious way to factor it, then it
probably can't be factored any more.
If it's a polynomial in only one variable, then see the following for a
discussion of the answer to your question:
The proper technique for determining whether a polynomial in two or more
variables can be factored or not really belongs to the branch of
mathematics known as Algebraic Geometry. It's pretty advanced subject
matter. But if you know your basic calculus, then I can give you a
simple rule to determine whether a polynomial in two variables can be
factored into a product of two (or more) polynomials, each of degree at
least one. (That is, I'm not counting pulling out an integer factor in
common with the coefficients, as you did in your first example.)
Let's call your (polynomial) function f(x,y). For example, suppose you
want to check if this can be factored:
f(x,y) = 2xy + 3x^2 + y
If it can, then you would have
f(x,y) = g(x,y) * h(x,y),
where g and h are polynomials of degree at least one (that is, not
constants). It turns out that there will necessarily be at least one
complex solution (x,y) to the simultaneous equations
g(x,y) = 0
h(x,y) = 0
This is known from Bezout's Theorem. (Note that there might not be any
real-number solutions, but there will be at least one complex-number
solution.) Then it turns out that this solution will also be a root of
the two derivatives of f, and of f itself:
f_x(x,y) = g_x(x,y) * h(x,y) + g(x,y) * h_x(x,y)
f_y(x,y) = g_y(x,y) * h(x,y) + g(x,y) * h_y(x,y)
f(x,y) = g(x,y) * h(x,y)
But a typical function in two variables will not have any common roots
of its two derivatives. So if you check, and your function doesn't, then
it can't be factored. (The converse is not necessarily true, but it will
take more work to prove it can't be factored in this case.)
For example, the two derivatives of f(x,y) = 2xy + 3x^2 + y are
f_x(x,y) = 2y + 6x
f_y(x,y) = 2x + 1
If both of these are zero, then x = -1/2, and y = 3/2, but that's not
a solution to f(x,y) = 0, since f(-1/2,3/2) = 3/4.
Similarly, if this can be factored ...
3pq - 2qr - 4pr
... then so can this:
f(x,y) = 3xy - 2x - 4y
(Can you explain why? Why can't I do that to the other function and
reduce it to only one variable?)
In particular, this function has
f_x(x,y) = 3y - 2
f_y(x,y) = 3x - 4
If both of these are zero, then x = 4/3 and y = 2/3, which is not a
solution to f(x,y) = 0. So this can't be factored either.
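The derivative test can be mechanized. The helper below is a hypothetical illustration (not from the original answer) that re-checks both examples with exact rational arithmetic; the critical points are the ones found by hand above from the two linear derivative equations:

```python
from fractions import Fraction

def check_factorable_hint(f, critical_point):
    """Return True if f vanishes at the common root of its derivatives.

    If it does NOT vanish there, f cannot be factored into two nonconstant
    polynomials; if it does vanish, more work is needed (the converse does
    not hold automatically, as noted in the text).
    """
    x, y = critical_point
    return f(x, y) == 0

# f(x,y) = 2xy + 3x^2 + y; f_x = 2y + 6x = 0 and f_y = 2x + 1 = 0
# give x = -1/2, y = 3/2.
f1 = lambda x, y: 2*x*y + 3*x**2 + y
print(check_factorable_hint(f1, (Fraction(-1, 2), Fraction(3, 2))))  # False
print(f1(Fraction(-1, 2), Fraction(3, 2)))  # 3/4, matching the text

# f(x,y) = 3xy - 2x - 4y; f_x = 3y - 2 = 0 and f_y = 3x - 4 = 0
# give x = 4/3, y = 2/3.
f2 = lambda x, y: 3*x*y - 2*x - 4*y
print(check_factorable_hint(f2, (Fraction(4, 3), Fraction(2, 3))))  # False
```

`Fraction` keeps the arithmetic exact, so a value like 3/4 is genuinely nonzero rather than a floating-point artifact.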
If you have any questions about this or need more help, please write
back and show me what you have been able to do, and I will try to offer
further suggestions.
- Doctor Vogler, The Math Forum
http://mathforum.org/dr.math/
Results 1 - 10 of 22
- INFORMATION SCIENCES , 1998
Cited by 16 (3 self)
The paper discusses the problem of modelling linguistic uncertainty, which is the uncertainty produced by statements in natural language. For example, the vague statement `Mary is young' produces
uncertainty about Mary's age. We concentrate on simple affirmative statements of the type `subject is predicate', where the predicate satisfies a special condition called monotonicity. For this case,
we model linguistic uncertainty in terms of upper probabilities, which are given a behavioural interpretation as betting rates. Possibility measures and probability measures are special types of
upper probability measure. We evaluate Zadeh's suggestion that possibility measures should be used to model linguistic uncertainty and the Bayesian claim that probability measures should be used. Our
main conclusion is that, when the predicate is monotonic, possibility measures are appropriate models for linguistic uncertainty. We also discuss several assessment strategies for constructing a
numerical model.
- Proceedings of the Ninth International Conference on Principles of Knowledge Representation and Reasoning (KR
Cited by 12 (1 self)
There are many examples in the literature that suggest that indistinguishability is intransitive, despite the fact that the indistinguishability relation is typically taken to be an equivalence
relation (and thus transitive). It is shown that if the uncertainty perception and the question of when an agent reports that two things are indistinguishable are both carefully modeled, the problems
disappear, and indistinguishability can indeed be taken to be an equivalence relation. Moreover, this model also suggests a logic of vagueness that seems to solve many of the problems related to
vagueness discussed in the philosophical literature. In particular, it is shown here how the logic can handle the sorites paradox. 1
- Special Session on Intuitionistic Fuzzy Sets and Related Concepts, of International EUSFLAT Conference , 2003
Cited by 11 (9 self)
In this paper one generalizes the intuitionistic fuzzy logic (IFL) and other logics to neutrosophic logic (NL). The differences between IFL and NL (and the corresponding intuitionistic fuzzy set and neutrosophic set) are: a) Neutrosophic Logic can distinguish between absolute truth (truth in all possible worlds, according to Leibniz) and relative truth (truth in at least one world), because NL(absolute truth) = 1⁺ while NL(relative truth) = 1. This has application in philosophy (see the neutrosophy). That's why the unitary standard interval [0, 1] used in IFL has been extended to the unitary non-standard interval ]⁻0, 1⁺[ in NL. Similar distinctions for absolute or relative falsehood, and absolute or relative indeterminacy are allowed in NL. b) In NL there is no restriction on T, I, F other than that they are subsets of ]⁻0, 1⁺[, thus:
- Transactions of the American Mathematical Society , 1998
Cited by 8 (4 self)
Abstract. There is a common perception by which small numbers are considered more concrete and large numbers more abstract. A mathematical formalization of this idea was introduced by Parikh (1971)
through an inconsistent theory of feasible numbers in which addition and multiplication are as usual but for which some very large number is defined to be not feasible. Parikh shows that sufficiently
short proofs in this theory can only prove true statements of arithmetic. We pursue these topics in light of logical flow graphs of proofs (Buss, 1991) and show that Parikh’s lower bound for concrete
consistency reflects the presence of cycles in the logical graphs of short proofs of feasibility of large numbers. We discuss two concrete constructions which show the bound to be optimal and bring
out the dynamical aspect of formal proofs. For this paper the concept of feasible numbers has two roles, as an idea with its own life and as a vehicle for exploring general principles on the dynamics
and geometry of proofs. Cycles can be seen as a measure of how complicated a proof can be. We prove that short proofs must have cycles. 1.
Cited by 5 (0 self)
An expression is vague, if its meaning is not precise. For vagueness at the sentence-level this means that a vague sentence does not give rise to precise truth conditions. This is a problem for the
standard theory of meaning within linguistics, because this theory presupposes that each sentence has a precise
- Australasian Journal of Philosophy , 2005
Cited by 5 (0 self)
This paper presents and defends a definition of vagueness, compares it favourably with alternative definitions, and draws out some consequences of accepting this definition for the project of
offering a substantive theory of vagueness. The definition is roughly this: a predicate ‘F ’ is vague just in case for any objects a and b, ifa and b are very close in respects relevant to the
possession of F, then ‘Fa ’ and ‘Fb ’ are very close in respect of truth. The definition is extended to cover vagueness of many-place predicates, of properties and relations, and of objects. Some of
the most important advantages of the definition are that it captures the intuitions which motivate the thought that vague predicates are tolerant, without leading to contradiction, and that it yields
a clear understanding of the relationships between higher-order vagueness, sorites susceptibility, blurred boundaries, and borderline cases. The most notable consequence of the definition is that the
correct theory of vagueness must countenance degrees of truth. In this paper I present and defend a definition of vagueness, and draw out
It is natural to assume that the explicit comparative – John is taller than Mary – can be true in cases the implicit comparative – John is tall compared to Mary – is not. This is sometimes seen as a
threat to comparison-class based analyses of the comparative. In this paper it is claimed that the distinction between explicit and implicit comparatives corresponds to the difference between
(strict) weak orders and semi-orders, and that both can be characterized naturally in terms of constraints on the behavior of predicates among different comparison classes.
, 2007
This work investigates children’s early semantic representations of gradable adjectives (GAs) and proposes that infants perform a probabilistic analysis of the input to learn about abstract
differences within this category. I first demonstrate that children as young as age three distinguish between relative (e.g., big, long), maximum standard absolute (e.g., full, straight), and minimum
standard absolute (e.g., spotted, bumpy) GAs in the way that the standard of comparison is set and how it interacts with the discourse context. I then ask if adverbs enable infants to learn these
differences. In a corpus analysis, I demonstrate that statistically significant patterns of adverbial modification are available to the language learner: restricted adverbs (e.g., completely) are
more likely than non-restricted adverbs (e.g., very) to select for maximal GAs with bounded scales. Non-maximal GAs, which are more likely to be modified by adverbs in general, are more likely to be
modified by a narrower range, predominantly composed of intensifiers (e.g., very). I then ask if language learners recruit this information when learning new adjectives. In a word learning task
employing the preferential looking paradigm, I demonstrate that 30-month-olds use adverbial modifiers they are not necessarily producing to assign an interpretation to novel adjectives. Adjectives
modified by completely are assigned an
- Language and Cognitive Processes , 2004
"... case study. ..."
- In Proceedings of the Symposium on Logical Foundations of Computer Science: Logic at St. Petersburg, Lecture notes in Computer Science , 1994
In order to define models of simply typed functional programming languages being closer to the operational semantics of these languages, the notions of sequentiality, stability and seriality were
introduced. These works originated from the definability problem for PCF, posed in [Sco72], and the full abstraction problem for PCF, raised in [Plo77]. The presented computation model, forming the
class of hereditarily sequential functionals, is based on a game in which each play describes the interaction between a functional and its arguments during a computation. This approach is influenced
by the work of Kleene [Kle78], Gandy [Gan67], Kahn and Plotkin [KP78], Berry and Curien [BC82, Cur86, Cur92], and Cartwright and Felleisen [CF92]. We characterize the computable elements in this
model in two different ways: (a) by recursiveness requirements for the game, and (b) as definability with the schemata (S1)-- (S8), (S11), which is related to definability in PCF. It turns out that
both definitio...
, 1996
Modularity reflects the Frege Principle: any two expressions expr 1 and expr 2 which have the same meaning (semantics) can be replaced by each other in every appropriate context C[ ] without changing
the meaning of the overall expression. In [18] we identified observable relations and nets of observable relations as appropriate tools for the investigation of dataflow networks over
nondeterministic agents. The observable relations are the Input-Output behaviors of (in general nondeterministic) dataflow agents. Moreover, the semantics of nets of observable relations is
consistent with the input-output behavior of dataflow agents. In [18, 19] we showed that the main source of the Brock-Ackerman anomaly [2] is in the semantics of nets of relations. But it turns out
that this semantics is not modular. The central objective of this paper is the characterization of modular classes of relations and hence indirectly the set of dataflow nets without anomalies.
Another major theme which plays a ...
"... λ-theories: some investigations by ..."
I don't think I've met a single person whose system of ethics regarding "intellectual property" was internally consistent
Why is it that no one seems to understand how share ratios work on BitTorrent? When yours goes up, mine goes down. When yours goes down, mine goes up. Weight everyone's ratio by how much they downloaded and the average comes out to exactly 1. ALWAYS. SOME OF THE RATIOS WILL ALWAYS BE GREATER THAN ONE AND SOME OF THEM WILL ALWAYS BE LESS THAN ONE. IT'S NOT BECAUSE PEOPLE ARE EVIL LEECHING SCUMBAGS, IT'S JUST MATH.
Furthermore, share ratios don't even MATTER. It doesn't matter how much you downloaded, only how much you uploaded. Downloading doesn't affect anyone but you. Even if you always download and never
upload, it's not as though you're HOARDING THE DATA. It's not as though you have overdue library books. The books are still in the library, you have copies.
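The arithmetic is easy to check in a toy swarm (the peers and transfer sizes below are made up): every byte someone downloads is a byte someone else uploaded, so the totals always match, and the download-weighted average of the ratios is exactly 1.

```python
# Toy swarm: a fixed list of transfers (uploader, downloader, bytes).
transfers = [
    ("a", "b", 700), ("a", "c", 300), ("b", "c", 200),
    ("c", "d", 400), ("d", "a", 100), ("b", "a", 300),
]

up = {p: 0 for p in "abcd"}
down = {p: 0 for p in "abcd"}
for src, dst, n in transfers:
    up[src] += n      # every byte downloaded...
    down[dst] += n    # ...was uploaded by someone else

assert sum(up.values()) == sum(down.values())   # conservation: totals match

ratios = {p: up[p] / down[p] for p in up}
# The download-weighted average ratio is exactly 1, so whenever one peer's
# ratio is above 1, some other peer's must be below 1.
weighted_avg = sum(down[p] * ratios[p] for p in up) / sum(down.values())
assert abs(weighted_avg - 1.0) < 1e-9
```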
How many cups in half a pound?
In physics, mass (from Greek μᾶζα "barley cake, lump [of dough]") is a property of a physical system or body, giving rise to the phenomena of the body's resistance to being accelerated by a force
and the strength of its mutual gravitational attraction with other bodies. Instruments such as mass balances or scales use those phenomena to measure mass. The SI unit of mass is the kilogram
For everyday objects and energies well-described by Newtonian physics, mass has also been said to represent an amount of matter, but this view breaks down, for example, at very high speeds or for
subatomic particles. Holding true more generally, any body having mass has an equivalent amount of energy, and all forms of energy resist acceleration by a force and have gravitational
attraction; the term matter has no universally-agreed definition under this modern view.
Brassiere measurement (also called brassiere size, bra size or bust size) refers to determining what size of bra a woman wears and mass-producing bras that will fit most women. Bra sizes usually
consist of a number, indicating a band size around the woman's torso, and one or more letters indicating the breast cup size. Bra cup sizes were invented in 1932 and band sizes became popular in
the 1940s.
Bra sizes vary from one manufacturer to another, and from country to country. Women's bodies and breasts may not conform to the sizes offered by companies. As a result, some women have a
difficult time finding a properly fitted bra. Up to 80% of women wear the wrong size bra, causing 40% to 60% to experience pain of one kind or another.
The suit of cups is one of the four suits of Latin-suited playing cards. These are used in Spain ("Copas"), Italy ("Coppe") and in tarot. The suit of hearts is derived from the suit of cups.
These are sometimes referred to as chalices as well as cups.
In tarot, the element of cups is water, and the suit of cups pertains to situations and events of an emotional nature. As such, when the tarot is used in divination, many cups signify an
emotional issue or love situation, or some event that affects the querent emotionally. The watery astrological signs are Cancer, Scorpio and Pisces. Additionally, cups were the symbol of the
clergy in feudal times, and thus cup cards can also be interpreted as having to do with spiritual or religious matters.
United States customary units are a system of measurements commonly used in the United States. The U.S. customary system developed from English units which were in use in the British Empire before
American independence. Consequently most U.S. units are virtually identical to the British imperial units. However, the British system was overhauled in 1824, changing the definitions of some units
used there, so several differences exist between the two systems.
The majority of U.S. customary units were redefined in terms of the meter and the kilogram with the Mendenhall Order of 1893, and in practice, for many years before. These definitions were refined by
the international yard and pound agreement of 1959. The U.S. primarily uses customary units in its commercial activities, while science, medicine, government, and many sectors of industry use metric
units. The SI metric system, or International System of Units is preferred for many uses by NIST
The system of imperial units or the imperial system (also known as British Imperial) is the system of units first defined in the British Weights and Measures Act of 1824, which was later refined and
reduced. The system came into official use across the British Empire. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of
measurement, but some Imperial units are still used in the United Kingdom and Canada.
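As for the title question: a pound measures weight while a cup measures volume, so the answer depends on the density of whatever is being measured. A rough sketch (the grams-per-cup figures are approximate kitchen-reference values, assumptions rather than authoritative data):

```python
GRAMS_PER_POUND = 453.59237          # exact, per the 1959 yard-and-pound agreement
GRAMS_PER_CUP = {                    # approximate densities; real ingredients vary
    "water": 236.6,                  # 1 US cup is about 236.6 mL; water is ~1 g/mL
    "all-purpose flour": 120.0,
    "granulated sugar": 200.0,
}

# Cups in half a pound of each substance.
cups_per_half_pound = {
    item: (GRAMS_PER_POUND / 2) / grams for item, grams in GRAMS_PER_CUP.items()
}
# e.g. half a pound of water is just under one cup (roughly 0.96 cups).
```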
In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings
about interest, sympathy or motivation in the reader or viewer.
Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during wartime, an
interview with a survivor of a natural disaster, a random act of kindness or profile of someone known for a career achievement.
In algebraic topology, an open ball centered on the point p is a set of some points x in a metric space X, such that the distance d(x, p) is less than some constant r, the 'radius' of the ball. This is usually written
B[X](p, r)
and is equivalent to the set:
{x in X : d(x, p) < r}
(where d is the distance function of the metric space, telling us the distance between its two arguments in that space.)
The construction may be used in order to define an open set as a subset V of X where for any point in it there is an open ball with some positive radius r which is a subset of V.
That is
for all v in V, there exists r > 0 such that B[X](v, r) is a subset of V.
It also comes in handy for defining continuous functions between metric spaces. A function f mapping from X to Y, where both X and Y are metric spaces, is continuous at some point p only in the case that for every ε > 0 there is some δ > 0 such that for any member x of B[X](p, δ), f(x) is in B[Y](f(p), ε).
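Both definitions mechanize directly. A small sketch in Python (the Euclidean metric here is just one illustrative choice of d, not part of the original write-up):

```python
import math

def in_open_ball(x, p, r, d):
    """Membership test for the open ball B[X](p, r) under metric d.
    Open balls use strict inequality: points at distance exactly r are outside."""
    return d(x, p) < r

def euclidean(a, b):
    # math.dist computes the Euclidean distance between two points.
    return math.dist(a, b)

# (0.5, 0.5) lies inside the open unit ball about the origin;
# (1.0, 0.0) does not, because its distance is exactly 1.
assert in_open_ball((0.5, 0.5), (0.0, 0.0), 1.0, euclidean)
assert not in_open_ball((1.0, 0.0), (0.0, 0.0), 1.0, euclidean)
```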
Algebra Made Simple.
We’re building a 7th grade algebra course entirely online. Whether you’re looking for an intro to algebra or a quick refresher on how to understand algebra, our lessons can help you learn algebra 1
at your own pace. Packed with examples, our algebra study guides and lessons are a great way to teach yourself algebra. Start learning algebra online today!
[racket] Code Error Creating List
From: Todd Dobmeyer (dobmeyer.2 at wright.edu)
Date: Tue May 17 20:03:29 EDT 2011
I am taking a class at Wright State University and we are working on
problem 3.27 from the EOPL 2nd Edition. This problem has us modifying
the closure procedure to only bind the free variables in our environment
instead of storing all the variables. I am trying to write some code
that will create a list of all the variables in my expression. This will
then be passed to a difference procedure (that has been provided) to
determine which variables are free. Unfortunately, every time I run my
procedure, I receive the error: "procedure application: expected
procedure, given: x (no arguments)" What I should be returning is the
list '(x) as "x" is the only variable in my simple expression. The
string I am running is as follows:
(define str2 "let x = 15 y = 27
in let f = proc () x
in let x = 8
in (f)")
I then run the command:
(run str2)
After it works through its beginning work, it finally hits the proc-exp
where it calls closure. My closure function calls my free-vars procedure
but it apparently does not return a list like I think it should. I have
tried numerous ways and always get this same error. Any ideas on where I
am going wrong would be greatly appreciated. I'm sure there are errors
inside my logic, but if anybody can help me figure out why I am not
returning a list out of (free-vars) that would be great! My code looks like:
(define free-vars
(lambda (body)
(list (free-vars-list body))
(define free-vars-list
(lambda (body)
(if (list? body) (append (free-vars-list (car body))
(free-vars-list (cdr body)))
(cases expression body
;; When we have a var-exp, return the variable
(var-exp (id) id)
(primapp-exp (prim rands)
(append(free-vars-list (car rands)) (free-vars-list (cdr rands))))
(if-exp (test-exp true-exp false-exp)
(append (free-vars-list test-exp)
(free-vars-list true-exp) (free-vars-list false-exp)))
(begin-exp (exp1 exps) (append
(free-vars-list exp1) (free-vars-list exps)))
(let-exp (ids rands body) (append
(free-vars-list body)))
(proc-exp (ids body) (append (free-vars-list
(app-exp (rator rands) (append
(free-vars-list rator) (free-vars-list rands)))
(else '())
(define closure
(lambda (ids body env)
(let ((todd (free-vars body)))
(let ((freevars (difference todd ids)))
(let ((saved-env
(map (lambda (v) (apply-env env v)) freevars)
(lambda (args)
(eval-expression body (extend-env ids args env)))
My call stack looks like:
procedure application: expected procedure, given: x (no arguments)
C:\Users\Todd\Documents\CS784\Assign2\3-5-mod.scm: 255:4
(if (list? body) (append (free-vars-list (car body))
(free-vars-list (cdr body)))
(cases expression body
;; When we have a var-exp, return the variable
(var-exp (id) id)
(primapp-exp (prim rands)
(append(free-vars-list (car rands)) (free-vars-list (cdr rands))))
(if-exp (test-exp true-exp false-exp)
(append (free-vars-list test-exp)
(free-vars-list true-exp) (free-vars-list false-exp)))
(begin-exp (exp1 exps) (append
(free-vars-list exp1) (free-vars-list exps)))
(let-exp (ids rands body) (append
(free-vars-list body)))
(proc-exp (ids body) (append (free-vars-list
(app-exp (rator rands) (append
(free-vars-list rator) (free-vars-list rands)))
(else '())
(define closure
(lambda (ids body env)
C:\Users\Todd\Documents\CS784\Assign2\3-5-mod.scm: 247:4
(list (free-vars-list body))
(define free-vars-list
(lambda (body)
C:\Users\Todd\Documents\CS784\Assign2\3-5-mod.scm: 276:4
(let ((freevars (difference todd ids)))
(let ((saved-env
(map (lambda (v) (apply-env env v)) freevars)
(lambda (args)
(eval-expression body (extend-env ids args env)))
(run str2)
C:\Users\Todd\Documents\CS784\Assign2\3-5-mod.scm: 138:4
(map (lambda (x) (eval-rand x env)) rands)))
C:\Users\Todd\Documents\CS784\Assign2\3-5-mod.scm: 120:8
(let ((args (eval-rands rands env)))
(eval-expression body (extend-env ids args env))))
Thanks for all your help!
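One invariant worth checking: every branch of the collector must return a list, including the var-exp case, which above returns the bare identifier id rather than (list id), so the appends up the call chain receive a symbol where a list is expected. That invariant can be sketched in Python (the expression shapes below are simplified stand-ins for the EOPL datatypes, not the real ones):

```python
# Toy AST: ("var", name), ("if", test, then, else),
# ("proc", params, body), ("app", rator, rands).

def free_vars_list(expr):
    """Collect variable occurrences. Every branch returns a list,
    so callers can always concatenate the results."""
    kind = expr[0]
    if kind == "var":
        return [expr[1]]                      # a list, not a bare symbol
    if kind == "if":
        _, test, then, other = expr
        return free_vars_list(test) + free_vars_list(then) + free_vars_list(other)
    if kind == "proc":
        _, params, body = expr
        # Variables bound by the proc are not free in it.
        return [v for v in free_vars_list(body) if v not in params]
    if kind == "app":
        _, rator, rands = expr
        out = free_vars_list(rator)
        for r in rands:
            out += free_vars_list(r)
        return out
    return []

# proc () x  ->  x is free in the body
assert free_vars_list(("proc", [], ("var", "x"))) == ["x"]
```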
Posted on the users mailing list.
SHA[d]-256 Test Vectors
SHA[d]-256 (also written as SHA_d-256, SHA_d256, SHAd256, etc.) is an iterative hash function introduced by Niels Ferguson and Bruce Schneier in their book, Practical Cryptography. Like NMAC and
HMAC, SHA[d]-256 is designed to avoid length extensions that are possible with ordinary SHA-256 (and most other iterative hash functions). This page provides test vectors for SHA[d]-256, which are
missing from the book.
SHA[d]-256 is defined as follows:
SHA[d]-256(m) := SHA-256(SHA-256(m))
As of this writing, SHA[d]-256 has not received much peer review, so using it instead of HMAC-SHA-256 is not recommended. The main purpose of posting these test vectors is to aid implementation of
Fortuna (also introduced in Practical Cryptography), which uses SHA[d]-256.
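Since the construction is just two passes of SHA-256, it is short to implement, and the vectors in the table below make it easy to check; for example, with Python's hashlib:

```python
import hashlib

def sha_d_256(data: bytes) -> str:
    # SHA[d]-256(m) := SHA-256(SHA-256(m)); note that the inner hash's raw
    # digest bytes (not its hex string) are fed to the outer hash.
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

# Check against the EMPTY and NIST.1 vectors:
assert sha_d_256(b"") == (
    "5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456")
assert sha_d_256(b"abc") == (
    "4f8b42c22dd3729b519ba6f68d2da7cc5b2d606d05daed5ad5128cc03e6c6358")
```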
Download the test vectors: SHAd256_Test_Vectors.txt (1.2 MB US-ASCII plain text).
SHA256 sum: aa9001bb6ebab8902e19c522fe2dc079dadb5267529d1e4cada1cfd99b2c28a1
File format
Each line of the file that starts with a colon (':') is a test vector. Lines without a colon in the first column should be ignored.
After the colon, each test vector consists of several values separated by white-space. The values are, in order:
1. A string that identifies the test vector. Each test vector has its own identifier.
2. The length in octets of the input data.
3. A hexadecimal representation of the input data, the string "MILLION_a", or the string "RC4" (see below).
4. A hexadecimal representation of the ordinary SHA-256 hash of the input data.
5. A hexadecimal representation of the SHA_d-256 hash of the input data.
The following special cases are defined:
• If the input data field (field #3) is "MILLION_a", then the input data is "a" (ASCII 97) repeated 1,000,000 times.
• If the input data field (field #3) is "RC4", then the input data is the first N bytes of the RC4 key-stream, where the key is set to zero and N is the value of the length field (field #2).
Sample test vectors
Here is a small sample of the 7583 test vectors included in the file:
EMPTY (0 octets; empty string)
    SHA-256:    e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    SHA[d]-256: 5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456

NIST.1 (3 octets; "abc")
    SHA-256:    ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
    SHA[d]-256: 4f8b42c22dd3729b519ba6f68d2da7cc5b2d606d05daed5ad5128cc03e6c6358

NIST.3 (1000000 octets; "a" repeated 1,000,000 times)
    SHA-256:    cdc76e5c9914fb9281a1c7e284d73e67f1809a48a497200e046d39ccc7112cd0
    SHA[d]-256: 80d1189477563e1b5206b2749f1afe4807e5705e8bd77887a60187a712156688

RC4.16 (16 octets; first 16 bytes of the RC4 keystream, key = 0):
    de188941a3375d3a8a061e67576e926d
    SHA-256:    067c531269735ca7f541fdaca8f0dc76305d3cada140f89372a410fe5eff6e4d
    SHA[d]-256: 2182d3fe9882fd597d25daf6a85e3a574e5a9861dbc75c13ce3f47fe98572246

RC4.55 (55 octets; first 55 bytes of the RC4 keystream, key = 0):
    de188941a3375d3a8a061e67576e926dc71a7fa3f0cceb97452b4d3227965f9ea8cc75076d9fb9c5417aa5cb30fc22198b34982dbb629e
    SHA-256:    038051e9c324393bd1ca1978dd0952c2aa3742ca4f1bd5cd4611cea83892d382
    SHA[d]-256: 3b4666a5643de038930566a5930713e65d72888d3f51e20f9545329620485b03

RC4.2^36+128 (68719476864 octets; first 2^36+128 bytes of the RC4 keystream, key = 0)
    SHA-256:    02eaeaeba71b64a97cc41c83625e497e64d991e0966773131b143689e50bd87d
    SHA[d]-256: f84bef74588a23683db45304c4fa973b09a6045b46a0be5eb0b28c4dbb2a21be
What's the chain level Gromov-Witten theory
I think I heard there is such a theory, but I just can't find a reference. So I am asking whether such a theory really exists, and for a reference if it does. Thanks!
gromov-witten-theory ag.algebraic-geometry
HYYY -- this and some of your other questions are potentially quite interesting but I think some more details and/or background are in order (e.g. what kind of answer do you expect in this particular question?) -1; to be reverted if more details are given. – algori Jun 30 '10 at 3:01
Hi, algori, thank you! I have edited it now. I think I just need a reference (or better, comments). – HYYY Jun 30 '10 at 3:10
HYYY -- I'm sorry but it seems to me that your question has become a bit longer but still does not contain any extra information with respect to the original version. You are looking for a reference to the chain level GW theory, but you do not tell us what you expect it to be or what properties you would like it to have. I think I have a guess as to what it may be and I'd be interested to know if my guess is correct but it's your question so it's for you to set up the rules of the game i.e. to state what requirements something has to fulfill to be called a chain level GW theory. – algori Jun 30 '10 at 3:38
I think a chain-level GW theory should be something like a cohomological field theory (CohFT - as defined by Kontsevich-Manin), except with (co)chains C^* everywhere instead of cohomology H^*. I don't know if this is written up anywhere. My guess is no. – Kevin H. Lin Jun 30 '10 at 3:50
I think Kevin Lin is correct and as far as I know, this is still a conjecture. Costello says this in his articles on Calabi-Yau $A_\infty$-categories and on the Gromov-Witten potential, but these
articles were written a few years ago, so something may have happened in the mean time. – skupers Jun 30 '10 at 5:54
A theory of the type you might be talking about is stated in Sullivan's "Theorem 4" in "String topology and sigma models", http://books.google.com/books?hl=en&lr=&id=yuGic0WClQ4C&oi=fnd&pg=PA1&dq=string+topology+and+sigma+models+sullivan&ots=2Oympt_Z4g&sig=W4GOnI6Cn-7IjsJEH7x9c8-qnnY#v=onepage&q=string%20topology%20and%20sigma%20models%20sullivan&f=false. Whether it is a Theorem is somewhat unclear to me.
Cylindrical Gear Conversions: AGMA to ISO
The rational use of the addendum modification coefficient is one of the topics best known by gear specialists working in the metric (ISO) system. This non-dimensional parameter, also known as the rack shift coefficient x, allows gear designs with very good adaptability of the tooth profile in practical applications.
Unfortunately, gear specialists whose designs are based on the AGMA system traditionally do not appreciate the advantages of the addendum modification coefficient as an adjustment factor between the ISO and AGMA systems. Publications by specialists [McVittie (1993), Rockwell (2001)] that present the general geometry calculation of gears using this coefficient are therefore welcome.
The Working Group WG5, under the direction of Henry Deby, approved in 1981 an ISO Technical Report (ISO/TR 4467) with guidance on the limiting values and distribution of the addendum modification of the teeth of external parallel-axis cylindrical involute gears, for speed-reducing and speed-increasing external gear pairs. Although the recommendations were not restrictive in nature, a general guide was finally published on the application of the addendum modification coefficient. These recommendations were important in order to avoid incorrect use of this coefficient and to prevent gear designs with very small tooth crest width, insufficient transverse contact ratio, or cutter interference. Unquestionably, ISO/TR 4467 showed the advantages of using the addendum modification coefficient in gear design and gave reasonable guide values of this coefficient for the geometric calculation of gears with more load capacity and better performance.
This article presents some basic definitions and recommendations for the correct application of the addendum modification coefficient. Additionally, formulae for external parallel-axis cylindrical involute gears that account for the effect of the addendum modification coefficient on gear geometry are discussed. The solution procedures for two cases, based on the authors' experience in the analysis, recovery, and conversion of helical and spur gears from the AGMA system to ISO standards, show the advantage of applying the addendum modification coefficient to practical problems.
Tooth Profile Reference with No Addendum Modification on Cylindrical Gears
It is well known that external parallel-axis cylindrical gears with an involute profile, when correctly meshed, conjugate with a corresponding rack. The profile of this rack is called the basic rack tooth profile. The shape and geometrical parameters of the basic rack tooth profile for involute gears are set by standards (see Table 2), as are those of the rack-shaped tools, such as hobs or rack-type cutters, used in cutting gears by generation methods (see Figure 1).
One important definition in the basic rack tooth profile is the datum line, a straight line drawn parallel to the tip and root lines where the tooth thickness "s" is equal to the tooth space width "e." Most of the dimensions of the basic rack tooth profile are given with reference to the datum line, which divides the basic rack tooth profile into two parts: addendum and dedendum. Figure 2 shows a typical shape of a basic rack tooth profile and the principal geometrical parameters used to establish the basic dimensions (see Table 2 for some standard values of basic rack tooth profile parameters).
The usable flanks in the basic rack tooth profile are inclined at the profile angle α to a line normal to the datum line. This angle is the same as the pressure angle α at the reference cylinder of a spur gear, or the normal pressure angle αn at the reference cylinder of a helical gear. The relation between the pitch of the basic rack tooth profile "p" and the module "m" (in mm) is: p = π · m
The dimensions of the standard basic rack tooth profile are given in relation to the module and identified by *, such as ha* (addendum factor), hf* (dedendum factor), c* (radial clearance factor), and rf* (root fillet radius factor). The module m of the standard basic rack tooth profile is the module in the normal section mn of the gear teeth. For a helical gear with helix angle β on the reference cylinder, the transverse module in a transverse section is mt = mn/cos β. For a spur gear (β = 0) the module is m = mn = mt. As a reference parameter, the module m in ISO standards is similar to the normal diametral pitch Pnd in AGMA standards, and the two can be converted by the following equation: mn = 25.4/Pnd (with mn in mm and Pnd in 1/inch)
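These module relations are one-liners in code. A small sketch in Python (the numeric inputs are illustrative values, not from the article):

```python
import math

def normal_module_from_pnd(p_nd: float) -> float:
    """ISO normal module m_n (mm) from AGMA normal diametral pitch P_nd (1/inch)."""
    return 25.4 / p_nd

def transverse_module(m_n: float, beta_deg: float) -> float:
    """Transverse module m_t = m_n / cos(beta) for helix angle beta (degrees)."""
    return m_n / math.cos(math.radians(beta_deg))

m_n = normal_module_from_pnd(10.0)   # P_nd = 10 1/in gives m_n = 2.54 mm
p = math.pi * m_n                    # pitch of the basic rack: p = pi * m
m_t = transverse_module(m_n, 20.0)   # helical gear with beta = 20 deg; m_t > m_n
```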
Most special standards for the basic rack tooth profile, including ISO Standard 57-74, recommend for industrial gears with general application values of approximately ha* = 1.0, hf* = 1.25, c* = 0.25, and a profile angle α = 20°.
The principal methods for generating the tooth flanks of external parallel-axis cylindrical involute gears make use of a rack-shaped tool (such as hobs or rack-type cutters) with dimensions similar to the basic rack tooth profile shown in Figure 1. When gears are produced by a generating process, in the case of a tooth profile without addendum modification the datum line (MM) of the basic rack tooth profile is tangent to the reference diameter of the gear (see Figure 3).
Addendum Modification on Cylindrical Gears
The use of the tooth profile without addendum modification in the first standards for the geometric calculation of cylindrical gears is understandable: when gears are generated with a rack-shaped tool and the reference cylinder of the gear rolls without slipping on the imaginary datum line, the standard relationships that set the tooth shape and the geometric calculation are relatively simple.
For specialists involved with gear design based on ISO standards, it is important to note that the datum line of the basic rack profile need not be tangent to the reference diameter of the gear.
The tooth profile and its shape can be modified by shifting the datum line from the tangential position. This displacement makes it possible to use tooth flanks with other parts of the involute curve
and different diameters for gears with the same number of teeth, module, helix angle, and cutting tools.
Put simply: when a cylindrical gear is generated with addendum modification, at the conclusion of tooth-flank generation the reference cylinder is not tangent to the datum line of the basic rack tooth profile; there is a radial displacement between them.
The main parameter used to evaluate the addendum modification is the addendum modification coefficient x, also known to Americans as the "profile shift factor" or the "rack shift coefficient." The addendum modification coefficient quantifies the ratio between the distance from the datum line of the tool to the reference diameter of the gear (the radial displacement of the tool, Dabs) and the module m. This coefficient is defined for the pinion as x1 and for the gear as x2:
The addendum modification coefficient is positive if the datum line of the tool is displaced from the reference diameter toward the crest of the teeth (the tool goes away from the center of the
gear), and it is negative if the datum line is displaced toward the root of the teeth (the tool goes toward the center of gear). To consider the effect of addendum modifications for a gear pair it is
a good practice to define the sum of addendum modification coefficients as:
Manufacturing a gear with addendum modification is no more complex or expensive than manufacturing one without profile shift, because the gears are cut on the same machines; the result depends solely on the relative position of the gear blank and the cutter. The difference is evident only in blanks with different diameters and in the tooth profiles of the finished gears.
A positive addendum modification (x > 0) results in a greater tooth root width and thus an increase in tooth root carrying capacity; this effect is more pronounced for small numbers of teeth than for large ones, and it includes the possibility of avoiding or reducing undercutting on the pinion. At the same time, an increase in the addendum modification coefficient results in a decrease in the tip thickness of the teeth. A negative addendum modification (x < 0) has the reverse effect on tip and root thickness.
The choice of the sum of addendum modifications is somewhat arbitrary and depends on the center distance or the operating conditions. Sums that are too high (for positive values) or too low (for negative values) may be harmful to the satisfactory performance of the gear pair. For this reason, upper and lower limits are specified as a function of the number of teeth (or the virtual number of teeth for helical gears). Figure 5 gives recommended limits for the sum of addendum modification coefficients. For a sum of addendum modification coefficients Σx with a correct distribution of values between pinion and gear, it is possible to realize favorable effects: balancing bending fatigue life and pitting resistance, or minimizing the risk of scuffing.
For a given center distance and gear ratio, the assembly of the gears can be adjusted with a suitable calculation of the addendum modifications. There are different criteria for distributing and applying the addendum modification coefficient, depending on the effect required in the gear transmission:
• Addendum modification coefficient for gears with a small number of teeth, to avoid undercutting on the pinion.
• Value of addendum modification coefficient inversely proportional to the number of teeth.
• Value of addendum modification coefficient according to MAAG:
The basic geometry of external parallel-axis cylindrical involute gears, taking into account the addendum modification coefficients, can be calculated with the formulas given in Table 3.
Sample Practical Cases
To illustrate the application of the addendum modification coefficient in the solution of practical problems in the adjusting gears from the AGMA system to ISO standards, two samples of practical
cases are shown. These cases cover the most frequent geometrical problems faced by gear experts during the conversion to the ISO Standard of helical and spur gears manufactured with AGMA standards.
Cases with spur gears are more difficult to adjust because the helix angle is fixed at zero and can't be modified. Other cases can be handled by a similar approach.
Case 1: Setting gears from AGMA to ISO with determination of the sum of addendum modification coefficients for a given center distance and gear ratio.
• Statement of the problem—In the gearbox of the power transmission system of a heavy truck it was necessary to recover a spur gear transmission originally cut with tools based on AGMA standards. For the new manufacture, cutting tools based on ISO standards were to be used. The basic solution is as follows:
• Initial data
Center distance, aw = 203,2 mm (8 inch)
Gear ratio (approximate), u = 4,13
Normal diametral pitch (original), Pd = 4
Pressure angle for the rack-shaped tool, α = 20°
• Basic solution
a) Proposal of module (ISO).
b) Transverse pressure angle:
c) Number of teeth for pinion and gear.
Rounding down is preferable.
d) Pressure angle at the pitch cylinder.
e) Sum of addendum modification coefficients.
f) Addendum modification coefficients for pinion and gear (several criteria can be used to distribute the coefficients, but always keep the calculated sum).
g) Other geometrical parameters.
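Steps (d) and (e) above can be sketched in Python using the standard involute-gear relations. The tooth numbers z1 = 13 and z2 = 54 below are assumed illustrative values consistent with the stated center distance and ratio, not the article's exact results:

```python
from math import acos, cos, radians, tan

def involute(alpha):
    # Involute function inv(a) = tan(a) - a, with a in radians.
    return tan(alpha) - alpha

# Case 1 data (spur gear, so transverse values equal normal values).
aw = 203.2            # center distance, mm
m = 6.0               # proposed ISO module, mm
alpha = radians(20)   # pressure angle of the rack-shaped tool
z1, z2 = 13, 54       # assumed tooth numbers (illustrative)

# d) pressure angle at the pitch cylinder (working pressure angle)
alpha_w = acos(m * (z1 + z2) * cos(alpha) / (2 * aw))

# e) sum of addendum modification coefficients
sum_x = (z1 + z2) * (involute(alpha_w) - involute(alpha)) / (2 * tan(alpha))
```

Because the operating center distance is larger than the reference center distance here, the working pressure angle exceeds 20° and the resulting sum of coefficients is positive.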
Case 2: Setting gears from AGMA to ISO with determination of the sum of addendum modification coefficients for a given center distance, keeping the same number of teeth and gear generation using a
cutting tool with a smaller pressure angle than the original.
• Statement of the problem—In the final drive of earth-moving equipment it was necessary to manufacture a pair of spur gears originally made under AGMA standards. Gear generation was to be done with a cutting tool based on the ISO system and a pressure angle of 20°. The operator of the hobbing machine proceeded to cut the gears and observed undercutting on the pinion that was not present in the original. Study of the original pinion established that it had been generated with a cutting tool with a 25° pressure angle. The challenge in the new design required engineering calculations for a cutting tool with a 20° pressure angle that would guarantee the original center distance and number of teeth without undercutting on the pinion. The basic solution is as follows:
• Initial data
Center distance, aw = 11 inch
(279,4 mm)
Number of teeth on pinion, z1 = 14
Number of teeth on gear, z2 = 41
Module in new design of gear, m = 10
Normal diametral pitch in original gear, Pd = 2,5
Pressure angle for cutting tool (original), α = 25°
Pressure angle for cutting tool (new), α = 20°
Factor of addendum, ha* = 1
• Basic solution
a) Proposal of module (ISO).
Standard module by ISO 54-77, m = 10 (mm)
b) Transverse pressure angle:
c) Pressure angle at the pitch cylinder.
d) Sum of addendum modification coefficients.
e) Addendum modification coefficient for pinion to avoid undercutting
f) Addendum modification coefficients for gear.
g) Other geometrical parameters.
Tip diameters: da1 = 168,8 mm ; da2 = 429,56 mm
Root diameters: df1 = 124,24 mm ; df2 = 385,0 mm
Based on experience in the analysis, recovery, and conversion of helical and spur gears from the AGMA system to ISO standards, the main definitions and recommendations associated with the addendum modification coefficient have been presented. The examples applying the addendum modification coefficient to practical problems of gear geometry and manufacture show its use as an adjustment factor between the ISO and AGMA systems. Moreover, the calculations used in the two practical cases, and the basic gear geometry formulae given in Table 3, can be taken as a basis for solving similar cases of geometrical conversion from the AGMA to the ISO system.
McVittie, D. "The European Rack Shift Coefficient 'X' for Americans." Gear Technology, Jul-Aug 1993, pp. 34-36.
Rockwell, P. D. "Profile Shift in External Parallel-Axis Cylindrical Involute Gears." Gear Technology, Nov-Dec 2001, pp. 18-25.
Technical Report ISO/TR 4467-1982. Addendum modification of the teeth of cylindrical gears for speed-reducing and speed-increasing gear pairs. ISO, 1982.
ISO 53:1998. Cylindrical gears for general and heavy engineering — Standard basic rack tooth profile.
González Rey, G. Apuntes para el cálculo de engranajes cilíndricos según Normas ISO del Comité Técnico 60. EUP de Zamora, Universidad de Salamanca, 2001.
Gabriel Weintraub
Oblivious Equilibrium for Concentrated Industries
Coauthor(s): Przemyslaw Jeziorski.
In a recent paper, Weintraub, Benkard, and Van Roy (2008) propose a method for analyzing Ericson and Pakes (1995)-style dynamic models of imperfect competition with many firms. In that paper, we defined a notion of equilibrium, oblivious equilibrium (henceforth, OE), in which each firm is assumed to make decisions based only on its own state and knowledge of the long-run industry state, but firms ignore current information about competitors' states. The great advantage of OE is that its computation is independent of the number of firms in the industry, so it is much easier to compute than Markov perfect equilibria (henceforth, MPE). Moreover, we showed that OE provides meaningful approximations of the long-run Markov perfect dynamics of an industry with many firms if, alongside some technical requirements, the equilibrium distribution of firm states obeys a light-tail condition.
In Weintraub, Benkard, and Van Roy (2009) we showed that OE often yields good approximations of MPE behavior for industries with the characteristics of those studied by empirical researchers. However, for some industries it seems likely that oblivious strategies would be too simple to describe actual behavior. In very concentrated markets in particular, one might expect the leading firms' strategies to depend on the state variables of the other leading firms.
In this paper we introduce an important extension of OE designed to address such cases. We develop an extended notion of oblivious equilibrium, which we call partially oblivious equilibrium (POE), that allows for a set of "dominant firms" whose states are always monitored by every other firm in the market. Thus, for example, if there are two dominant firms, then in POE each firm's strategy will be a function of its own state and also the states of both dominant firms. Such strategies allow for richer strategic interactions than oblivious strategies do, and our hope is that POE will provide a better model of more concentrated industries.
Source: Working Paper
Exact Citation:
Weintraub, Gabriel, and Przemyslaw Jeziorski. "Oblivious Equilibrium for Concentrated Industries." Working Paper, Columbia Business School, February 2011.
Date: 2 2011
Here's the question you clicked on:
If h(x) = 3x − 1 and j(x) = −2x, find h[j(2)].
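To evaluate the composition, work from the inside out: a quick check in Python gives j(2) = −4 and then h(−4) = −13.

```python
def h(x):
    return 3 * x - 1

def j(x):
    return -2 * x

# Evaluate the inner function first, then the outer one.
result = h(j(2))  # j(2) = -4, so h(-4) = 3*(-4) - 1 = -13
```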
Q: There is a piece of paper with 100 sentences written on it:
"1. Exactly one sentence on this page is false.
2. Exactly two sentences on this page are false.
3. Exactly three sentences on this page are false.
4. Exactly four sentences on this page are false.
5. Exactly five sentences on this page are false.
...
99. Exactly 99 sentences on this page are false.
100. Exactly 100 sentences on this page are false."
How many sentences are false and how many are true? What if we replace "exactly" by "at least"?
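The puzzle can be checked by brute force. Since each sentence's truth value depends only on the total number of false sentences f, it suffices to test every f from 0 to 100. The sketch below confirms that with "exactly" the only consistent count is f = 99 (sentence 99 alone is true), and with "at least" it is f = 50 (sentences 1-50 are true).

```python
def consistent_false_counts(at_least=False):
    """Return every false-sentence count f that is self-consistent.

    Sentence i (1..100) says 'exactly i are false', or 'at least i are
    false' when at_least is True. Given a hypothesized f, each sentence's
    truth value is forced, so we just check the implied count matches f.
    """
    solutions = []
    for f in range(101):
        truths = [(f >= i) if at_least else (f == i) for i in range(1, 101)]
        if truths.count(False) == f:
            solutions.append(f)
    return solutions
```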
Copyright © University of Cambridge. All rights reserved.
Thank you to those of you who sent solutions to this problem. Unfortunately, though, not many of you explained how you knew the order in which to put the quantities.
Katie from Hymers College Junior School took us through the first set of quantities:
First of all I am going to tackle the temperature question.
First you look at what to rank
*Of a kettle of boiling water
*Of the centre of the sun
*On a thermometer when you are quite well
*Of the water in a school pond
Then you think about the ones that you know the temperature.
So it is clear that the sun is the hottest. So now tick that off the list.
Then you think, well our school pond is quite deep and therefore quite cold. So compared to the rest of the list the pond is the coldest so now you can check that off the list.
Now you are left with the two middle answers. Everybody knows that water boils at $100^\circ$ and your body temperature is around $38^\circ$. From this research you now know that the kettle of
boiling water is the second hottest. Now you can tick that off the list.
Finally you are left with the last one to tick off the list, the temperature of the thermometer when you are quite well.
So now they are all ticked off the list you can put them into their final order which is:
1. Of the centre of the sun.
2. Of a kettle of boiling water.
3. On a thermometer when you are quite well.
4. Of the school pond.
Katie then ordered the speed, time and sound quantities:
1. Of a rocket going up on bonfire night
2. Of a train
3. Of a ball being thrown to your friend
4. Of a ladybird walking along a leaf
1. Taken for frogspawn to grow into a frog
2. Taken for the moon to orbit the earth
3. Taken for a puddle of water to evaporate on a hot day
4. Taken to walk across the playground
1. Of a clap of thunder
2. Of a teacher blowing a whistle
3. Of a tap running
4. Of a recorder being played by a friend
The last one and the whistle depend on how hard they blow. For number 4, the friend is a controlled recorder player and for number 2, the teacher is having a bad day so is very loud.
Thank you, Katie. I like the way you've given us more information about the sound quantities. Someone from Ricards Lodge who didn't give their name, gave a different answer to the sound part of the
For the sound category the loudest sound is a clap of thunder, then it's a teacher blowing a whistle, next it's your friend playing a recorder, and finally the quietest sound is a tap running.
To find this out I knew thunder would be the loudest.
Then I tested whether a recorder was louder than a tap, and it was, so then I tested whether a recorder was louder than a whistle, but it wasn't and that is how I got my order. Your results may be
different depending on your equipment. But these are the results I got.
It's interesting that there is a differing opinion about these sound quantities. However, it is great that Katie and the pupil from Ricards Lodge have both explained their own reasoning. Well done!
I wonder what you think?
MCCS Professional Development - MRESA III
04/05/2013 - 04/08/2013
Submitted: Friday, March 22, 2013
By Sheri Harlow
MRESA III – 6-12 August 5th and 6th in Billings http://www.msubillings.edu/smart/MRESA3pd.htm
MRESA III – K-5 August 7th and 8th in Billings http://www.msubillings.edu/smart/MRESA3pd.htm
Montana Council of Teachers of Mathematics (MCTM) Professional Development Academy, Montana Common Core Standards (MCCS): Mathematical Practices
§ Value the MCCS Mathematical Practices and identify and nurture them in student work and discourse
§ Recognize that implementing the MCCS Mathematical Practices in your classroom is important and achievable
§ Use rich and engaging tasks to integrate the MCCS Mathematical Practices into your daily instruction
MATH S118 ALL Honors Finite Mathematics
Mathematics | Honors Finite Mathematics
S118 | ALL | Tba
Honors Finite Mathematics (3 cr.) N&M P: Mastery of two years of high school algebra. Designed for students of outstanding ability in mathematics. Covers all material of M118 and additional topics from statistics and game theory. Computers may be used in this course but no previous experience is assumed. Credit given for only one of S118, M118, A118, or the sequence D116-D117. I Sem.
What is the next number in the pattern 1 8 27 64 125?
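A quick way to see the pattern: each term is a perfect cube, n³ for n = 1 through 5, so the next number is 6³ = 216. A minimal check:

```python
sequence = [1, 8, 27, 64, 125]

# Each term is n cubed for n = 1..5, so the next term is 6 cubed.
cubes = [n ** 3 for n in range(1, 6)]
next_term = 6 ** 3  # 216
```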
The visual arts are art forms that create works that are primarily visual in nature, such as ceramics, drawing, painting, sculpture, printmaking, design, crafts, photography, video, filmmaking and
architecture. These definitions should not be taken too strictly as many artistic disciplines (performing arts, conceptual art, textile arts) involve aspects of the visual arts as well as arts of
other types. Also included within the visual arts are the applied arts such as industrial design, graphic design, fashion design, interior design and decorative art.
The current usage of the term "visual arts" includes fine art as well as the applied, decorative arts and crafts, but this was not always the case. Before the Arts and Crafts Movement in Britain and
elsewhere at the turn of the 20th century, the term 'artist' was often restricted to a person working in the fine arts (such as painting, sculpture, or printmaking) and not the handicraft, craft, or
applied art media. The distinction was emphasized by artists of the Arts and Crafts Movement who valued vernacular art forms as much as high forms. Art schools made a distinction between the fine
arts and the crafts maintaining that a craftsperson could not be considered a practitioner of the arts. The increasing tendency to privilege painting, and to a lesser degree sculpture, above other
arts has been a feature of Western art as well as East Asian art. In both regions painting has been seen as relying to the highest degree on the imagination of the artist, and the furthest removed
from manual labour - in Chinese painting the most highly valued styles were those of "scholar-painting", at least in theory practiced by gentleman amateurs. The Western hierarchy of genres reflected
similar attitudes.
Recreational mathematics is an umbrella term for mathematics carried out for recreation, self-education and self-entertainment, rather than as a fully serious professional activity. It often involves
mathematical puzzles and games.
Many problems in this field require no knowledge of advanced mathematics and recreational mathematics often attracts the curiosity of amateur mathematicians, inspiring their further study of the
In mathematics, the look-and-say sequence is the sequence of integers beginning as follows:
To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example:
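Starting from "1", the sequence runs 1, 11, 21, 1211, 111221, …; a short sketch using run-length grouping:

```python
from itertools import groupby

def look_and_say(term):
    # Read runs of identical digits: "1211" -> one 1, one 2, two 1s -> "111221".
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

seq = ["1"]
for _ in range(4):
    seq.append(look_and_say(seq[-1]))
```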
In mathematics, a manipulative is an object which is designed so that a learner can perceive some mathematical concept by manipulating it, hence its name. The use of manipulatives provides a way
for children to learn concepts in a developmentally appropriate, hands-on and an experiencing way. Mathematical manipulatives are used in the first step of teaching mathematical concepts, that of
concrete representation. The second and third step are representational and abstract, respectively.
In the United States, prior to the late 1980s, manipulatives and student collaboration were largely absent from elementary math classes. After 1989, following a decision by the National Council of Teachers of Mathematics (NCTM), more creativity began to emerge in these elementary schools. This creativity took the form of manipulatives that modeled the addition, subtraction, multiplication, and division that students had previously memorized through drill.
The Quintessential Guide to Sacred Geometry
Sacred geometry can be found everywhere in the world around us. Since the time of the ancient Egyptian pyramids, people have been creating architecture based on forms found in sacred geometry. There are patterns in nature as well. The world and the universe around us are filled with sacred geometry. From seashells to the human body, from the cosmos to the atom, all forms are permeated with the shapes found in sacred geometry. While sacred geometry theories can be verified mathematically, it is also a field that holds deep spiritual meaning for many different religious communities. Sacred geometry is found within the Jewish Kabbalistic system. Hindus, Christians, and Jews have all built holy buildings with architecture based on sacred geometry. Scientists, archaeologists, mathematicians, and many spiritual seekers study sacred geometry as well.
The Sphere
While the sphere may be one of the simplest forms in sacred geometry, it is also the container that can hold all of the other forms. All measurements are equal in a sphere. It is a figure that is
complete in its entirety. The earth, a seed, and an atom are all spheres.
The Circle
A circle is another simple form found in sacred geometry. The circle is two dimensional and is a symbol of oneness. The ratio of the circumference of a circle to its diameter is called Pi. Pi is an
irrational number and never ends nor does it ever repeat. It is infinite.
The Point
The point is found at the center of the sphere or the circle. All measurements must either begin with the point or pass through the point. It is the beginning and it is the end. In sacred geometry
the center point is thought to be the place creation began.
The Square Root of 2
The square root of 2 is an irrational number. When a square with sides measuring one unit is divided diagonally, the square root of 2 is the length of the diagonal. Like Pi, the decimal expansion of the square root of 2 never ends and never repeats.
The Golden Ratio
The golden ratio, or phi, is the unique ratio in which the ratio of the whole to the larger portion equals the ratio of the larger portion to the smaller. The golden ratio is another irrational number, usually rounded to 1.618. It is also known as the golden mean, divine proportion, or golden section, and it has been used in the architecture of buildings since ancient times.
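The defining property can be verified numerically: with φ = (1 + √5)/2 ≈ 1.618, splitting a segment in the ratio φ : 1 makes the whole-to-larger ratio equal the larger-to-smaller ratio.

```python
phi = (1 + 5 ** 0.5) / 2  # about 1.6180339887

# Split a segment into a larger part a and smaller part b with a/b = phi.
a, b = phi, 1.0
whole_to_larger = (a + b) / a
larger_to_smaller = a / b
```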
The Square Root of 3 and the Vesica Piscis
The square root of 3 is a positive real number; when multiplied by itself it equals 3. The vesica piscis is the name for the almond-shaped area created when two circles of the same radius intersect so that the center of each circle lies on the circumference of the other. The ratio of the length to the width of this almond-shaped area is the square root of 3. It is considered a symbol for Jesus and is associated with the Ark of the Covenant, among other sacred meanings.
There are a number of different types of spirals. There are flat spirals, 3-D spirals, right-handed spirals, left-handed spirals, equi-angular spirals, geometric spirals, logarithmic spirals and
rectangular spirals. The most well known spiral is that of the nautilus shell. All spirals have two things in common: expansion and growth. They are symbols of infinity.
A toroid is a circular shaped object, such as an o-ring. It is formed through repeated circular rotations. Each circle meets in the center of the toroid. A popular childhood toy, a spirograph, can be
used to create one.
We see things in either 2 or 3 dimensions. But what about a 4th dimension? Physics debates whether we exist within 3 or 4 dimensions. Sacred geometry takes all 4 dimensions into consideration.
Fractals and Recursive Geometries
Fractals are a relatively new form of mathematics, beginning only in the 17th century. A good example of a fractal form is a fern. Each leaf on a fern is made up of smaller leaves that have the same
shape of the larger whole. In recursive geometry the formula making up a form can be used repeatedly.
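A classic illustration of applying the same rule repeatedly (a standard example, not taken from this text) is the Koch curve: each iteration replaces every segment with four segments one-third as long, multiplying the total length by 4/3.

```python
def koch_length(initial_length, iterations):
    # Each pass of the recursive rule multiplies the curve's length by 4/3.
    length = initial_length
    for _ in range(iterations):
        length *= 4 / 3
    return length
```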
Perfect Right Triangles
Right triangles with sides that are whole numbers are called perfect right triangles. 3/4/5, 5/12/13 and 7/24/25 triangles are examples of perfect right triangles. A 3/4/5 perfect right triangle can
be found in the "King's Chamber" of the Great Pyramid in Egypt. The Pythagorean Theorem is used to measure the sides of right triangles.
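The triples above can be checked directly with the Pythagorean theorem:

```python
def is_perfect_right_triangle(a, b, c):
    # A perfect right triangle has whole-number sides with a^2 + b^2 = c^2.
    return a * a + b * b == c * c

triples = [(3, 4, 5), (5, 12, 13), (7, 24, 25)]
```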
Platonic Solids
A Platonic solid is a convex polyhedron. Platonic solids are made up of equal faces and are made up of congruent regular polygons. There are 5 Platonic solids. They are named for the number of faces:
tetrahedron - 4 faces, hexahedron - 6 faces, octahedron - 8 faces, dodecahedron - 12 faces, and icosahedron - 20 faces. The ancient Greeks believed that these 5 Platonic solids symbolized the
elements, with the dodecahedron symbolizing the heavens.
Archimedean Solids
Archimedean solids are made up of two or more different regular polygons. There are 13 different solids. 7 of the 13 solids can be made by truncating a platonic solid.
Stellations of The Platonic & Archimedean Solids
When a Platonic or Archimedean solid is stellated they create new forms. The process of stellation creates a 3D form with tetrahedrons, or pyramids. For example, if you stellate a cube, a cube based
pyramid will be created. Stellation can create a large number of new forms.
Metatron’s Cube
Metatron is the name of the angel that guards God's throne in Judaism. The figure of Metatron's Cube has been in sacred art for thousands of years. The 5 Platonic solids can be found within the cube.
Because it contains the 5 Platonic solids, it is thought that it contains the building blocks of creation.
The Flower of Life
Images of the Flower of Life have been found all around the world and in most ancient civilizations. The Platonic solids, Metatron's Cube, the Vesica Piscis can be found within the Flower of Life.
Other sacred geometric forms such as the Seed of Life, the Tripod of Life, the Egg of Life, the Fruit of Life and the Tree of Life are also found inside the Flower if Life. This sacred shape is said
to contain patterns of creation.
Sacred Geometry is all around us. It can be found in architecture, the leaves and branches of the trees, and in the cells of the human body. It is believed that through the study of sacred geometry
one can gain an understanding of the universe around him and learn the secrets of creation.
Variance Swap Replcation in R.
July 6, 2013
By Dominykas Grigonis
As I was studying volatility derivatives I made some charts that illustrate key features of replication. Say a variance swap has a payoff function \(f=(\sigma^2 - K_{VOL}) \), which means that \(K_{VOL}\) will most likely be the forward volatility, close to implied. The replication theory goes deep into the mathematics of log-contracts, which are not even traded on the market; the idea, however, is simple: buy a portfolio of options with equally spaced strike prices and weight them by the reciprocal of the squared strike, i.e. \(1 \big/ K^2 \). Go for liquid ones, i.e. out-of-the-money puts and out-of-the-money calls. The variance exposure then depends on the vega sensitivity of the portfolio. The following graph gives an idea of how it is done. The code is included below:
X-Y - spot/time to maturity, Z - Vega \( \left(\frac{\partial C}{\partial \sigma}\right)\).
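The original post's R code is not reproduced here; as a sketch of the weighting idea in Python, with a hypothetical strike grid and spacing, each option gets weight ΔK/K², which makes the weighted variance exposure flat across strikes:

```python
# Hypothetical equally spaced strike grid of OTM puts and calls.
strike_step = 5.0
strikes = [float(k) for k in range(50, 155, 5)]

# Variance-swap replication weight for each option: dK / K^2.
weights = [strike_step / k ** 2 for k in strikes]
```

Because weight times K² is constant, low strikes (puts) receive much larger notionals than high strikes (calls), which is the hallmark of the static replication portfolio.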
Output transformer Impedance - diyAudio
If you're only getting 12W, my guess is that you've got a low-ish B+. Get it up to 500V and you'll get closer to 20W from that single pair of EL34s into a 6K p-p load. Paralleling them will probably
not help much with that sort of plate loading, but would allow you to get the same power at full Class A (if that's your religion). Put into perspective, the differences we're talking about are
pretty minor, ~3dB more output.
I assume that by "pseudo-triode," you mean "screen tied to plate?"
"The way to deal with superstition is not to be polite to it, but to tackle it with all arms, and so rout it, cripple it, and make it forever infamous and ridiculous."- H. L. Mencken
hw3 sat 2012(1)
Foundations of Finance
Summer Semester 2012
Marcin Kacperczyk
Homework 3
Due Saturday 06/16 (in class)
Part I: Multiple Choice
1) All else the same, a higher plowback ratio means a(n) … P/E ratio
A) higher
B) lower
C) unchanged
D) unable to determine
2) An underpriced stock provides an expected return, which is … the required return based
on the capital asset pricing model (CAPM).
A) less than
B) equal to
C) greater than
D) greater than or equal to
3) You wish to earn a return of 10% on each of two stocks, A and B. Each of the stocks is
expected to pay a dividend of $4 in the upcoming year. The expected growth rate of
dividends is 6% for stock A and 5% for stock B. Using the constant growth DDM, the
intrinsic value of stock A …
A) will be higher than the intrinsic value of stock B
B) will be the same as the intrinsic value of stock B
C) will be less than the intrinsic value of stock B
D) more information is necessary to answer this question
4) You are considering acquiring a common share of R&A Shopping Center Corporation
that you would like to hold for one year. You expect to receive both $1.25 in dividends
and $35 from the sale of the share at the end of the year. The maximum price you would
pay for a share today is … if you wanted to earn a 12% return.
A) $31.25
B) $32.37
C) $38.47
D) $41.32
5) The discount rate on the stock of K&O Wholesale Company is 10%. Its expected ROE is
12% and its expected EPS is $5.00. If the firm's plowback ratio is 40%, its P/E ratio will
be …
A) 8.33
B) 12.09
C) 19.23
D) 50.00
6) C&N Trading Company is expected to have EPS in the upcoming year of $6.00. The
expected ROE is 18.0%. An appropriate required return on the stock is 14%. If the firm
has a plowback ratio of 60%, its growth rate of dividends should be …
A) 2.5%
B) 4.0%
C) 8.4%
D) 10.8%
7) Stern Enterprises is expected to have EPS (Earnings per share) in the upcoming year of
$6.00. The expected ROE is 18.0%. An appropriate required return on the stock is 14%. If
the firm has a plowback ratio of 70%, its intrinsic value should be …
A) $20.93
B) $69.77
C) $128.57
D) $150.00
8) Smart Investors, Inc., is expected to pay a dividend of $4.20 in the upcoming year.
Dividends are expected to grow at the rate of 8% per year. The riskless rate of return is
4% and the expected return on the market portfolio is 14%. Investors use the CAPM to
compute the required rate of return on the stock, and the constant growth DDM to
determine the intrinsic value of the stock. The stock is trading in the market today at
$84.00. Using the constant growth DDM and the CAPM, the beta of the stock is …
A) 1.4
B) 0.9
C) 0.8
D) 0.5
9) M&B Gold Mining Corporation is expected to pay a dividend of $6 in the upcoming year.
Dividends are expected to decline at the rate of 3% per year. The riskless rate of return is
5% and the expected return on the market portfolio is 13%. The stock of M&B Gold
Mining Corporation has a beta of -0.50. Using the constant growth DDM, the intrinsic
value of the stock is …
A) $50.00
B) $150.00
C) $200.00
D) $400.00
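Several of these items combine the CAPM required return with the constant-growth DDM. A minimal Python sketch, checked against the numbers in item 9 (the function names are mine, not from the course):

```python
def capm_required_return(rf, market_return, beta):
    """CAPM: k = rf + beta * (E[Rm] - rf)."""
    return rf + beta * (market_return - rf)

def ddm_intrinsic_value(next_dividend, k, g):
    """Constant-growth DDM: V0 = D1 / (k - g), valid for k > g."""
    return next_dividend / (k - g)

# Item 9: D1 = $6, g = -3%, rf = 5%, E[Rm] = 13%, beta = -0.50.
k = capm_required_return(0.05, 0.13, -0.5)   # 0.05 - 0.5 * 0.08 = 0.01
v = ddm_intrinsic_value(6.0, k, -0.03)       # 6 / (0.01 + 0.03) = 150
print(round(v, 2))
```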
Part II: Detailed Questions
Question 1:
Explain why the following statements are true/false/uncertain.
a. Holding all else constant, a firm will have a higher P/E if its beta is higher.
b. P/E will tend to be higher when ROE is higher (assuming plowback is positive)
c. P/E will tend to be higher when the plowback rate is higher.
Question 2:
Heart Medical Inc. (HM) is a little-known producer of heart pacemakers. The earnings and
dividend growth prospects of the company are disputed by analysts. Paul is forecasting 5%
growth in dividends indefinitely. However, his brother Peter is predicting a 20% growth in
dividends, but only for the next three years, after which the growth rate is expected to drop to 4%
for the indefinite future. HM dividends per share are currently $3. The expected market return is
equal to 15% and the risk-free rate is 5%. Beta of a similar company is 0.5.
a) What is the intrinsic value of HM stock according to Paul?
b) What is the intrinsic value of HM stock according to Peter?
c) If both analysts predict a 12% return and the current price is $55, what action would you
recommend to somebody who is following the estimates of Paul or Peter? Is this
recommendation entirely consistent? Make any assumptions you wish to justify your answer.
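Question 2 calls for a two-stage dividend discount model; here is a sketch using the problem's numbers (the helper `two_stage_ddm` is my name, not a standard function):

```python
def two_stage_ddm(d0, g_high, years, g_terminal, k):
    """PV of dividends growing at g_high for `years` years, then at g_terminal forever."""
    pv, div = 0.0, d0
    for t in range(1, years + 1):
        div *= 1.0 + g_high
        pv += div / (1.0 + k) ** t
    # Terminal (Gordon) value at t = years, discounted back to today.
    terminal = div * (1.0 + g_terminal) / (k - g_terminal)
    return pv + terminal / (1.0 + k) ** years

k = 0.05 + 0.5 * (0.15 - 0.05)          # CAPM required return: 10%
paul = 3.0 * 1.05 / (k - 0.05)          # constant 5% growth forever
peter = two_stage_ddm(3.0, 0.20, 3, 0.04, k)
print(round(paul, 2), round(peter, 2))  # Paul: 63.0, Peter: ~78.25
```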
Question 3:
This question requires data collection. You can find all numbers on finance.yahoo.com. The
questions concern Microsoft (ticker: MSFT).
(a) What are the current price and the current price-earnings ratio?
(b) What is the current plowback ratio?
(c) What is the growth rate of earnings for the next 5 years according to the analysts? Hint: look
for annual growth rates under “analyst estimates”
(d) What is the beta of MSFT? Hint: look for “key statistics”. If the risk-free rate (Rf) is
4% and the market risk premium E[RM −Rf ] is 8%, what is the required rate of return on MSFT
according to the CAPM?
(e) Assume that Microsoft will have earnings and dividends that will grow at the analysts'
forecasted rate forever after; i.e., the Gordon growth model (GGM) applies. What is the
price-earnings ratio that the GGM predicts for Microsoft?
Question 4 (Adventure):
A small investment-consulting firm in the country of Transylvania is convinced that there are two
key common factors affecting stock returns. One is associated with a stock's dividend yield, the
other with its historic earnings growth rate. To this end, each of the 100 stocks in the
Transylvanian market has been analyzed, and assigned two numbers. The first, y(i) is the relative
yield of the stock. This is an integer number that ranges from a value of 100 (for the stock with
the highest yield) to 1 (for the stock with the lowest yield). The stock with the second-highest
yield has a y(i) of 99, and so on. The second number g(i) indicates the stock's relative growth
rate. Here, too the numbers are integers from 100 (for the stock with the highest historic growth
rate) to 1 for the stock with the lowest historic growth rate.
The consulting firm has also classified each stock based on its economic activity. Every stock
has been assigned to one (and only one) economic sector, basic industries (B), consumer goods
(C), finance (F), or technology (T).
The firm has hired you to implement this view in a factor model of security returns.
a) Write the equation that will characterize your model of the return-generating process. Please
define each term in sufficient detail to avoid any confusion.
b) Assume that the yield factor was positive last month. Does it mean that every stock with a
yield greater than the median yield outperformed every stock with a yield below the median
yield? Suppose that there are two stocks in the same industry with the same historic earnings
growth. Does this mean that the one with the higher yield outperformed the one with the lower
yield? What, if anything, can you say about the relative performance of high yield and low yield
stocks, based on the fact that the yield factor was positive?
Numbers with 12 Factors....
Date: 11/15/96 at 02:59:25
From: Shane Conrad
Subject: Numbers with 12 Factors....
I have to find two numbers which have exactly 12 factors, including
1, the number itself, 5 and 7. I found one such number, 140 (140 has
factors of 1,2,4,5,7,10,14,20,28,35,70,140) but I had to do a lot of
guessing. Is there another way to figure problems like this out
without trying a lot of numbers?
Date: 04/01/97 at 19:39:52
From: Doctor Yiu
Subject: Re: Numbers with 12 Factors....
Dear Shane,
Suppose you factor a number into a product of prime powers:
the first prime appearing A times,
the second prime appearing B times,
the last prime appearing Z times.
Then that number has exactly
(1 + A) times (1 + B) times ... all the way up to (1+Z)
divisors, including 1 and itself. This is a theorem in a subject
called number theory. If you want to know why it holds, write back to us.
Since your number is divisible by 5 and 7, you can take the
first prime to be 5 and the second to be 7, and ask whether it can
have other primes or not. Note that 1+A and 1+B cannot be smaller
than 2. Since there are altogether 12 divisors, these (1+A), (1+B),
and others if any, should multiply to 12.
Now the only ways to write 12 as a product of numbers with at
least two of them not smaller than 2 are
(i) 2 times 6
(ii) 3 times 4
(iii) 2 times 2 times 3
In (i) and (ii), the number has only 5 and 7 for its prime divisors.
It must be
5^1 times 7^5 = 84035,
or 5^5 times 7^1 = 21875,
or 5^2 times 7^3 = 8575,
or 5^3 times 7^2 = 6125.
In (iii), the number has one more PRIME divisor. Let it be p.
This number is
either 5 times 7 times p^2 = 35p^2,
or 5 times p times 7^2 = 245p,
or p times 7 times 5^2 = 175p.
Your example corresponds to the first of these, with p = 2.
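The divisor-count theorem the doctors use can be verified directly; this routine is a sketch, not part of the original exchange:

```python
def divisor_count(n):
    """Count divisors via prime factorization: the product of (exponent + 1)."""
    count, p = 1, 2
    while p * p <= n:
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        count *= exp + 1
        p += 1
    if n > 1:          # a leftover prime factor contributes exponent 1
        count *= 2
    return count

# 140 = 2^2 * 5 * 7 has (2+1)(1+1)(1+1) = 12 divisors; so do the
# pure 5-and-7 candidates enumerated above.
for n in (140, 84035, 21875, 8575, 6125):
    print(n, divisor_count(n))
```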
I hope this helps. If you are confused about anything, please do
write back! Good luck.
-Doctors Yiu and Sydney, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Noise and Noise Figure Measurement
Table of Contents
Noise and Noise Figure Measurement
Plan of Attack
Thermal Noise
Noisy Resistor
Normalized Power Shows “How Much” Noise
Normalized Power Spectrum of Thermal Noise
Shot Noise
Shot Noise Power Spectrum
White Noise
A Noisy Circuit: Two Resistors
Adding the Noise Voltages
The Power Spectra Add
Generalization: A Passive Circuit
Active Circuit
Available Power
Available Power Spectrum
At Room Temperature
Filtering Noise
Noise Power
Filtering White Noise
Example: Low-Pass Filter
The Output Noise Power Depends on the Bandwidth
White Noise of Increasing Bandwidth
Some Illustrative Numbers
Continuing the Numbers...
Characterizing Two-Ports
A Noisy Two-Port Device
Defining Noise Figure
The Definition Requires a Room-Temperature Source
Cascade of Two-Port Devices
Overall Noise Figure
Cascade Example
Example, continued...
Example, continued...
Example, concluded
Input Signal-to-Noise Ratio
Output Signal-to-Noise Ratio
Lossy Two-Port
Lossy Line Example
Lossy Line Example, continued
Lossy Line, continued...
Lossy Line, concluded
Measurement of Noise Figure
Noise Sources
Excess Noise Ratio Example
Calculation of Noise Figure
Calculation, continued...
Example of Noise Figure Measurement
Example, continued...
Example, concluded
Example: Dynamic Range of an Optical Data Link
An Optical Data Link
Measured Gain Compression
Setup for Noise Figure Measurement
Measured Noise Figure
Minimum Detectable Signal Calculation
Compression Dynamic Range
A Caveat
Caveat, continued...
Caveat, concluded
Summary, continued...
Summary, concluded
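The cascade entries above ("Cascade of Two-Port Devices", "Overall Noise Figure") refer to the standard Friis formula. Since the slides themselves are not reproduced here, this sketch uses made-up stage values purely for illustration:

```python
import math

def db_to_linear(db):
    return 10.0 ** (db / 10.0)

def cascade_noise_figure_db(stages_db):
    """Friis formula: F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    Each stage is (noise figure in dB, gain in dB); result is in dB."""
    total_f, cum_gain = 1.0, 1.0
    for nf_db, g_db in stages_db:
        total_f += (db_to_linear(nf_db) - 1.0) / cum_gain
        cum_gain *= db_to_linear(g_db)
    return 10.0 * math.log10(total_f)

# Hypothetical chain: LNA (NF 1 dB, gain 20 dB), then a noisier second
# stage (NF 10 dB, gain 10 dB). The first stage dominates the total,
# which is why a low-noise amplifier goes first.
print(round(cascade_noise_figure_db([(1.0, 20.0), (10.0, 10.0)]), 2))
```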
No Axiomatic Geometry Forum Here
November 20th 2013, 06:45 PM #1
No Axiomatic Geometry Forum Here
The course of axiomatic geometry is beyond my knowledge of math. However, I was wondering why there is no forum for this subject on this site.
Re: No Axiomatic Geometry Forum Here
Hey nycmath.
Are you talking about the stuff they teach in pre-calculus?
In university there are many different kinds of geometry (and it can get very very complicated). Differential geometry is the form of geometry that is more general than the n-dimension stuff they
cover in under-graduate university courses and it is taught often in final year undergraduate or in graduate school. In order to do this, you will probably need to do a math degree with enough
honors courses.
If it's high school though, the Basic Mathematics book should cover it (I think).
November 20th 2013, 09:04 PM #2
MHF Contributor
Sep 2012
telescope conjecture
telescope conjecture
The telescope conjecture
In the stable homotopy category of $p$-local spectra for some prime number $p$, there are two collections of similarly behaved spectra: the Morava K-theories and spectra which are the telescopes of
finite spectra under their $v_n$-maps. The former are usually denoted $K(n)$ and the latter are often denoted $T(n)$. The telescope conjecture is that localization at $K(n)$ and localization at $T(n)$ are (Bousfield) equivalent.
Revised on September 28, 2012 00:25:06 by
Toby Bartels
Information Processing, Control, and Nonlinear Dynamics in Complex Systems
Members of the group [from left to right]: John W. Clark (St. Louis), Lidia Ferreira (Lisbon), Karl Kürten (Vienna), Patrick McGuire (Bielefeld), Henrik Bohr (Lyngby) [top right], Michael J. Barber
(Cologne) [bottom right].
Fields of Research
• Coding, information processing, learning, and control in natural and artificial neural networks
• Complex dynamics of neural-network models
• Complex behavior of one-dimensional Hamiltonian lattice models
• Control of quantum dynamics
• Protein structure and dynamics
Current Projects and Results
Development of a Probabilistic Theory of Neural Representation and of Neurobiological Computation
The aim of this major project is to create and implement a comprehensive framework for quantitative description of neural information processing in the brain, based on the hypothesis that ensembles
of neurons represent and manipulate information about analog variables in terms of probability density functions (PDFs) over these variables. This approach unites several dominant themes that are now
emerging in the new field of computational neuroscience:
• Population coding: analog sensory input or motor output variables are represented in the activities of populations of neurons, as a means to overcome the limited precision of individual neuronal
units (only 2-3 bits).
• Neural circuits in the brain are designed to carry out specific computational tasks essential to successful performance of the organism in a changing environment.
• Neural circuits in the brain perform Bayesian statistical inference as a means to cope with the uncertainties stemming from incomplete information about the world.
• Neural circuits in the brain, notably the visual system, use a stream of "bottom-up" sensory inputs to build an internal model of the sensory data; concurrently, a stream of "top-down"
model-driven signals are used to impose global regularities on the perceptual input.
• To efficiently accomplish a rich variety of information-processing tasks, synaptic inputs to dendritic trees undergo nonlinear processing (in particular, coincidence detection on dendritic
The following results have been derived from the PDF hypothesis:
Neural Representation of Probabilistic Information (ZiF Publication 2001/064)
Decoding of information: We have shown how a time-dependent probability density over a given input or output analog variable may be decoded from the measured activities of a population of neurons, as
a linear combination of basis functions ("decoders"), with coefficients given by the individual neuronal firing rates.
Encoding of information: We have shown how the neuronal encoding process may be described by projecting a set of complementary basis functions ("encoders") on the probability density function, and
passing the result through a rectifying nonlinearity.
We have shown how both decoders and encoders may be determined by minimizing cost functions that quantify the inaccuracy of the representation.
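The decode/encode scheme can be sketched numerically. The Gaussian decoders and all parameter values below are illustrative assumptions of mine — the project derives the actual basis functions by minimizing cost functions, not by fixing them a priori:

```python
import numpy as np

# Analog variable on a grid; decoders are Gaussian bumps (illustrative).
x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]
centers = np.linspace(-1.0, 1.0, 20)
decoders = np.exp(-((x[None, :] - centers[:, None]) ** 2) / 0.02)

def decode(rates):
    """Decoded PDF: a linear combination of decoders with firing-rate coefficients."""
    p = rates @ decoders
    return p / (p.sum() * dx)              # normalize to a proper density

def encode(p, encoders):
    """Firing rates: project encoders onto the PDF, then rectify."""
    return np.maximum(0.0, encoders @ (p * dx))

# A population whose rates peak for neurons tuned near x = 0.3
# decodes to a probability density concentrated there.
rates = np.exp(-((centers - 0.3) ** 2) / 0.05)
p = decode(rates)
# Re-encoding projects the density back onto the (here identical) encoders.
back_rates = encode(p, decoders)
print(round(float(x[np.argmax(p)]), 2))
```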
Construction of Neural Circuits to Implement a Given Computation (ZiF Publications 2001/064, 2001/065)
Expressing a given computational task in terms of manipulation and transformation of probabilities, we have shown how this neural representation leads to a specific neural circuit that can perform
the required computation within a consistent Bayesian framework, the synaptic weights being explicitly generated in terms of encoders, decoders, conditional probabilities, and priors.
We have shown how to facilitate analysis and application by introducing an intermediate representation with orthogonal (rather than overcomplete) basis functions, assigned to hypothetical
"metaneurons" having arbitrary precision.
Figure 1: Transformations between representations defined on the implicit space of the analog variable, the explicit space of the neural activities, and the minimal space of the metaneuron
Neural Propagation of Beliefs - Bayesian Belief Networks (ZiF Publication 2001/064)
We have shown how the representation of probabilistic information and the subsequent design of neural circuits to perform specified computations may be advanced by exploiting the formalism of
Bayesian Belief Nets developed by Pearl. A Bayesian belief net is a graphical representation of a probabilistic model that provides an efficient means for organizing the relations of dependence or
independence between the random variables of the model. The resulting neural networks retain important properties of Bayesian Belief Nets and are therefore called Bayesian Belief Networks.
Applications of Probability Density Function Theory (ZiF Publication 2001/064)
Among other applications, we have designed and simulated a neural belief network that can estimate the velocity of a moving target.
Figure 2: (a) The position of the target is copied into two different populations of neurons, with different time delays. (b) The time delay and the difference of the two copies of of position are
used to estimate the velocity.
In a more elaborate application, we have developed a neural network that employs "bottom-up" sensory inputs to create an internal model of the perceptual data, and that in turn employs "top-down"
feedback from this higher-level model to impose global constraints on the bottom-up predictive input. The network is capable of resolving ambiguous sensory input and attending to the larger peak of a
bimodal distribution. The PDF construction naturally generates feedforward, feedback, and lateral connections between the neurons, analogous to the pathways found in the anatomy of the cerebral cortex.
Neural Belief Propagation Without Multiplication (ZiF Publication 2001/062)
Except in the case of tree-structured graphs, Neural Belief Networks pool evidence through the multiplication of neural activation states. This implies the presence of multiplicative (or
"higher-order") interactions between neurons (see below). There are indications that such interactions occur in neurobiological systems, but the issue of their existence remains uncertain.
Accordingly, we have devised an alternative class of Neural Belief Networks that function only through weighted sums of activities and hence only entail conventional binary synapses.
Mathematical Analysis of Neural Networks with Higher-Order Interactions
Higher-order neural networks, Pólya polynomials, and Fermi Cluster Diagrams (ZiF Publication 2001/074)
The computational powers of both artificial and natural neural networks are greatly enhanced if the usual binary synaptic interactions felt by a given neuron are supplemented by higher-order (or
"multiplicative") interactions that depend on the states of more than one presynaptic neuron. Such interactions are analogous to "many-body" or "multi-spin" interactions in physics. In particular,
nth-order couplings (involving one postsynaptic and n presynaptic neurons) provide for a storage capacity proportional to the nth power of the number of neurons, when such networks are used as
dynamical content-addressable memories.
Analysis and application of networks with higher-order interactions are hindered by the exponential growth of coupling parameters with order n. An extension of the Hebbian learning prescription
serves to specify the coefficients, but there remains the problem of explicit evaluation of the higher-order terms in the stimulus to a generic neuron of the net. This problem has been addressed with
considerable success using combinatoric group-theoretical techniques. In particular, the nth-order term in the general "multi-spin" representation of the stimulus has been succinctly expressed in
terms of Pólya polynomials, and the series has been summed in the thermodynamic limit of a large system. Moreover, this study has revealed an interesting one-to-one correspondence between the nth
order term on the stimulus expansion and the sum of planar n-particle cluster diagrams for noninteracting quantum particles obeying Fermi statistics.
Search for the Sources of Diversity and Volatility in Randomly Assembled Artificial Neural Networks (ZiF Publication 2001/087)
Computer simulation is used to investigate the conditions under which randomly-connected hard-threshold (or McCulloch-Pitts) neurons display complex behavior. Special attention is given to the
influence of quenched threshold disorder on the repertoire of limit cycles available to the network, and on the complexity of limit-cycles as measured in cycle length and degree of neuronal
participation (eligibility). The analysis is aided by the consideration of attractor-occupation entropy as a measure of diversity and a combination of eligibility and diversity as a measure of
We have carried out extensive studies of the behavior of these measures of dynamical complexity when the threshold of each neuron is altered from its conventional "normal" value by a factor selected
randomly from a Gaussian distribution with mean unity and standard deviation d. With the normal thresholds (d=0), taken for each neuron as half the sum of the synaptic weights of its incoming
connections, randomly assembled neural nets (RAANNs) of McCulloch-Pitts neurons typically possess only a small repertoire of limit-cycle attractors, which tend to be long and therefore individually
complex. For small but finite d (weak disorder), the set of limit cycles remains small and shows little change from the undisturbed, normal case; in other words, there exists a regime of the disorder
parameter distinguished by robustness or stability of the unperturbed (normal) cycling behavior. In the opposite extreme of large d (strong disorder), the terminal patterns become less complex, but
the number of attractors vastly increases. The middle ground of intermediate d is found to engender features desired for versatile, rapidly responding systems: accessibility to a large set of complex
patterns with high diversity and volatility. These findings suggest useful connections with chaos-control theory, in which (for example) the parameters of a system moving on a chaotic attractor are
perturbed so as to stabilize a selected periodic orbit among the infinite number of unstable limit cycles embedded in the chaotic attractor.
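A minimal version of the simulation setup described above — synchronous McCulloch-Pitts dynamics with "normal" thresholds (disorder d = 0) — can be sketched as follows; the ±1 synaptic weights and the network size are my illustrative choices, not the study's parameters:

```python
import random

def mcculloch_pitts_step(state, weights, thresholds):
    """Synchronous update: a neuron fires iff its weighted input exceeds threshold."""
    n = len(state)
    return tuple(
        1 if sum(weights[i][j] * state[j] for j in range(n)) > thresholds[i] else 0
        for i in range(n)
    )

def limit_cycle_length(state, weights, thresholds, max_steps=1000):
    """Iterate the deterministic dynamics until a state repeats; return cycle length."""
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = mcculloch_pitts_step(state, weights, thresholds)
    return None

random.seed(0)
n = 8
weights = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
# "Normal" thresholds: half the summed incoming synaptic weights (d = 0).
thresholds = [0.5 * sum(weights[i]) for i in range(n)]
start = tuple(random.randint(0, 1) for _ in range(n))
print(limit_cycle_length(start, weights, thresholds))
```

Since the state space is finite (2^n configurations), every trajectory must eventually fall onto a limit cycle, which is what the disorder study characterizes.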
Improvement of Neural Network Performance via Stochastic Resonance and Heterogeneity (ZiF Publication 2001/063)
In summing the parallel outputs of an array of identical Fitzhugh-Nagumo model neurons, Collins et al. demonstrated an emergent property associated with stochastic resonance (SR) in multicomponent
systems: the enhancement of the response to weak signals due to SR becomes independent of the exact value of the noise variance as the size of the system increases. We have expanded upon this work
and this discovery by examining the response when the input array is assembled from much simpler neuronal units, noisy McCulloch-Pitts neurons having a distribution of thresholds. The same emergent
property is exhibited. More specifically, the network sensitivity to the input signal increases as the number of input neurons increases. Adding more units widens the range of signal detection, but
does not significantly improve the peak performance of the network. Further, we have documented an advantage of heterogeneity that complements the findings of the preceding study. A network of
heterogeneous model neurons outperforms a similar network with homogeneous units, being sensitive to a wider range of input signals (as measured by mean value). The network architectures are
identical, the only difference being in the distribution of the thresholds: identical thresholds as opposed to two groups with different thresholds.
Regular and Irregular Behavior of Hamiltonian Lattice Models
Numerous condensed-matter systems are effectively discrete by nature because the relevant length scales are of the order of the interparticle distance. Such systems are described by a Hamiltonian
that is discrete in space, while their time evolution is considered as continuous. Their remarkable behavior, exemplified in charge-density waves, magnetic spirals, disordered crystals, adsorbed
monolayers, and magnetic multilayers, stems from a competition between two or more forces that leads to locally stable spatially modulated structures. The particles are non-trivially displaced from a
reference lattice and spatial disorder is created due to a highly complex energy landscape in configuration space. The number of locally stable configurations typically increases exponentially with
the size of the system. A model system can be envisioned as a chain of N particles connected by harmonic springs, each particle also being subject to an external multi-well potential field. A widely
used standard model is the so-called linear chain, consisting of a one-dimensional lattice of N oscillators interacting with nearest neighbors via a harmonic intersite potential. The energy of the
system is given by an N-particle Hamiltonian comprised of the vibrational kinetic energy, the intersite energy specified by the coupling strength, and the on-site energy specified by an external
on-site potential. In addition, the system might be subjected to another external force, notably an external magnetic or electric field.
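Written out explicitly, the Hamiltonian just described takes the standard form below; the symbols (displacements \(x_i\), momenta \(p_i\), mass \(m\), coupling constant \(C\), on-site potential \(V\)) are my notation, not taken from the original text:

```latex
H = \sum_{i=1}^{N} \left[ \frac{p_i^2}{2m}
    + \frac{C}{2}\,\bigl(x_{i+1} - x_i\bigr)^2
    + V(x_i) \right]
```

For a periodic multi-well on-site potential one common choice is the Frenkel-Kontorova form \(V(x) = K\,[1 - \cos(2\pi x)]\), whose competition with the harmonic intersite term produces the locally stable modulated structures discussed here.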
Application to Magnetic Multilayers
New techniques involving the use of ultra-high-vacuum systems open the way to the synthesis of novel materials having properties of great technological interest. Artificial thin-film constructs based
on ferro- or antiferromagnetic layers separated by non-magnetic spacers have been shown to exhibit quite unusual locally stable structures. Such structures have been experimentally detected, for
instance, in Fe/Cr sandwiches and in giant magnetoresistant (GMR) elements consisting of several antiferromagnetically coupled magnetic layers separated from one other by nonmagnetic spacers (e.g.,
Co/Cu). The highly complex magnetic structures that arise depend on three competing forces: the interlayer exchange energy, the Zeeman term defined by the strength of the magnetic field, and the
strength of an intralayer anisotropy energy defined by a periodic on-site potential. The physical variables are specified by the the angles between the magnetic moments within the individual layers
and a reference axis (e.g., the easy axis). Control parameters are defined by the direction and the strength of the anisotropy, depending on the material. For special technical applications, modeling
of GMR multilayers gives valuable information about the appropriate layer thickness and the appropriate magnetic material.
We have shown that the shape of the magnetoresistance curves and the hysteresis loops characterized by Barkhausen jumps can be tailored by fine-tuning the strength of the interlayer couplings and the
strength of the anisotropy constant. The results compare well with experimental GMR and hysteresis shapes.
Another novel finding is that the spatial distribution of the magnetic moments shows fractal patterns which might be accessible to experimental studies. Moreover, the energy landscape -- consisting
of exponentially many locally stable minima separated by barriers - turns out to be a Cantor set. The situation is reminiscent of that encountered in a magnetic glass, involving weak interactions of
domains and "magnetic solitons".
Study of Necessary and Sufficient Conditions for Controllability of Quantum Systems
Since the birth of quantum theory, human control of the behavior of quantum systems has been a prominent goal, with notable successes in particle acceleration and detection, magnetic resonance,
electron microscopy, solid-state electronics, and laser optics. However, it is only in the last two decades that scientists have recognized the need for a comprehensive theory of quantum control that
absorbs and adapts general concepts and powerful methods developed within systems engineering. In chemistry, the development of quantum control theory together with tremendous advances in laser
technology have opened the way to unimolecular control of chemical reactions, a Holy Grail of the field. Even more dramatically, a synergism between quantum control and quantum computation is
creating a host of exciting new opportunities for both activities. See this paper for a review of some of these developments, which were surveyed in a ZiF Kolloquium presented by J. W. Clark.
The role of quantum control in quantum computation is being studied in the context of necessary and sufficient conditions for controllability, complete or approximate, both for finite and
infinite-dimensional state spaces, and systems with discrete and/or continuous spectra. The infinite-dimensional case has received little attention since the original papers on the subject, and
deserves careful re-examination in connection with the notion of quantum computation over continuous variables (a process already begun by Lloyd and Braunstein).
Quantum Computers as Universal Simulators of Quantum Systems
Quantum computation is being studied with a view to its application to ab initio solution of quantum many-body problems hitherto regarded as intractable (e.g. macromolecules, heavy nuclei, hadron
structure based on QCD). An exponential speedup of computation is promised by the massive parallel processing of quantum pathways, but the conditions under which this promise can be realized remain
to be established in practical detail.
How Molecules Can Train Lasers to Control Molecular Dynamics
For any but the simplest molecules, the precision of our knowledge of the system interactions and Hamiltonian is very limited. On the other hand, the molecule certainly "knows" its own Hamiltonian,
and this fact can be exploited by introducing a feedback loop from the molecular system to the laser that generates pulses intended to control the dynamical evolution of the molecule. Information
extracted by probing the system is used to guide the shaping of the laser pulse so as to systematically reduce a positive measure of the difference between the desired and actual system response. The
latter process is performed by a suitable incremental learning rule, e.g., a gradient-descent or conjugate-gradient minimization routine or a genetic algorithm, until convergence is achieved to an
optimal pulse shape. It is now feasible experimentally to execute up to a million pump-probe cycles per second (involving imposition of a shaped laser pulse and subsequent measurement of response).
Accordingly, this hybrid computational-experimental scheme, first proposed by Judson and Rabitz, has become a practical reality. The introduction of a Kalman filter is being explored as a more
powerful approach to governance of the learning cycle. An intriguing aspect of the Judson-Rabitz scheme is its exploitation of the joint system of apparatus-plus-molecule as an analog computer to
determine the optimal pulse shape. In correspondence with the neurobiological example of the modulation of bottom-up sensory input via feedback from a top-down internal model, the "bottom-up" laser
signal from the pulse-shaper is iteratively improved through the intervention of "top-down" signals from the molecule's perfect knowledge of its Hamiltonian.
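The closed learning loop described above can be caricatured in a few lines: treat apparatus-plus-molecule as a black box that scores each trial pulse, and let a simple learning rule improve the pulse from measurements alone. Here a (1+λ) evolution strategy stands in for the genetic algorithms used in practice, and the objective function and all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Black-box stand-in for apparatus + molecule. The optimizer never sees
# `target` directly, just as the real loop only sees measured yields
# (with a little shot noise added to each measurement).
target = np.sin(np.linspace(0.0, np.pi, 32)) ** 2

def measured_response(pulse):
    return -np.sum((pulse - target) ** 2) + rng.normal(0.0, 1e-3)

def learn_pulse(n_bins=32, generations=300, children=16, sigma=0.1):
    """(1+lambda) evolution strategy: keep the best pulse measured so far."""
    best = np.zeros(n_bins)
    best_score = measured_response(best)
    for _ in range(generations):
        for _ in range(children):
            # Perturb the incumbent pulse and keep it only if it scores better.
            trial = np.clip(best + rng.normal(0.0, sigma, n_bins), 0.0, 1.0)
            score = measured_response(trial)
            if score > best_score:
                best, best_score = trial, score
    return best, best_score

pulse, score = learn_pulse()
print(round(score, 3))
```

The score climbs toward zero as the pulse approaches the shape that maximizes the measured response. The point of the sketch is the one made in the text: no model of the Hamiltonian enters the loop; the system itself evaluates every candidate pulse.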
The Quantum Protein
The determination of protein structure, including the mapping from the primary amino acid sequence to the final, functioning, folded structure in the native environment, is among the most complex and
important problems on the current scientific scene. But if anything, the problem of protein dynamics, in its broadest sense, is exponentially more complex!
The challenge of the post-genomic era is to devise and implement novel theoretical and experimental techniques for exploiting and understanding the huge reservoir of data provided by the world effort
in genetic sequencing. In pursuing this goal and identifying the functionality of an ever-increasing number of genes, it must be recognized that the lowest energy state of many proteins, e.g. the
prion proteins of the mad-cow disease, is not the native, functioning molecule under all conditions. Therefore it is not meaningful to ask for the optimal state of a protein without reference to its
environment; rather, one must determine the energy spectrum or the energy landscape in the region of phase space around the native and various non-native states under the various conditions that can
occur in the cell. We have devoted considerable effort to the design of an interdisciplinary program to elucidate and explain diverse aspects of protein dynamics and function at the quantum level,
enlisting advanced theoretical methods for quantitative treatment of electronic structure as well as sophisticated spectrometric tools based on vibrational circular dichroism and Raman optical
activity. Proceeding from the level of quantum structure and dynamics, we seek a deeper understanding of the mechanisms by which protein can attain a function, lose it, and subsequently regain it.
ZiF Publications
K.E. Kürten, Transitions from non-collinear to collinear structures in a magnetic multilayer model, ZiF Publication 2000/010.
F. Castiglione and K.E. Kürten, A dynamical model of B-T cell regulation, ZiF Publication 2001/048.
M.J. Barber, Neural propagation of beliefs without multiplication, ZiF Publication 2001/062.
M.J. Barber and B.K. Dellen, Noise-induced signal enhancement in heterogeneous neural networks, ZiF Publication 2001/063.
M.J. Barber, J.W. Clark, and C.H. Anderson, Neural representation of probabilistic information, ZiF Publication 2001/064.
M.J. Barber, J.W. Clark, and C.H. Anderson, Neural propagation of beliefs, ZiF Publication 2001/065.
K.E. Kürten and J.W. Clark, Higher-order neural networks, Pólya polynomials, and Fermi cluster diagrams, ZiF Publication 2001/074.
P.C. McGuire, H. Bohr, J.W. Clark, R. Haschke, C.L. Pershing, and J. Rafelski, Threshold disorder as a source of diverse and complex behavior in random nets, ZiF Publication 2001/087.
K.E. Kürten and F.V. Kusmartsev, Creation of glassy structures in magnetic multilayers, ZiF Publication 2001/104.
J.W. Clark, D.G. Lucarelli, and T.-J. Tarn, Control of Quantum Systems, in Advances in Quantum Many-Body Theory, Vol. 6, edited by R.F. Bishop, K.A. Gernoth, N.R. Walet (World Scientific, Singapore),
in press, ZiF Publication 2001/106.
Related Publications
PDF Formalism and Applications
C.H. Anderson, Basic elements of biological computational systems, Int. J. Mod. Phys. C 5, 135-137 (1994).
E. Eliasmith and C.H. Anderson, Developing and applying a toolkit from a general neurocomputational framework, Neurocomputing 26, 1013-1018 (1999).
C.H. Anderson, Q. Huang, and J.W. Clark, Harmonic analysis of spiking neuronal ensembles, Neurocomputing 32-33, 279-284 (2000).
J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Morgan Kaufmann, San Mateo, CA, 1988).
Higher-Order Neuronal Interactions
J.W. Clark, K.A. Gernoth, S. Dittmar, and M.L. Ristig, Higher-order probabilistic perceptrons as Bayesian inference engines, Phys. Rev. E 59, 6161-6174 (1999).
K.E. Kürten, Quasi-optimized memorization and retrieval dynamics in sparsely connected neural network models, J. Phys. France 51, 1585-1594 (1990).
Neural Network Dynamics
J.W. Clark, K.E. Kürten, and J. Rafelski, Access and stability of cyclic modes in quasirandom networks of threshold neurons obeying a deterministic synchronous dynamics, in Computer Simulation in
Brain Science, edited by R.M.J. Cotterill (Cambridge University Press, Cambridge, UK, 1988), pp. 316-344.
E. Ott, C. Grebogi, and J. A. Yorke, Controlling chaos, Phys. Rev. Lett. 64, 1196-1199 (1990).
J.J. Collins, C.C. Chow, and T.T. Imhoff, Stochastic resonance without tuning, Nature 376, 236-238 (1995).
Hamiltonian Lattice Models
H.S. Dhillon, F.V. Kusmartsev, and K.E. Kürten, Journal of Nonlinear Mathematical Physics 8, 38-49 (2001).
K.E. Kürten, The role of mathematical and physical stability of irregular orbits in Hamiltonian systems, in Condensed Matter Theories, Vol. 14, edited by D. Ernst, I. Perakis, and S. Umar (Nova
Science Publishers, New York, 1999).
K.E. Kürten, Regular and irregular dynamical behaviour in Hamiltonian lattice models: An approach from complex systems theory, in Condensed Matter Theories, Vol. 15, edited by G. S. Anagnostatos, R. F. Bishop, K. A. Gernoth, J. Ginis, and A. Theophilou (Nova Science Publishers, New York, 2000), pp. 415-424.
Quantum Control
G.M. Huang, T.J. Tarn, and J.W. Clark, On the controllability of quantum-mechanical systems, J. Math. Phys. 24, 2608-2618 (1983).
C.K. Ong, G.M. Huang, T.J. Tarn, and J.W. Clark, Invertibility of quantum-mechanical control systems, Math. Systems Theory 17, 335-350 (1984).
J.W. Clark, Control of quantum many-body dynamics: Designing quantum scissors, in Condensed Matter Theories, Vol. 11, edited by E. V. Ludena, P. Vashishta, and R. F. Bishop (Nova Science Publishers,
Commack, NY), pp. 3-19.
S. Lloyd, Universal quantum simulators, Science 273, 1073 (1996).
S. Lloyd and S.L. Braunstein, Quantum computation over continuous variables, Phys. Rev. Lett. 82, 1784-1787 (1999).
R.S. Judson and H. Rabitz, Teaching lasers to control molecules, Phys. Rev. Lett. 68, 1500-1503 (1992).
Protein Structure and Dynamics
H. Bohr and J. Bohr, Topology in protein folding, in Topology in Chemistry (Gordon and Breach, 1999).
Founder of the Secret Society of Mathematicians
from the as-only-the-french-can-do-it dept.
An article at Science News marks the passing of Henri Cartan, one of the founding members of a strange and influential group of French mathematicians in the twentieth century.
"In the 1930s, a group of young French mathematicians led an uprising that revolutionized mathematics. France had lost most of a generation in the First World War, so the emerging hotshots in
mathematics had few elders to look up to. And when these radicals did look up, they didn't like what they saw. The practice of mathematics at the time was dry, scattered and muddled, they believed,
in need of reinvention and invigoration... Using the nom de plume Nicolas Bourbaki (after a dead Napoleonic general), they wrote a series of textbooks laying out mathematics the right way. Though the
young mathematicians started out only intending to write a good textbook for analysis..., they ended up creating dozens of volumes which formed a manifesto for a new philosophy of mathematics. The
last of the founders of Bourbaki, Henri Cartan, died August 13 at age 104... Two of his students won the Fields medal..., one won the Nobel Prize in physics and another won the economics Nobel."
• Remove the stone of shame... (Score:5, Funny)
by Anonymous Coward on Saturday August 30, 2008 @02:33PM (#24812341)
Attach the stone of triumph!
• Thees Matematics... (Score:1, Funny)
by Anonymous Coward
Es dry and scataired. We shall revolutionise thees field, for what is pi without a nice sauce? Later we shall riot in the street over the suspension of sauce for our pi.
□ Re: (Score:3, Funny)
by fugue (4373)
Let zem eet Keeeeek!
• secret? (Score:5, Funny)
by bfields (66644) on Saturday August 30, 2008 @02:47PM (#24812447) Homepage
"Secret society" is a bit over the top! I always had the impression it had the feeling more of a running joke; from the article:
In the 1950s, Ralph Boas of Northwestern University wrote an article for the Encyclopaedia Britannica on Bourbaki, explaining that it was the pseudonym for a consortium of French
mathematicians. The editors of the encyclopedia soon received a scalding letter signed by Nicolas Bourbaki himself, declaring that he would not allow anyone to question his right to exist. In
revenge, Bourbaki began spreading the rumor that Ralph Boas himself didn't exist, and that B.O.A.S was an acronym of a group of American mathematicians.
Though it wasn't *just* a joke--they wrote a lot of very serious mathematics!
□ Re:secret? (Score:4, Funny)
by DeadDecoy (877617) on Saturday August 30, 2008 @06:13PM (#24813801)
It's like having a secret society of D&Ders with regular meetings in the basement, erm ... Painkeep. It's so secret nobody cares.
Mwahahaha this plan is brilliant. Now if only I could get with a girl and spread my brood upon the world.
□ by Whiteox (919863)
I have heard that Hans Delbrook was also a member.
• And it all came down to this (Score:2, Insightful)
by El Yanqui (1111145)
Hell is other mathematicians.
• You meant the wrong way (Score:5, Insightful)
by Anonymous Coward on Saturday August 30, 2008 @02:54PM (#24812493)
Bourbaki books are the most boring books you can buy. Avoid them at all costs. If you want to study mathematics, there are much better books.
Their motto is to never explain anything. This makes these books completely unreadable. Mathematics the right way, according to them, is:
--no examples: examples are evil beings that stray us from the true path of pure abstraction.
--never mention any applications: are you nuts? mathematics must remain pure. Applied mathematics are the spawn of the devil. If it serves some purpose, it's not mathematics anymore.
--Don't draw anything. Drawings are tools of the devil. 2D domains and geometrical figures should only exist as pure abstraction.
--The less explanations, the better: only idiots need explanations.
--Never rewrite a theorem for the sake of clarity: having 20 references to other theorems (usually in another volume) in a 5-line proof is better for clarity (don't even write the name of the theorem you refer to, a true mathematician knows them by volume and page number).
And they add insult to injury in the preface: "no prerequisite knowledge of math is needed to read this book". Yeah whatever.
□ Re:You meant the wrong way (Score:4, Insightful)
by msuarezalvarez (667058) on Saturday August 30, 2008 @03:20PM (#24812679)
If you really think there is no value in Bourbaki's texts, you know nothing of the history of mathematics, and probably you have not had to study that many textbooks from before their time...
☆ Re: (Score:1, Redundant)
by fluch (126140)
I can just second this.
☆ Re:You meant the wrong way (Score:5, Informative)
by killmofasta (460565) on Saturday August 30, 2008 @04:19PM (#24813085)
Umm. I have a few texts from the late 1800s, and they are absolutely, completely worse. Bourbaki's is the abstraction. (I have their volume on abstract algebra, which I referred to while I was taking that graduate-level course. Theirs is a terse work, much more accurate, and well thought out.) A few Springer-Verlag texts are worse also.
○ Re:You meant the wrong way (Score:5, Interesting)
by msuarezalvarez (667058) on Saturday August 30, 2008 @04:54PM (#24813291)
Indeed. And the much more literary style that was deemed acceptable before resulted not only in inaccuracy but in gross errors.
Bourbaki's work is an amazing feat, which nowadays can be appreciated maybe only with a considerable amount of historical perspective---mostly because it was extremely successful: it
set (maybe by using an elaborate, laborious, hyperbole that is, among many other things, a display of love for the subjects treated) standards against which mathematical writing was
(and is!) compared, if not judged, and the student of today has the false impression that the textbooks he reads today are of the same kind as those that were read at all times,
simply because he does not know history.
The effort spent in coming up with clear, precise definitions, detailed proofs, even with usable notation, is easy to disparage once one can enjoy its benefits.
■ Re:You meant the wrong way (Score:4, Insightful)
by Alomex (148003) on Saturday August 30, 2008 @05:58PM (#24813657) Homepage
There were no gross errors found by the Bourbaki group. This is why their horrible formal writing style died out: it increased the pain of writing without the gain of the previous
waves of math formalization.
☆ by goose-incarnated (1145029)
This is my second bitch in as many days at the mods ... how the fark is the above considered flamebait?
□ Re:You meant the wrong way (Score:4, Informative)
by PiNtoS (1318793) on Saturday August 30, 2008 @03:24PM (#24812699)
Bourbaki books are the most boring books you can buy. Avoid them at all costs.
This is a bit extreme. While they're certainly not the best books to begin learning a subject, they're great reference books. They're well written, (generally) correct, and what's more,
they've got some seriously elegant proofs. And from what I recall, they do have some diagrams (e.g., their commutative algebra book).
□ by Guignol (159087)
Wow, either you troll, or your version of a good mathematics textbook is some Dora's "let's do Maths" version of it.
Either way, this should not be modded insightful but troll or maybe flamebait.
☆ Re: (Score:1, Informative)
by Anonymous Coward
Better books:
Walter Rudin: Complex and Real Analysis (splendid book for students, great explanation of measure theory),
functional analysis from same author is good too.
Kato: Perturbation theory (very good book but quite hard, wait till graduate).
Brezis: functional analysis (french): a little abstract but short. Some early chapters are a little boring.
Yosida: functional analysis (haven't had time to finish it, so my opinion on it is not set).
Hormander: Series on linear operators are a reference. Very long.
○ Re: (Score:3, Insightful)
by lysergic.acid (845423)
yea, but how many of those authors published books in the 1930's? unless Walter Rudin started publishing his writings in his early-teens, and Hormander before he turned 10, i don't
think you can make a comparison between their works and Bourbaki's.
□ Re:You meant the wrong way (Score:5, Insightful)
by porpnorber (851345) on Saturday August 30, 2008 @07:27PM (#24814245)
I think perhaps you weren't born to be a mathematician. I seriously wish that some of my profs would have STFU'd about the applications and focused on the proofs and proof techniques. If you
are studying biology, do you really want half your class hours devoted to what plants look nice in a garden? (Maybe you do, but if so, study gardening instead.) If you are studying software
architecture, should the textbook assume that what you really care about is, I don't know, writing keyboard scanners? (Again, you might, but then why not buy a different book?) And do you
want your general psych class turned into a course on methods of military indoctrination? It doesn't matter the field, I think we'd benefit from a lot less focus on applications and a lot
more on mastery of content. Mathematics most of all, because the cultural content of mathematics is the collection of tools for thinking about pure problems, abstracted from any problem
domain. Indeed, the best advances in mathematics come, it seems to me, from abstracting internal mathematical tools away from their original mathematical focus, and thereby making them
available to the whole subject, and not just one small field.
As to issues with how theorems are referred to, I think this brings us to the root of the Bourbaki phenomenon. The cult of personality is not so productive in a field whose content is
supposedly objective, and naming results after people is a barrier to objectivity in understanding and a barrier to communication. My girlfriend is Chinese. Do you suppose she knows, or
cares, about Green's theorem or Taylor series, under those names? But five seconds with a pencil and paper and we are in sync.
Mathematics is not automotive mechanics and it is not pop music! And—I don't mean to be rude here, in making a cultural observation—that was a particularly hard lesson for French academia in
particular—though for France we needed to write "it is not the civil service and it is not religion."
☆ Re:You meant the wrong way (Score:5, Insightful)
by Scott Carnahan (587472) on Saturday August 30, 2008 @10:03PM (#24815325) Homepage
I think perhaps you weren't born to be a mathematician.
The idea that career choices are predetermined at birth is a popular romantic view (cf. the human literary corpus of epics and fairy tales about Chosen Ones), but there is essentially no
hard evidence for its validity, and I think it devalues the richness and variability of life experiences. Also, I don't think we should exclude people from mathematics just because they
don't like the sort of dry abstraction you find in Bourbaki texts. There are plenty of reasonably successful mathematicians who are more comfortable learning things when they have an
example or application in mind. For example, Timothy Gowers [wikipedia.org] wrote two [wordpress.com] posts [wordpress.com] on his blog, suggesting that exposition is improved by starting
with examples to motivate an idea.
It doesn't matter the field, I think we'd benefit from a lot less focus on applications and a lot more on mastery of content. Mathematics most of all, because the cultural content of
mathematics is the collection of tools for thinking about pure problems, abstracted from any problem domain.
From a practical standpoint, I don't think we should try to change teaching methodology too much at a time, because there are almost always weaknesses in revolutionary plans that don't
show up at the thought experiment stage. More abstractly, I think people tend to learn content better when it is motivated with a useful context. Exactly where this balance should be
struck is still a contentious issue (see math wars [wikipedia.org]), but I don't think Bourbaki is the answer. Even among pure academics, we value theoretical work by some notion of
applicability. We say that etale cohomology is a good theory, not because it lets you think abstractly about pure stuff (although it does), but because you can use it to prove hard
quantitative statements like the Weil conjectures, the Adams conjecture, and many theorems in representation theory.
○ by porpnorber (851345)
I don't, in fact, disagree with you very intensely, and yes, perhaps I spoke too strongly. I do think that there is a real current trend away from teaching subjects according to their
own self-determined core values and towards what politicians and industrialists would like, and I truly believe this to be short-sighted, damaging, and not particularly justified by
education theory, historical experience, or much else. That's not quite the same as me wanting to launch into untested didactic waters without a l
■ by martin-boundary (547041)
Well said. (+1)
□ by martin-boundary (547041)
This is ridiculous, and factually incorrect. It's a pity I only had mod points yesterday yet I didn't use them!
The Bourbaki books are readable, however they are a bit dated, and they do _not_ aim to be textbooks for first year freshmen at all, which I strongly suspect the AC is.
-- It's incorrect to say that there are no examples. There are plenty, but if you haven't done a few years of university math already, you won't recognise _what_ they are about.
-- The applications certainly exist, but they are
□ Re: (Score:2, Insightful)
by NaveNosnave (670012)
The principal drawback of the Bourbaki school is not that its output is "boring", but that it presents the final results of a theorem in a pristine format while removing all references to the
messy drudgework and dead ends it took to get there. As such, it's pleasingly elegant for practicing mathematicians, but terrible for students. Students need to see examples of math as a
process, not just as a finished product.
☆ by lekikui (1000144)
Indeed. As far as that goes, I'm having a lot of fun with the book "Numbers and Proofs". It mostly focuses on number theory, but is primarily about the process of proving, and proof
techniques. It's not a heavy duty serious textbook for university level work, but it's a nice book pre-university (which is where I am).
• Metascore (Score:2, Interesting)
by Anonymous Coward
From what I can tell, Metascore is an attempt by mathematicians to take over the government. In fact, every government.
http://www.metascore.org/ [metascore.org]
Difference is they do not seem to be very secretive.
• They really are radicals! (Score:5, Funny)
by CaptainPatent (1087643) on Saturday August 30, 2008 @03:06PM (#24812579) Journal
I don't want to get off on a tangent, but the whole group seem kind of irrational.
How does someone become a member of this finite group? Do they have to stand in the middle while the rest of them form a perfect square around them? I wonder if they have to hide their identity?
oh well, at least at the end there's pi!
□ The pi is a lie.
☆ The cake is a lie group.
□ Re: (Score:3, Funny)
by exp(pi*sqrt(163)) (613870)
> How does someone become a member of this finite group?
You make sure you have an inverse and that you associate nicely with the other elements.
□ by jonaskoelker (922170)
Just a quick comment from the left field (mama never taught me to be discrete): maybe they could stand in a ring instead (of a square) during their analysis of these complex issues, during
their attempt to see what can be derived from the facts. That would seem more natural and rational; at least it's the norm.
• Not so secret... (Score:4, Informative)
by mbone (558574) on Saturday August 30, 2008 @03:08PM (#24812587)
This was common knowledge when I was taking advanced Math classes in mid 1970's.
• Why are we celebrating these books? (Score:4, Insightful)
by dlenmn (145080) on Saturday August 30, 2008 @03:23PM (#24812693) Homepage
From TFA:
The result was austere books with almost no examples, guides for intuition, or pictures. Philip Davis of Brown University described them in an article in SIAM News as "mathematics with all its juices extracted; bare bones, skeletonic, anorexic stuff; Twiggy dressed in the tunic of Euclid." Michael Atiyah of the University of Edinburgh says: "They're not designed to be read. They're designed to set out a thesis for how mathematics ought to be done."
And they thought other books were dry? I guess books like this may have some use for hard-core math types, but they sound like horrible books for almost anyone else. Examples, pictures, and the like are very important for learning. Designing books not to be read seems like a silly exercise.
□ Re:Why are we celebrating these books? (Score:4, Informative)
by Anonymous Coward on Saturday August 30, 2008 @03:40PM (#24812807)
Bourbaki was writing for a graduate student or professional level mathematician, not a public audience. The writing is dry by today's standards - indeed, many texts today use more prose and
include diagrams. However, Bourbaki was very good at getting the mathematics itself clearly defined.
☆ Re: (Score:3, Interesting)
by ObsessiveMathsFreak (773371)
However, Bourbaki was very good at getting the mathematics itself clearly defined.
It's just a pity they were never able to clearly get it across.
Bourbaki, and the Bourbaki style, makes great reference material. But that's all it makes. There is more to mathematics, and pictures and example are part of that "more". A big part.
Bourbaki did not just forget these topics. They actively excluded them. Jean Dieudonne [wikipedia.org] stood up in the middle of a conference and shouted "Down with Euclid! Death to
Triangles!". It wa
□ by Nicolas MONNET (4727)
I guess books like this may have some use for hard core math types, but they sound like horrible books for almost anyone else.
You're funny. You have no idea what mathematics actually are, do you? Or how hard core this stuff is *supposed* to be.
☆ by dlenmn (145080)
All I know is that the books I've used for graduate math classes have had both diagrams and examples... I suppose that makes me outright hilarious.
□ Re: (Score:2, Informative)
Many people blame Bourbaki for the horrendous "new math" which infected mathematics teaching in the 1960's. And there is some validity to that accusation. A scathing indictment of Bourbaki
was given by the renowned mathematician V.I. Arnold, author of famous books on classical mechanics and differential equations. Arnold tears apart the dry, lifeless and phony "rigor" and
"purity" of Bourbaki and others who divorce mathematics from reality, which he describes as "sectarianism and isolationism which destroy
• New Math of the 1960's - they invented it (Score:5, Interesting)
by Cliff Stoll (242915) on Saturday August 30, 2008 @03:28PM (#24812735) Homepage
Aargh! From these mathematicians grew the "New Math" of the early 1960's.
During the 1950's, high school math was mainly geometry, algebra, trig, and calculus.
Then came the New Math. Imported from France, it emphasized set theory, number bases, and abstract number theory. Students learned cardinality, commutative laws, associative laws, and "pure"
math, with less applied math and problem-solving.
Many educators (and even more parents) saw the New Math as too abstract for daily use, undercutting concrete skills such as computation. Physicists especially objected when college freshmen could calculate in multiple number bases but couldn't solve algebraic equations.
Mathematician/singer Tom Lehrer wrote the song "New Math", with the line, "It's so very simple that only a child can do it!"
One book, Why Johnny Can't Add: The Failure of the New Math, pointed out that by the mid-1970's teachers applauded the death of the New Math. By the late 70's, algebra was back in style, and even trigonometry was being taught. So ended the French invasion of high school math classes.
The latest, of course, is the new-new-math, also called rain-forest math. Don't get me started...
□ New Math was Horrible! (Score:4, Interesting)
by Anonymous Coward on Saturday August 30, 2008 @04:14PM (#24813043)
I survived four years of New Math - it was so easy that you couldn't do anything wrong. Straight A's in math.
Then I got to college and met all those equations. Everyone else solved them but me. Two weeks into chem 101 and I was flailing.
It's true - the Bourbaki group *invented* the New Math, and pushed it into classrooms around the world. Millions of adults are now math-illiterates because of these oh-so-pure mathematicians.
☆ by hedwards (940851)
If it was so easy you couldn't do anything wrong then you were either a genius or it was completely without any value.
Clearly in your case if you weren't able to handle the chem 101 math that's an indication that you hadn't learned the math.
The mathematics involved with that level of chemistry is not that tough, and certainly not as much as is required for freshman level calculus.
Math is a pursuit where you're either right or wrong. In some cases being right may mean being within the margin for error but no
○ by Watson Ladd (955755)
The point of teaching math is for use in doing math, not physics.
■ Re: (Score:2, Insightful)
by linzeal (197905)
No, the point of teaching math is to give the skill sets needed for analyzing other subjects mathematically. No one should be teaching math in a purely abstract way to anyone but
grad students.
○ by pjt33 (739471)
...or it was completely without any value
I wouldn't be hasty to discount that possibility, given another of Lehrer's characterisations of "New Math":
[i]n the new approach, as you know, the important thing is to understand what you're doing rather than to get the right answer.
☆ by blahplusplus (757119)
"I survived four years of New Math - it was so easy that you couldn't do anything wrong. Straight A's in math. .... Millions of adults are now math-illiterates because of these oh-so-pure mathematicians."
I agree, but I think the whole problem is that we don't start with a geometric interpretation of math; the Mayans used shapes for numerals, and Arabic symbols hide mathematical truths that are expressed better in images, visual geometric shapes.
We already do math unconsciously else we could not navigate, we co
□ by mbone (558574)
I took the new math, took lots of algebra along with the set theory, and actually thought it helped me as a physicist.
Lots depends on what you want to do. I can't imagine not knowing calculus, but plenty of people don't and they somehow don't seem to miss it.
□ I had new math and as a result my equation solving ability was utterly horrible and honestly still is... but I did find that new math laid the groundwork for databases down the road...
cardinality and set theory are sort of the gist of databases... and, well, commutativity and associativity are useful for understanding operators in programming languages.
□ by WATist (902972)
Did they actually push "New Math"? As you said, it was imported. They were writing for college audiences. This stinks of jumping on the bandwagon. Someone heard how great this was from publishers' salesmen and demanded kids learn it.
• by Bromskloss (750445)
Are the group's works [bourbaki.ens.fr] available online somewhere? I'd like to see their condensed style.
□ by Neil Strickland (1064886)
I don't know about the books (listed at the parent's link), but the journal 'Seminaire N. Bourbaki' is at NUMDAM [numdam.org]. I'm not sure about the formal relationship, but much of the journal is in a fairly similar style to the books. There's lots of other good French mathematics at the same site.
Is the group's works available online somewhere?
The guy just died, so they're copyrighted for only another 70 years. See how well it works?
• Secret Society (Score:4, Funny)
by PPH (736903) on Saturday August 30, 2008 @04:06PM (#24812989)
Dedicated to the ongoing fight against the "show your work" admonitions of math teachers everywhere.
• by sw155kn1f3 (600118)
They can make RTFA about mathematicians to sound like this is an article about some couturier. Well.. It's all geometry after all! .. AND applied to nude models ;) so this "secret society" should
be into something, eh
□ by ConceptJunkie (24823)
- Arwen, I'm your father, Agent Smith.
- Well, you're just Smith, but my father is Aerosmith!
I think my brain just core-dumped. Count Dooku is wailing on Magneto with his light-sabre.
☆ by sw155kn1f3 (600118)
Yeah.. This is nice anecdote playing on The Matrix (Hugo Weaving playing agent Smith), Lord of the Rings (Hugo Weaving playing Elrond, father of Arwen played by Liv Tyler) and who Liv
Tyler's father is :)
• from Straight Dope (Score:5, Funny)
by mbius (890083) on Saturday August 30, 2008 @07:13PM (#24814165) Journal
The following examples may help to clarify the difference between the new and old math.
1960: A logger sells a truckload of lumber for $100. His cost of production is 4/5 of this price. What is his profit?
1970 (Traditional math): A logger sells a truckload of lumber for $100. His cost of production is $80. What is his profit?
1975 (New Math): A logger exchanges a set L of lumber for a set M of money. The cardinality of set M is 100 and each element is worth $1.
(a) make 100 dots representing the elements of the set M
(b) The set C representing costs of production contains 20 fewer points than set M. Represent the set C as a subset of the set M.
(c) What is the cardinality of the set P of profits?
1990 (Dumbed-down math): A logger sells a truckload of lumber for $100. His cost of production is $80 and his profit is $20. Underline the number 20.
1997 (Whole Math): By cutting down a forest full of beautiful trees, a logger makes $20.
(a) What do you think of this way of making money?
(b) How did the forest birds and squirrels feel?
(c) Draw a picture of the forest as you'd like it to look.
• About the Bourbaki crowd... (Score:1, Informative)
by Anonymous Coward
1. Bourbaki was not a "secret society", although it didn't publish a membership list because membership wasn't particularly official. Even some non-French mathematicians participated.
2. They did not publish "text" books, they published carefully written reference books.
3. The reputation was that they met once a year or so in a nice French resort with a reputation for good wine, to enjoy themselves and argue about the best wording for proofs in the next volumes.
4. While in principle, you might r
[Numpy-discussion] Rookie problems - Why is C-code much faster?
Gary Ruben gruben at bigpond.net.au
Tue Feb 21 15:34:01 CST 2006
Something like this would be great to see in scipy. Pity about the licence.
Gary R.
Alan G Isaac wrote:
> On Tue, 21 Feb 2006, (CET) Mads Ipsen apparently wrote:
>> My system consists of N particles, whose coordinates in
>> the xy-plane is given by the two vectors x and y. I need
>> to calculate the distance between all particle pairs
> Of possible interest?
> http://www.cs.umd.edu/~mount/ANN/
> Cheers,
> Alan Isaac
More information about the Numpy-discussion mailing list
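The question in the quoted thread — computing the distance between all particle pairs from coordinate vectors x and y — can be done entirely inside NumPy with broadcasting, which keeps the inner loop in C. A minimal sketch (the function name and the 3-particle example are mine, not from the thread):

```python
import numpy as np

def pairwise_distances(x, y):
    """All pairwise distances between N particles in the xy-plane.

    x, y: 1-D coordinate arrays of length N.  Returns an (N, N) array D
    with D[i, j] = distance between particle i and particle j.
    Broadcasting builds the (N, N) difference matrices without a Python loop.
    """
    dx = x[:, None] - x[None, :]   # (N, N) matrix of x-differences
    dy = y[:, None] - y[None, :]   # (N, N) matrix of y-differences
    return np.sqrt(dx**2 + dy**2)

# Three particles forming a 3-4-5 right triangle
x = np.array([0.0, 3.0, 0.0])
y = np.array([0.0, 0.0, 4.0])
D = pairwise_distances(x, y)
```

Note this builds two O(N²) temporaries, so for very large N a compiled kernel (or a spatial data structure like the ANN library mentioned above) is still the better tool.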
Worksheet on Profit and Loss
In worksheet on profit and loss, we can see below there are 15 different types of questions which we can practice in our homework.
1. Find the profit or loss:
2. Fill in the blanks:
From question 3 onwards we can see word problems on profit and loss.
Word Problems on Profit and Loss.
3. A TV was bought for $ 18,950 and sold at a loss of $ 4780. Find the selling price.
4. A second hand car was sold for $ 190000, at a loss of $ 85. Find the CP of the car.
5. Jane sold her genset for $ 20000 at a profit of $ 1737. Find the CP of the genset.
6. Abraham bought a music system for $ 6375.00 and spent $ 75.00 on its transportation. He sold it for $ 6400.00. Find his profit or loss percent.
7. Joy bought pens at $ 120 a dozen. He sold them for $ 15 each. What is his profit percent?
8. Simi bought a study table for $ 9000. She sold it at a profit of 20%. How much profit did she make? What is the selling price?
9. Find the selling price if the cost price is $ 1200 and loss percent is 25.
10. Marshall bought 20 refills and sold them at $ 4 each. If it had cost $ 50 for the refills, what was his profit or loss percent?
11. Mr. Smith buys pencils at $ 250 per hundred and sells each at $ 1.75. Find his loss or profit.
12. Davis bought a second hand cycle for $ 500. He spent $ 80 in repairs and $ 175 in repainting. He then sold it to John for $ 900. How much did he gain or lose?
13. A fruit vendor bought 600 apples for $ 4800. He spent $ 400 on transportation. How much should he sell each to get a profit of $ 1000?
14. Tim bought a box of chocolates for $ 650 and sold it to Tom at a profit of $ 75. Find the selling price.
15. David bought 2 dozen eggs for $ 56. Since 6 of them broke, he incurred a loss of $ 20 on selling them. What was the selling price of one egg?
If you have any doubts while solving the worksheet on profit and loss please Contact Us so that we can help you.
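All the percent questions above follow the same pattern: selling price = cost price ± profit/loss, with the percent reckoned on the cost price. A small Python sketch (not part of the worksheet) applied to question 7, where pens cost $120 a dozen ($10 each) and sell at $15 each:

```python
def profit_or_loss(cost_price, selling_price):
    """Return (kind, amount, percent) for a transaction.

    Profit or loss percent is conventionally computed on the cost price.
    """
    diff = selling_price - cost_price
    kind = "profit" if diff >= 0 else "loss"
    percent = abs(diff) / cost_price * 100
    return kind, abs(diff), percent

# Question 7: CP = $120 / 12 = $10 per pen, SP = $15 per pen
kind, amount, percent = profit_or_loss(10, 15)
```

Here the $5 gain on a $10 cost gives a 50% profit.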
Please Look At The Circuit: Look At How Chegged ... | Chegg.com
Please look at the circuit:
Look at how Chegg added the 2 parallel resistors. They put the 12 ohm resistor on the right side, while I put it on the left side.
1. Is this the same thing?
2. How do I know which side of the current source to put the combined parallel resistors on?
Now look at how chegg converted the parallel current and resistor to a series voltage source and resistor:
3. "What" tells you to put the voltage source and 12 ohm resistor at the bottom versus at the middle line?
4. Also, what tells you to put the voltage source on the left side and the 12 ohm resistor on the right side, versus putting the voltage source on the right side and the 12 ohm resistor on the left side?
Please ANSWER all 4 questions so I can understand it all.
Don't just answer one or two.
Electrical Engineering
Magnetic Fields Due to Currents
In this chapter we will explore the relationship between an electric current and the magnetic field it generates in the space around it.
For problems with low symmetry we will use the Biot-Savart law in combination with the principle of superposition.
For problems with high symmetry we will introduce Ampere’s law.
Both approaches will be used to explore the magnetic field generated by currents in a variety of geometries (straight wire, wire loop, solenoid coil, toroid coil).
We will also determine the force between two parallel, current-carrying conductors. We will then use this force to define the SI unit for electric current (the ampere).
Magnetic Field Due to a current
A moving charge produces a magnetic field whose magnitude and direction are given by the "Biot-Savart law" (pronounced bee-oh sah-VAR):

$\vec{B} = \frac{\mu_0}{4\pi}\,\frac{q\,\vec{v}\times\hat{r}}{r^2}$

Here $\vec{B}$ is the magnetic field at a position $\vec{r}$ due to a charge q moving with velocity $\vec{v}$, and $\mu_0$ is the permeability constant.

Current in a wire is an example of moving charge, therefore it should produce a magnetic field.

Consider a small segment ds of a wire carrying current i.

If $v_d$ is the drift velocity, then the time t taken by all the conduction electrons in the segment ds to cross the line A is given as $t = ds/v_d$.

The total charge q moving in time t through this segment is $q = i\,t = i\,ds/v_d$.

The magnetic field due to this segment of wire at point P will be given as

$d\vec{B} = \frac{\mu_0}{4\pi}\,\frac{q\,\vec{v}_d\times\hat{r}}{r^2}$

Since the directions of $d\vec{s}$ and $\vec{v}_d$ are the same, we can rewrite the above equation by substituting the value of q:

$d\vec{B} = \frac{\mu_0}{4\pi}\,\frac{i\,d\vec{s}\times\hat{r}}{r^2}$

Now, with θ the angle between $d\vec{s}$ and $\hat{r}$, the magnitude of $d\vec{B}$ can be written as

$dB = \frac{\mu_0}{4\pi}\,\frac{i\,ds\,\sin\theta}{r^2}$

Since the magnetic field is proportional to the cross product (×) of the vectors $d\vec{s}$ and $\hat{r}$, the direction of $d\vec{B}$ will always be perpendicular to both $d\vec{s}$ and $\hat{r}$.
Tips to find direction of magnetic field due to a wire carrying current:
Point the thumb of right hand in the direction of current, curled fingers point to the direction of magnetic field .
Draw a circle on a plane (page) perpendicular to the direction of current with wire at the center.
If current is going in the plane (page), magnetic field goes in clockwise direction.
If current is coming out of the plane (page), magnetic field is in anti-clockwise direction.
Magnetic field at any point P on the circle will point in the direction of the tangent and will always be perpendicular to the radial vector joining the point P and the wire at the center.
Magnetic Field of a Current carrying Long Straight Wire
Consider a very long (infinitely long ) wire.
How can we compute magnetic field due to this wire at a point P?
We know how to calculate magnetic field at a point due to a small segment of a wire carrying current.
We can divide the wire in several small segments of length ds, and can compute magnetic field due to each of these segments at point P.
We know Magnetic field is a vector quantity and net field at any point can be computed by vector sum of the magnetic fields due to all these segments (law of superposition).
The net magnetic field at point P due to all the segments will be

$\vec{B} = \sum_i d\vec{B}_i$

If we apply the right-hand rule, the direction of $d\vec{B}$ at point P due to each element points into the page; it is perpendicular to both $d\vec{s}$ and $\hat{r}$.

Therefore the magnitude of the net magnetic field B can be obtained by adding the magnitudes of the fields due to the individual segments.

If the segment ds is very small, the sum can be written as an integral. Let P be at perpendicular distance R from the wire and let s measure position along the wire; then

$B = \frac{\mu_0 i}{4\pi}\int_{-\infty}^{\infty}\frac{\sin\theta}{r^2}\,ds$

In the above figure,

$r = \sqrt{s^2 + R^2}, \qquad \sin\theta = \frac{R}{\sqrt{s^2 + R^2}}$

With the substitution of θ and r, the integral can be rewritten as

$B = \frac{\mu_0 i R}{4\pi}\int_{-\infty}^{\infty}\frac{ds}{(s^2 + R^2)^{3/2}}$

Solving this integral gives us the value

$\int_{-\infty}^{\infty}\frac{ds}{(s^2 + R^2)^{3/2}} = \frac{2}{R^2}$

By substituting the value of the integral we get

$B = \frac{\mu_0 i}{2\pi R}$

In the above example point P was opposite the middle of the wire.

What will be the magnetic field if point P is located opposite one end of a (semi-infinite) wire?

In this case the integration limits run from 0 to ∞, instead of from -∞ to ∞.

Therefore the magnetic field at the end of the wire will be half of the magnetic field at the middle of the wire:

$B = \frac{\mu_0 i}{4\pi R}$
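As a numerical sanity check (not part of the original notes), the segment-by-segment sum above can be carried out directly: adding up dB = (μ0/4π) i ds sinθ / r² over many small segments of a long-but-finite straight wire should reproduce B = μ0 i / (2πR). A short Python sketch:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def wire_field_numeric(i, R, half_length=100.0, n=200_000):
    """Sum Biot-Savart contributions dB = (mu0/4pi)*i*ds*sin(theta)/r^2
    over small segments of a straight wire along the s-axis, at
    perpendicular distance R.  A wire much longer than R approximates
    the infinite wire of the derivation above."""
    ds = 2 * half_length / n
    B = 0.0
    for k in range(n):
        s = -half_length + (k + 0.5) * ds      # segment midpoint
        r2 = s * s + R * R                      # r^2 = s^2 + R^2
        sin_theta = R / math.sqrt(r2)           # sin(theta) = R / r
        B += MU0 / (4 * math.pi) * i * ds * sin_theta / r2
    return B

i, R = 2.0, 0.05                        # a 2 A wire, field 5 cm away
B_numeric = wire_field_numeric(i, R)
B_exact = MU0 * i / (2 * math.pi * R)   # closed form from the text
```

The two values agree to well under 0.1%, which is a good check that the integral was set up correctly.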
Magnetic Field of a Current carrying Circular arc of Wire
Let us calculate magnetic field at the center of a circular arc of wire carrying current i.
Since each element $d\vec{s}$ of the arc is perpendicular to the radial direction $\hat{r}$, the magnitude of $d\vec{B}$ of each element at the center of the arc is given as

$dB = \frac{\mu_0}{4\pi}\,\frac{i\,ds}{R^2}$

Since the magnetic field due to all the segments points out of the page, the net field at the center can be computed by adding the magnitudes due to all the segments.

If the segment ds is very small then, with $ds = R\,d\phi$, the summation can be written as an integral:

$B = \int_0^{\phi}\frac{\mu_0}{4\pi}\,\frac{i\,R\,d\phi}{R^2} = \frac{\mu_0\, i\, \phi}{4\pi R}$

When the arc is a complete circle then φ=2π, therefore the field at the center of a circular wire will be

$B = \frac{\mu_0 i}{2R}$
Magnetic Field on the axis of a Current carrying Circular loop
Let us calculate the magnetic field at a point P on the axis of a circular loop of radius R carrying current i, at a distance z from its center.

Here the angle between $\hat{r}$ and the segment $d\vec{s}$ is 90°, therefore the magnitude of $d\vec{B}$ of each element at point P on the axis of the circular loop is given as

$dB = \frac{\mu_0}{4\pi}\,\frac{i\,ds}{r^2}$

As we have discussed earlier, the direction of $d\vec{B}$ is perpendicular to the position vector $\hat{r}$ and the segment $d\vec{s}$.

Let us resolve $d\vec{B}$ into two components: one parallel to the z-axis, $dB_\parallel = dB\cos\alpha$, and one perpendicular to the z-axis, $dB_\perp$.

Due to symmetry, $dB_\perp$ due to a segment on the left is opposite to $dB_\perp$ of the symmetric segment on the right. Therefore the sum of all perpendicular components is equal to zero.

Thus the only components contributing to the total magnetic field are the $dB\cos\alpha$ of all the segments, and the net magnetic field is their sum.

The value of cos α and the value of r at a distance z from the center are given as

$\cos\alpha = \frac{R}{r}, \qquad r = \sqrt{z^2 + R^2}$

If ds is very small, the summation can be written as an integral, and the magnetic field at a distance z from the center is

$B(z) = \frac{\mu_0\, i\, R^2}{2\,(z^2 + R^2)^{3/2}}$

When the observation point is very far from the current loop, or R ≪ z, the magnetic field at that point is given as

$B(z) \approx \frac{\mu_0\, i\, R^2}{2 z^3} = \frac{\mu_0}{2\pi}\,\frac{i A}{z^3}$

Here $A = \pi R^2$ is the area of the current loop. In the last chapter we have seen that the magnitude of the magnetic moment μ of a dipole is $\mu = N i A$.

In the present case the number of loops N is one, therefore $\mu = i A$.

Since the directions of $\vec{B}$ and $\vec{\mu}$ are the same, in vector form we can write

$\vec{B}(z) = \frac{\mu_0}{2\pi}\,\frac{\vec{\mu}}{z^3}$
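The on-axis formula and its far-field (dipole) limit can be checked numerically. The sketch below (my own, following the formulas in the text) evaluates both and confirms that at z = 0 the loop formula reduces to the center-of-loop result μ0 i/(2R), and that for z ≫ R the dipole approximation agrees closely:

```python
import math

MU0 = 4 * math.pi * 1e-7  # T*m/A

def loop_axis_field(i, R, z):
    """Exact on-axis field: B = mu0*i*R^2 / (2*(z^2 + R^2)^(3/2))."""
    return MU0 * i * R**2 / (2 * (z**2 + R**2) ** 1.5)

def dipole_far_field(i, R, z):
    """Far-field form B ~ mu0*mu / (2*pi*z^3), with mu = i * pi * R^2."""
    mu = i * math.pi * R**2
    return MU0 * mu / (2 * math.pi * z**3)

i, R = 1.0, 0.01                          # 1 A loop of radius 1 cm
B_center = loop_axis_field(i, R, 0.0)     # should equal mu0*i/(2R)
B_far = loop_axis_field(i, R, 1.0)        # 1 m away: z >> R
B_dip = dipole_far_field(i, R, 1.0)       # dipole approximation
```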
Checkpoint 1
The figure here shows four arrangements of circular loops of radius r or 2r, centered on vertical axes (perpendicular to the loops) and carrying identical currents in the directions indicated. Rank
the arrangements according to the magnitude of the net magnetic field at the dot, midway between the loops on the central axis, greatest first.
Hint : Magnetic field is proportional to area of the loop. Direction is given by right hand rule.
Force Between Two Parallel Currents.
Interactive Checkpoint - 1 (Force Between two Parallel Currents)
Two current carrying wires are placed parallel to each other. We can say a current carrying wire is placed in the magnetic field produced by another current carrying wire.
(a) Will the wires attract or repel if the currents are parallel?
(b) Will the wires attract or repel if the currents are anti parallel?
Let us consider two parallel wires of length L each, carrying currents $i_a$ and $i_b$.
How to calculate magnetic force between two current-carrying wires?
First find the magnetic field due to second (b) wire at the position of first (a) wire.
Then calculate the force on first (a) wire due to the field of second (b) wire.
The magnetic force on the first (a) wire placed in the magnetic field of the second (b) wire is given as

$\vec{F}_{ba} = i_a\,\vec{L}\times\vec{B}_b$

Here $\vec{L}$ is a length vector that has magnitude L and is directed along the direction of the current in the wire.

The magnitude of the magnetic field due to the second (b) wire at a distance d is

$B_b = \frac{\mu_0\, i_b}{2\pi d}$

where $i_b$ is the current through the second wire. Now the force on the first wire is

$F_{ba} = i_a L B_b = \frac{\mu_0\, L\, i_a i_b}{2\pi d}$

By the right-hand rule we can find the direction of the force, and it follows the rule:

Parallel currents attract each other, and antiparallel currents repel each other.
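This force per unit length is what historically defined the SI ampere: two long parallel wires 1 m apart, each carrying 1 A, exert exactly 2×10⁻⁷ N on each metre of the other. A quick check of the formula (sketch of mine, not from the notes):

```python
import math

MU0 = 4 * math.pi * 1e-7  # T*m/A

def force_per_length(i_a, i_b, d):
    """F/L = mu0 * i_a * i_b / (2*pi*d) between two long parallel wires.
    Positive magnitude; attraction for parallel currents, repulsion
    for antiparallel ones."""
    return MU0 * i_a * i_b / (2 * math.pi * d)

# Historic definition of the ampere: 1 A, 1 A, 1 m apart
f = force_per_length(1.0, 1.0, 1.0)   # expect 2e-7 N/m
```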
Checkpoint 2
The figure here shows three long, straight, parallel, equally spaced wires with identical currents either into or out of the page. Rank the wires according to the magnitude of the force on each due
to the currents in the other two wires, greatest first.
Hint : Parallel currents attract and anti parallel repel. Magnitude of field reduces with distance.
Ampere's Law
Gauss law is used to compute electric field in certain symmetric charge distribution, similarly if the current distribution is considerably symmetric, Ampere's law can be used to find the magnetic
field with considerably less effort.
According to Ampere's law,

$\oint \vec{B}\cdot d\vec{s} = \mu_0\, i_{enc}$

The loop on the integral sign means that the scalar (dot) product $\vec{B}\cdot d\vec{s}$ is to be integrated around a closed loop, called an Amperian loop.

The current $i_{enc}$ is the net current encircled by that closed loop.
How do we compute the integral $\oint \vec{B}\cdot d\vec{s}$?

Divide the closed path into n segments $\Delta\vec{s}_1, \Delta\vec{s}_2, \ldots, \Delta\vec{s}_n$.

Compute the sum

$\sum_{i=1}^{n} \vec{B}_i\cdot\Delta\vec{s}_i$

Here $\vec{B}_i$ is the magnetic field at the location of the ith segment. In the limiting case the summation can be replaced by the integral $\oint \vec{B}\cdot d\vec{s}$.
For calculating $i_{enc}$, choose any arbitrary direction of travel around the Amperian loop.
Curl the fingers of right hand in the direction of Amperian loop and note the direction of thumb.
All the currents inside the loop parallel to the thumb are counted as positive.
All the currents inside the loop anti parallel to the thumb are counted as negative.
All the currents outside the loop are not counted.
For the example shown in the figure above, the enclosed current is the signed sum of the currents counted by this rule.
Checkpoint 3
The figure here shows three equal currents i (two parallel and one antiparallel) and four Amperian loops. Rank the loops according to the magnitude of $\oint \vec{B}\cdot d\vec{s}$ along each, greatest first.
Hint : Only count currents inside the loop.
Application of Ampere' s Law
Example - 1 (Magnetic Field Outside a Long Straight Wire with Current)
Let us calculate magnetic field due to a long straight wire that carries current i straight through the page.
Draw an Amperian loop as a circle of radius r around the wire with its center at the wire center.
Since all the points on the circle are equidistant from the wire, the magnitude of $\vec{B}$ will be the same at any point on the circle.

At any point the angle between $\vec{B}$ and $d\vec{s}$ is 0°, therefore

$\oint \vec{B}\cdot d\vec{s} = B\oint ds = B\,(2\pi r)$

According to Ampere's law, the direction of the current is parallel to the thumb, so it is counted as positive, and

$B\,(2\pi r) = \mu_0 i \quad\Rightarrow\quad B = \frac{\mu_0 i}{2\pi r}$

This is the relation we derived earlier using the long calculus method.

Ampere's law holds true for any closed path. We choose the path that makes the calculation of $\oint \vec{B}\cdot d\vec{s}$ as easy as possible.
Example - 2 (Magnetic Field inside a Long Straight Wire with Current)
Let us calculate magnetic field due to a long straight wire of radius R, that carries current i straight through the page. Assume that the distribution of current with in the cross-section is
uniform, or current density in the wire is a constant.
Draw an Amperian loop as a circle of radius r < R, around the wire with its center at the wire center.

Since all the points on the circle are equidistant from the wire center, the magnitude of $\vec{B}$ will be the same at any point on the circle.

At any point the angle between $\vec{B}$ and $d\vec{s}$ is 0°, therefore

$\oint \vec{B}\cdot d\vec{s} = B\,(2\pi r)$

$i_{enc}$ is the fraction of the total current i which is passing through the area of the circle of radius r. With uniform current density, $i_{enc}$ is given as

$i_{enc} = i\,\frac{\pi r^2}{\pi R^2}$

According to Ampere's law, the direction of current parallel to the thumb should be taken as positive, and

$B\,(2\pi r) = \mu_0\, i\,\frac{r^2}{R^2} \quad\Rightarrow\quad B = \left(\frac{\mu_0 i}{2\pi R^2}\right) r$

If we plot the magnetic field as a function of the distance r from the center of the wire, B grows linearly inside the wire, reaches $\mu_0 i/(2\pi R)$ at the surface r = R, and falls off as 1/r outside.
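The inside/outside behavior can be captured in a few lines. The sketch below (my own, following the two formulas just derived) evaluates B(r) piecewise and checks that the two branches meet continuously at the surface r = R:

```python
import math

MU0 = 4 * math.pi * 1e-7  # T*m/A

def wire_field(i, R, r):
    """Field of a long straight wire of radius R with uniform current i.

    Inside  (r < R):  B = (mu0*i / (2*pi*R^2)) * r   -- grows linearly
    Outside (r >= R): B = mu0*i / (2*pi*r)           -- falls off as 1/r
    """
    if r < R:
        return MU0 * i * r / (2 * math.pi * R**2)
    return MU0 * i / (2 * math.pi * r)

i, R = 5.0, 0.002                       # 5 A in a 2 mm radius wire
B_half = wire_field(i, R, R / 2)        # halfway to the surface
B_surface = wire_field(i, R, R)         # at the surface, mu0*i/(2*pi*R)
B_inside_limit = wire_field(i, R, R * 0.999999)  # just under the surface
```

At r = R/2 the field is exactly half the surface value, and the inside formula approaches the outside formula at r = R, so the plot of B(r) is continuous.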
Solenoid and Toroid
Magnetic field of a solenoid
A solenoid is a coil of conducting wire as shown below
Following figure shows a vertical cross-section through the central axis of a stretched out solenoid. Magnetic field lines of a stretched out solenoid.
Magnetic field lines of a real solenoid.
The field is strong and uniform at the interior point but relatively weak at external points such as .
In an ideal solenoid we take field at any external point as zero and uniform at any point inside the solenoid.
Let us now apply Ampere's law to a real solenoid.
We can consider a rectangular Amperian loop abcda. Since $\vec{B}$ is uniform inside the solenoid and zero outside, we can write $\oint \vec{B}\cdot d\vec{s}$ as a sum of four integrals:

$\oint \vec{B}\cdot d\vec{s} = \int_a^b \vec{B}\cdot d\vec{s} + \int_b^c \vec{B}\cdot d\vec{s} + \int_c^d \vec{B}\cdot d\vec{s} + \int_d^a \vec{B}\cdot d\vec{s}$
The first integral on the right is Bh, where B is the magnitude of the field and h is the length of segment ab.
Second and fourth integral are zero because B is perpendicular to the direction inside the solenoid and B is zero out side the solenoid.
Third integral is also zero as B is zero outside the solenoid.
The total value of the integral will therefore be

$\oint \vec{B}\cdot d\vec{s} = B\,h$

If n is the number of wire turns per unit length, then the total number of turns enclosed by the Amperian loop will be nh. If i is the current through each turn, the total enclosed current will be $i_{enc} = i\,n\,h$.

According to Ampere's law, $B h = \mu_0\, i\, n\, h$, so the magnitude of B inside the solenoid will be given as

$B = \mu_0\, i\, n$
Although it is derived for an infinite solenoid, it holds true for an actual solenoid if we measure B at a point inside the solenoid away from the edges.

The magnetic field inside a solenoid depends only on the number of turns per unit length and the current; it is independent of the area (radius of the solenoid) of the loops.
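A worked number makes the formula concrete. The sketch below (values chosen for illustration, not from the notes) computes B = μ0 n i for a solenoid with 1000 turns wound over 0.5 m carrying 2 A:

```python
import math

MU0 = 4 * math.pi * 1e-7  # T*m/A

def solenoid_field(i, n):
    """B = mu0 * n * i inside an ideal solenoid.
    n is the number of turns per metre; note the radius does not appear."""
    return MU0 * n * i

# 1000 turns over 0.5 m => n = 2000 turns/m, carrying 2 A
B = solenoid_field(2.0, 1000 / 0.5)
```

This gives B ≈ 5.03 mT, regardless of the solenoid's radius.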
Magnetic Field of a Toroid
A toroid is a ring shaped solenoid as shown below
Following figure shows a horizontal cross-section of the toroid.
What is the magnetic field inside a toroid?
We can find out by applying Ampere' s law. From symmetry we see that B forms concentric circles inside toroid.
Consider a circular Amperian loop of radius r inside the toroid. By symmetry, $\oint \vec{B}\cdot d\vec{s} = B\,(2\pi r)$.

If N is the total number of turns, the enclosed current is $i_{enc} = N i$. According to Ampere's law, $B\,(2\pi r) = \mu_0\, N\, i$.

The magnetic field B inside the toroid at a distance r from the center of the toroid ring will be given as

$B = \frac{\mu_0\, N\, i}{2\pi r}$
notes on teleconference, 9 sep
• To: w3c-math-erb@w3.org
• Subject: notes on teleconference, 9 sep
• From: Ron Whitney <RFW@MATH.AMS.ORG>
• Date: Tue, 10 Sep 1996 14:25:06 -0400 (EDT)
• From w3c-math-erb-request@www10.w3.org Tue Sep 10 14: 25:35 1996
• Mail-system-version: <MultiNet-MM(369)+TOPSLIB(158)+PMDF(5.0)@MATH.AMS.ORG>
• Message-id: <842379906.552078.RFW@MATH.AMS.ORG>
Notes on HTML-Math Interest Group Teleconference Call
9 Sep 96
In attendance:
Ste'phane Dalmas Safir, INRIA
Patrick Ion Mathematical Reviews
Robert Miner The Geometry Center
Bruce Smith Wolfram Research, Inc.
Bob Sutor IBM
Stephen Watt Safir, INRIA
Ron Whitney American Mathematical Society
Ralph Youngen American Mathematical Society
[Notes prepared by RW. Corrections welcome.]
Bruce indicated that he is making progress on the Mathematica parser,
and reduced his estimate to achieving something demonstrable to a
week. Robert has continued to work on his Java renderer (which reads
code in `Presentation List' form and displays it).
There was little discussion about the Sep30/Oct1 meeting logistics.
We talked about the groupings and priorities posted by Robert in his
note of 3/5 September. [[For convenience, the list is reproduced here:
"1. Secondary and undergraduate level math notation with
little or no semantic information for generic visual display
"2. Secondary and undergraduate level math notation with
enough semantics for CAS rendering. To me, it is reasonable
to expect the author to have a specific CAS in mind.
"3. Research mathematics notation with little or no semantics
for visual display.
"4. Research mathematics notation with enough semantics for CAS, math
software, or speech rendering. Again, I would be content with a
document tagged specifically for Maple or Geomview, etc. However,
speech rendering would need to be possible at the same time.
"The main consequence of these priorities is that I [RM] think HTML Math
should be primarily notation based, with semantic information added
through some combination of annotations and parser heuristics. ... "
]] Bruce said he was hoping to get agreement on some basic principles
regarding extensibility and treatment of semantics so that he could
devote time toward some more immediate goals (such as writing code for
the Mathematica parser). He called for reaction (if there is any) to
his recent note discussing <mtype> or <mannotation> elements
(e.g. will that cover needs that people foresee?).
Patrick felt he was in agreement with the List as given by Robert, and
Ron said he also was in general agreement, although he was uncertain
as to the volume of material in category 2 and knew that, e.g., AMS
has a large amount of material in category 3. Robert said that he had
debated the ordering he gave of categories 2 and 3, and thought they
might be called 2a and 2b as parallel components to category 2. After
further discussion from Bruce and others, the linear order became a
partial order:

                     1
                   /   \
         Undergrad       Research level
            CAS            w/o CAS
                   \   /
                     3  (high semantics,
                         sophisticated layout)
wherein we envisioned that the two categories at the second level just
might have different emphases on semantical and presentational matters.
Stephen said he felt the ordering of the list was ok, but that the
needs of category 1 might be a rather large chunk to bite off first.
He asked for clarification on what was to be included in category 1
(e.g., an undergraduate course in topology might show notation which
would require fairly sophisticated typesetting capability). Patrick
said that he took category 1 to comprise notations seen in
standard courses at levels K-14 (i.e. through the first two
undergraduate years). Robert said he was trying to exclude "highly
decorated operators", and spoke to handling notation which might be
used at the level of the mathematical operators available on a
hand-held, programmable calculator. After further discussion, there
seemed to be rough agreement on the type of notation to be covered in
category 1, at which point Bruce asked again whether the <mtype> or
<mannotation> elements would satisfy the needs that people envision
for type characteristics throughout the list. Bob and Stephen
answered "yes" [[but tell me if I lie -- RW]].
Stephen asked for clarification of the handling of "style sheets"
within category 1. He felt that, with increasing dissemination of
course notes as replacement for standard texts, there is a need to
address the problems of stylistic variation. He gave the example of a
vector transpose being displayed with a superscripted T or a dagger or
emboldened. Bruce said that he envisioned style sheets as being
driven via the macro facility of the WP, and that, to achieve
convenient stylistic variation, authors of such course notes would
have to collaborate on macro conventions. Bob said he felt that, in
addition to being a clearing house for "contexts", OpenMath might
serve as a clearing house for these other sorts of semantical
conventions. Stephen remarked that he felt this was too great a leap
(i.e. leaping into macros) for something that should be relatively
simple (style sheets). Robert said he felt that collaborators on
elementary texts ought to be regarded as rather sophisticated users,
more in the category 2 range.
Stephen wondered whether the disparate underlying techniques (macros,
style sheets, semantical annotations) to be used for stylistic
variation could be simplified. He said he felt there were "too many
concepts doing the same thing" [[*rough* quotation; please correct me
Stephen]]. Bruce fielded this and discussed the need for a couple of
different kinds of macros (those which might be simple shorthand and
which should *not* be alterable by style sheets, and those which dealt
with presentation form and may be so alterable).
There was a fair amount of further discussion, but I feel I cannot
do it justice. I welcome postings on these matters, most especially
from Bruce and Stephen.
In the wrap-up we spoke briefly about what the prioritization in a
list such as Robert's might mean. Everyone is concerned that issues
regarding HTML-Math be addressed fairly and completely. Listing
priorities gives some sense of the path of development and discussion
when we feel we can see adequately "through" all the issues.
weakly periodic cohomology theory
A multiplicative cohomology theory $A$ is weakly periodic if the natural map
$A^2({*}) \otimes_{A^0({*})} A^n({*}) \stackrel{\simeq}{\to} A^{n+2}({*})$
is an isomorphism for all $n \in \mathbb{Z}$.
Compare with the notion of a periodic cohomology theory.
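A standard example, not discussed above but worth keeping in mind: complex K-theory $KU$ is weakly periodic. With the usual conventions $KU^n({*})$ is $\mathbb{Z}$ for even $n$ and $0$ for odd $n$, and Bott periodicity makes $KU^2({*})$ free of rank one on an invertible class $b$, so the defining map is an isomorphism:

```latex
% KU^0(*) = \mathbb{Z}, and KU^2(*) = \mathbb{Z}\{b\} with b invertible
% (the Bott class, up to a choice of convention), hence for all n:
\[
  KU^2({*}) \otimes_{KU^0({*})} KU^n({*})
  \;=\; \mathbb{Z}\{b\} \otimes_{\mathbb{Z}} KU^n({*})
  \;\xrightarrow{\ \simeq\ }\; KU^{n+2}({*}),
  \qquad b \otimes x \mapsto b \cdot x .
\]
```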
Relation to formal groups
One reason why weakly periodic cohomology theories are of interest is that their cohomology ring over the space $\mathbb{C}P^\infty$ defines a formal group.
To get a formal group from a weakly periodic, even multiplicative cohomology theory $A^\bullet$, we look at the induced map on $A^\bullet$ from a morphism
$i_0 : {*} \to \mathbb{C}P^\infty$
and take the kernel
$J := ker(i_0^* : A^0(\mathbb{C}P^\infty) \to A^0({*}))$
to be the ideal that we complete along to define the formal scheme $Spf A^0(\mathbb{C}P^\infty)$ (see there for details).
Notice that the map from the point is unique only up to homotopy, so accordingly there are lots of chocies here, which however all lead to the same result.
The fact that $A$ is weakly periodic allows one to reconstruct the cohomology theory essentially from this formal scheme.
To get a formal group law from this we proceed as follows: if the Lie algebra $Lie(Spf A^0(\mathbb{C}P^\infty))$ of the formal group
$Lie(Spf A^0(\mathbb{C}P^\infty)) \simeq ker(i_0^*)/ker(i_0^*)^2$
is a free $A^0({*})$-module, we can pick a generator $t$ and this gives an isomorphism
$Spf(A^0(\mathbb{C}P^\infty)) \simeq Spf(A^0({*})[[t]])$
if $A^0(\mathbb{C}P^\infty) \simeq A^0({*})[[t]]$, then $i_0^*$ "forgets the $t$-coordinate".
In algebraic topology, an open ball centered on the point p is a set of some points x in a metric space X, such that the distance d(x, p) is less than some constant r, the 'radius' of the ball. This is usually written B[X](p, r) and is equivalent to the set:

{x in X : d(x, p) < r}

(d is the distance function of the metric space, telling us the distance between its two arguments in that space.)

The construction may be used in order to define an open set as a subset V of X where for any point in it there is an open ball with nonzero and positive r, which is a subset of V. That is:

for all v in V, there exists r > 0 such that B[X](v, r) is a subset of V.

It also comes in handy for defining continuous functions between metric spaces. A function f mapping from X to Y, where both X and Y are metric spaces, is continuous at some point p only in the case that for every ε > 0 there is some δ > 0 such that for any member x of B[X](p, δ), f(x) is in B[Y](f(p), ε).
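For a finite set of points the membership test {x : d(x, p) < r} translates directly into code. A minimal sketch (function names are mine) using the Euclidean metric, though any distance function works:

```python
import math

def open_ball(center, radius, points, dist=math.dist):
    """Members of points lying in B(center, radius) = {x : d(x, center) < radius}.

    dist defaults to the Euclidean metric (math.dist, Python 3.8+);
    any metric d(x, y) can be passed instead.  Note the strict '<':
    points exactly at distance radius are NOT in the open ball.
    """
    return [x for x in points if dist(x, center) < radius]

pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (0.0, 1.5)]
inside = open_ball((0.0, 0.0), 2.0, pts)   # (2, 0) is on the boundary
```

The point (2, 0) sits at distance exactly 2 from the center, so the strict inequality excludes it; that strictness is what makes the ball open.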
quantum electrodynamics
quantum electrodynamics (QED), quantum field theory that describes the properties of electromagnetic radiation and its interaction with electrically charged matter in the framework of quantum theory.
QED deals with processes involving the creation of elementary particles from electromagnetic energy, and with the reverse processes in which a particle and its antiparticle annihilate each other and
produce energy. The fundamental equations of QED apply to the emission and absorption of light by atoms and the basic interactions of light with electrons and other elementary particles. Charged
particles interact by emitting and absorbing photons, the particles of light that transmit electromagnetic forces. For this reason, QED is also known as the quantum theory of light.
QED is based on the elements of quantum mechanics laid down by such physicists as P. A. M. Dirac, W. Heisenberg, and W. Pauli during the 1920s, when photons were first postulated. In 1928 Dirac
discovered an equation describing the motion of electrons that incorporated both the requirements of quantum theory and the theory of special relativity. During the 1930s, however, it became clear
that QED as it was then postulated gave the wrong answers for some relatively elementary problems. For example, although QED correctly described the magnetic properties of the electron and its
antiparticle, the positron, it proved difficult to calculate specific physical quantities such as the mass and charge of the particles. It was not until the late 1940s, when experiments conducted
during World War II that had used microwave techniques stimulated further work, that these difficulties were resolved. Proceeding independently, Freeman J. Dyson, Richard P. Feynman and Julian S.
Schwinger in the United States and Shinichiro Tomonaga in Japan refined and fully developed QED. They showed that two charged particles can interact in a series of processes of increasing complexity,
and that each of these processes can be represented graphically through a diagramming technique developed by Feynman. Not only do these diagrams provide an intuitive picture of the process but they
show how to precisely calculate the variables involved. The mathematical structures of QED later were adapted to the study of the strong interactions between quarks, which is called quantum chromodynamics.
See R. P. Feynman, QED (1985); P. W. Milonni, The Quantum Vacuum: An Introduction to Quantum Electrodynamics (1994); S. S. Schweber, QED and the Men Who Made It: Dyson, Feynman, Schwinger, and
Tomonaga (1994); G. Scharf, Finite Quantum Electrodynamics: The Causal Approach (1995).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
{"url":"http://www.factmonster.com/encyclopedia/science/quantum-electrodynamics.html","timestamp":"2014-04-18T11:31:55Z","content_type":null,"content_length":"23472","record_id":"<urn:uuid:ecfd3840-7bf6-4b55-9239-82b8fed88c29>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inductive limit of C*-algebras
MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required.
It is known that an intertwining (or even approximately intertwining) diagram implies the isomorphism of the limit algebras. Under what conditions the converse holds?
The converse holds if the inductive limits involve semiprojective building blocks. That is: suppose $$\varinjlim (A_i,\phi_i^{i'}) \cong \varinjlim (B_j,\psi_j^{j'})$$ (here my notation is $\phi_i^{i'}:A_i \to A_{i'}$, etc.), and each $A_i$ and $B_j$ is separable and semiprojective; then there exist subsequences $(A_{i_k}), (B_{j_k})$ and an approximate intertwining between these.
|
{"url":"http://mathoverflow.net/questions/129850/inductive-limit-of-c-algebras","timestamp":"2014-04-16T16:51:44Z","content_type":null,"content_length":"50402","record_id":"<urn:uuid:1cd65e15-7143-4fe0-8e55-8be71f1ecb66>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Learn about Adding Unlike Fractions
In this lesson, let's learn how we add unlike fractions. Unlike fractions means two fractions that have different denominators. We are given a problem that says we have to add two fractions: one is ¾, the other is ½. As we can see, they have different denominators. In order to add them, I have to convert or rename one of them so it has the same denominator as the other. At times you have to rename both, but in this case we can work with the denominators as they are. Take ¾, which has the bigger denominator, 4. Now take ½. What do I have to do to get this to have a denominator of 4? If I want the denominator to go from 2 to 4, I have to multiply it by 2, so I have to do the same to the top: 1×2 is 2. So ½ is the same as 2/4.

So what am I left with? I have to add ¾ + 2/4; instead of ½ I'll put 2/4. This is a little easier to do since we have the same denominator: we carry the denominator forward and add the numerators, and 3 and 2 is 5. That's the answer: 5/4.

Another way to look at this problem is to do it slightly differently. Take a sheet and break it down into quarter pieces: four equal pieces, each of them ¼. To show ¾, I shade 3 parts out of 4. Now take the same size sheet and break it up into two halves: this is one half, and this is one half. As we can see, ½ is the same as 2/4, because one half covers the same area as two of the quarter parts. So what do I have altogether? Counting the shaded quarters, 1, 2, 3, 4, 5, I have 5/4 as the total addition, which is where we ended up before.

To recap: when we are adding unlike fractions, we write the fractions down, but we have to convert or rename them so that they have the same denominator. In this case ½ has the smaller denominator, so I multiply its numerator and denominator by 2 to get the equivalent fraction 2/4. Then the addition is easier to do: if the fractions have the same denominator, the denominator carries over, and we simply add the numerators.
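The renaming procedure in the lesson can be written out in a few lines. A sketch (the function name is mine), cross-checked against Python's built-in Fraction type:

```python
from fractions import Fraction
from math import gcd

def add_unlike(a_num, a_den, b_num, b_den):
    """Add two fractions by renaming them to a common denominator,
    mirroring the steps in the lesson."""
    common = a_den * b_den // gcd(a_den, b_den)  # least common denominator
    a_renamed = a_num * (common // a_den)        # scale numerator to match
    b_renamed = b_num * (common // b_den)
    return a_renamed + b_renamed, common

num, den = add_unlike(3, 4, 1, 2)  # 3/4 + 1/2
print(num, den)                    # -> 5 4, i.e. 5/4

# Cross-check with the standard library:
assert Fraction(num, den) == Fraction(3, 4) + Fraction(1, 2)
```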
|
{"url":"http://www.healthline.com/hlvideo-5min/learn-about-adding-unlike-fractions-285016661","timestamp":"2014-04-16T19:56:21Z","content_type":null,"content_length":"40239","record_id":"<urn:uuid:aa8841a7-a3c6-439f-a4a3-263687bcb206>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bell's theorem rehashed
Created on Sunday, 27 May 2001 20:00
Quantum mechanics suggests that a physical property of a particle doesn't exist until it is measured. For instance, an electron has a property called spin that, unlike the angular momentum of a
classical object, can only have two values, "up" and "down"; but the orientation of this "up" and "down" is determined by the measuring apparatus.
The spin of an electron may not exist until it is measured but conservation laws still apply. There are experiments that produce pairs of electrons whose combined spin must be zero. Once the spin of
the electron on one end is measured, the spin of the other electron will become known as long as the measuring devices are aligned in parallel on the two ends. This applies even if the measuring
devices are put in place after the two electrons are generated! And if the two measuring devices aren't aligned, a correlation will still exist between the two electrons that is difficult to explain
in classical terms.
Or is it? No less a scientist than Einstein proposed that it isn't: that the problem is really that our knowledge of the system is incomplete and that there are hidden variables that determine the
system's behavior fully.
Okay, so imagine some kind of an experiment that creates a pair of correlated electrons that fly off towards the $A$ and $B$ ends of the experimental setup. Let's assume that the outcome of the experiment at the $A$ end depends on two factors: the orientation of the experimental device at $A$ which we represent with $\vec{a}$, and some hidden parameter(s) represented by the Greek letter $\lambda$. In other words, $A(\vec{a},\lambda)=\pm 1$. Similarly at the $B$ end, the outcome of the experiment is a function of $\vec{b}$ (the orientation of the apparatus) and $\lambda$: $B(\vec{b},\lambda)=\pm 1$.
What is really important to notice is that we specifically assume that $A$ does not depend on $\vec{b}$, and $B$ does not depend on $\vec{a}$. In other words, the experiment on one end only depends
on the setup of the experimental device on that end and the properties of the electron, but does not depend on the configuration of the experimental device at the other end.
$\lambda$ can have many values; however, what we do know is that it has a probability distribution function $\rho(\lambda)$ (representing, for each $\lambda$, the probability of that value occurring) and it is normalized: $\int_{-\infty}^{+\infty}\rho(\lambda)\,d\lambda=1$.
The expectation value of a quantum mechanical experiment is, essentially, the average of the values measured over several experiments. From quantum mechanics, we can compute the expectation value of
the product of $A$ and $B$: in certain experiments (for instance, when the electron spin is measured with so-called Stern-Gerlach magnets where $\vec{a}$ and $\vec{b}$ is the magnets' orientation)
this expectation value will be the dot product of the two vectors $\vec{a}$ and $\vec{b}$ times $-1$. But we can also compute the expectation value, $P(\vec{a},\vec{b})$ using what we know from
probability calculus:
\[P(\vec{a},\vec{b})=\int\limits_{-\infty}^{+\infty}\rho(\lambda)\cdot A(\vec{a},\lambda)\cdot B(\vec{b},\lambda) d\lambda.\]
What Bell did was to prove that no matter how you choose $A$ and $B$, in general $P(\vec{a},\vec{b})$ cannot be the same as $-\vec{a}\cdot\vec{b}$. Which implies that our assumption, namely that the
outcome at $A$ depends only on $\vec{a}$ and $\lambda$, or the outcome at $B$ depends only on $\vec{b}$ and $\lambda$, must be false; the outcome at one end depends on the experimental setup at the
other and vice versa.
To prove this, let's introduce another symbol: $P^{xy}(\vec{a},\vec{b})$ is to represent the probability that the outcome of the experiment will be $x$ at $A$ and $y$ at $B$. So for instance, $P^{++}(\vec{a},\vec{b})$ will represent the probability that we measure a $+1$ spin at both $A$ and $B$. We can then construct the following:
\[E(\vec{a},\vec{b})=P^{++}(\vec{a},\vec{b})+P^{--}(\vec{a},\vec{b})-P^{+-}(\vec{a},\vec{b})-P^{-+}(\vec{a},\vec{b}).\]
We can prove that $|E(\vec{a},\vec{b})+E(\vec{a}',\vec{b})+E(\vec{a},\vec{b}')-E(\vec{a}',\vec{b}')|\le 2$. But this inequality is not true in the quantum mechanical case. In the case of the Stern-Gerlach magnets, the probabilities can be computed as follows: $P^{++}=P^{--}=\frac{1}{2}\sin^2\frac{\phi}{2}$, and $P^{+-}=P^{-+}=\frac{1}{2}\cos^2\frac{\phi}{2}$ (where $\phi$ is the angle between the orientations of the two magnets), so $E=-\cos\phi$. For instance, if $\vec{a}$, $\vec{b}$, $\vec{a}'$ and $\vec{b}'$ are vectors pointing respectively at $0^\circ$, $45^\circ$, $90^\circ$ and $-45^\circ$, then $E(\vec{a},\vec{b})=E(\vec{a}',\vec{b})=E(\vec{a},\vec{b}')=-\cos 45^\circ$, and $E(\vec{a}',\vec{b}')=-\cos 135^\circ$, so $|E(\vec{a},\vec{b})+E(\vec{a}',\vec{b})+E(\vec{a},\vec{b}')-E(\vec{a}',\vec{b}')|=|-3\cos 45^\circ+\cos 135^\circ|=2\sqrt{2}$, which is decidedly greater than 2.
To prove that $|E(\vec{a},\vec{b})+E(\vec{a}',\vec{b})+E(\vec{a},\vec{b}')-E(\vec{a}',\vec{b}')|\le 2$, we first spell out $E$:
\[E=\int d\lambda\rho(\lambda)\left\{P_A^+(\vec{a},\lambda)-P_A^-(\vec{a},\lambda)\right\}\left\{P_B^+(\vec{b},\lambda)-P_B^-(\vec{b},\lambda)\right\},\]
where $P_A$ and $P_B$ represent the experimental probabilities at the two ends of the apparatus. Since these are probabilities, it follows that $0\le P_A\le 1$ and $0\le P_B\le 1$; therefore, $|P^+-P^-|\le 1$. As a shorthand, we can write $\bar{A}$ and $\bar{B}$ for the two subexpressions in the curly braces:
\[E=\int d\lambda\rho(\lambda)\bar{A}(\vec{a},\lambda)\bar{B}(\vec{b},\lambda).\]
As before, $|\bar{A}|\le 1$ and $|\bar{B}|\le 1$.
We can then write:
\[E(\vec{a},\vec{b})+E(\vec{a},\vec{b}')=\int d\lambda\rho(\lambda)\bar{A}(\vec{a},\lambda)\left\{\bar{B}(\vec{b},\lambda)+\bar{B}(\vec{b}',\lambda)\right\},\]
from which, using $|\bar{A}|\le 1$:
\[|E(\vec{a},\vec{b})+E(\vec{a},\vec{b}')|\le\int d\lambda\rho(\lambda)|\bar{B}(\vec{b},\lambda)+\bar{B}(\vec{b}',\lambda)|.\]
Similarly:
\[|E(\vec{a}',\vec{b})-E(\vec{a}',\vec{b}')|\le\int d\lambda\rho(\lambda)|\bar{B}(\vec{b},\lambda)-\bar{B}(\vec{b}',\lambda)|.\]
But from $|\bar{B}|\le 1$, it follows that:
\[|\bar{B}(\vec{b},\lambda)+\bar{B}(\vec{b}',\lambda)|+|\bar{B}(\vec{b},\lambda)-\bar{B}(\vec{b}',\lambda)|\le 2.\]
And since $\int\rho(\lambda)d\lambda=1$, we can assert that $|E(\vec{a},\vec{b})+E(\vec{a},\vec{b}')|+|E(\vec{a}',\vec{b})-E(\vec{a}',\vec{b}')|\le 2$, from which $|E(\vec{a},\vec{b})+E(\vec{a}',\vec{b})+E(\vec{a},\vec{b}')-E(\vec{a}',\vec{b}')|\le 2$, which is just what we set out to prove.
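The bound and its quantum violation can be checked numerically. Below is a sketch in Python: the quantum correlator $E=-\cos\phi$ yields $2\sqrt{2}$ at the angles in the example above, while a local hidden-variable model stays within 2. The particular local model used (deterministic sign outcomes driven by a shared random $\lambda$) is just one illustrative instance, not something from the article:

```python
import math, random

def E_quantum(phi):
    """Quantum correlator for Stern-Gerlach magnets at relative angle phi."""
    return -math.cos(phi)

a, b, a2, b2 = 0.0, math.pi / 4, math.pi / 2, -math.pi / 4
S = abs(E_quantum(b - a) + E_quantum(b - a2)
        + E_quantum(b2 - a) - E_quantum(b2 - a2))
print(S)  # -> 2.828..., i.e. 2*sqrt(2), violating the bound of 2

def E_local(x, y, n=100_000):
    """An illustrative local model: outcomes are sign(cos(angle - lambda))
    for a shared random lambda.  Any model of this form obeys the bound."""
    rng = random.Random(0)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(x - lam) >= 0 else -1
        B = 1 if math.cos(y - lam) >= 0 else -1
        total += A * B
    return total / n

S_local = abs(E_local(a, b) + E_local(a2, b)
              + E_local(a, b2) - E_local(a2, b2))
# S_local stays at or below 2 (up to Monte Carlo noise)
```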
. . .
What this all means is that the outcome of the experiment at $A$ can only be explained as a function of $\vec{a}$ (the setup of the measuring apparatus at $A$), $\lambda$ (whatever parameters are
"internal" to the electron) and $\vec{b}$! I.e., some information about the setup of the measuring apparatus at $\vec{b}$ will be "known" to the electron at $\vec{a}$ even if there is no conventional
means of transmitting this information from $B$ to $A$; in fact, even if $B$ is an apparatus operated by little green men at Alpha Centauri, who will only perform the measurement some four and a half
years from now, and haven't even built their measuring apparatus yet!
Bell, J. S., Speakable and unspeakable in quantum mechanics, Cambridge University Press, 1989
|
{"url":"http://www.vttoth.com/CMS/index.php/physics-notes/73","timestamp":"2014-04-18T13:13:45Z","content_type":null,"content_length":"20523","record_id":"<urn:uuid:750ef634-dbb6-42b9-a640-8d2bfaf63eb3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Topological space
From Encyclopedia of Mathematics
A totality of two elements: a set and a topological structure (topology) on it; it is immaterial whether this is an open or closed topology (one transfers into the other by replacing the sets constituting the given topology by their complements). Unless otherwise stated, a topology is usually specified by giving a base of the given topology, in terms of which all remaining elements of the topology can be obtained as unions (in the case of an open topology) or intersections (in the closed case) of sets belonging to the base. So, for example, the usual topology of the real line is obtained by taking as a base of its open topology the set of all open intervals (it is sufficient to take only open intervals with rational end points). The remaining open sets are unions of such intervals.
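The passage from a base to the full family of open sets — every open set is a union of base elements — can be made concrete on a finite set. A small sketch (the toy base below is my own example; the article's example uses open intervals on the real line instead):

```python
from itertools import chain, combinations

def topology_from_base(base):
    """All unions of base elements (including the empty union), which is
    how the open sets are generated from an open base."""
    tau = set()
    for r in range(len(base) + 1):
        for combo in combinations(base, r):
            tau.add(frozenset(chain.from_iterable(combo)))
    return tau

# A toy base on a three-point set.  This particular base happens to
# generate a topology; an arbitrary family need not.
X = frozenset({1, 2, 3})
base = [frozenset({1}), frozenset({2, 3})]
tau = topology_from_base(base)

# Check the open-set axioms: the empty set and the whole space are open,
# and unions and intersections of open sets are open.
assert frozenset() in tau and X in tau
for U in tau:
    for V in tau:
        assert U | V in tau and U & V in tau
```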
When regarding a base of an open, or closed, topology, it is common to refer to it as an open or closed base of the given topological space. Open bases are more often considered than closed ones,
hence if one speaks simply of a base of a topological space, an open base is meant. The smallest (in non-trivial cases, infinite) cardinal number that is the cardinality of a base of a given
topological space is called its weight (cf. Weight of a topological space). After the cardinality of the set of all its points, the weight is the most important so-called cardinal invariant of the
space (see Cardinal characteristic). Of special importance are spaces having a countable base, for example the real line. One obtains a countable base of an
A topological space is said to be metrizable (see Metrizable space) if there is a metric on its underlying set which induces the given topology. The metrizable spaces form one of the most important
classes of topological spaces, and for several decades some of the central problems in general topology were the general and special problems of metrization, i.e. problems on finding necessary and
sufficient conditions for a topological space, or for a topological space of some particular type, to be metrizable. These conditions form the content of general or special metrization theorems.
Every subset proximate point of a set Closure of a set). The transition from an arbitrary set
The closure operation and its basic properties 1), 2) and 3) have been derived from the fundamental concept of a topology on a given set
Closely associated with the concept of a topological space is that of a continuous mapping from one space into another. A mapping
Given a (continuous) mapping
An important type of continuous mappings are the so-called quotient mappings (cf. Quotient mapping). They are characterized by the following condition. A set homeomorphism. Two spaces
A continuous mapping
The concrete study of topological spaces consists primarily of the separation from the general class of those spaces of subclasses characterized by some conditions or axioms additional to those
defining topologies. These additional axioms can be of various kinds. First of all there is a group of so-called separation axioms. The first separation axiom was the Hausdorff axiom, requiring that
any two distinct points of the space can be separated by means of neighbourhoods, i.e. are contained in disjoint open sets (two or more sets are disjoint if no two of them have common elements).
Hausdorff's separation axiom is also called the Hausdorff space or a
The Regular space); they are Hausdorff spaces. A topological space is called a Normal space); they are regular and Hausdorff. Any subspace of a space satisfying one of the axioms
Until now the separation of points and sets has been understood in the sense of the presence of disjoint neighbourhoods. However, in modern topology so-called functional separation, originally
introduced by P.S. Urysohn in 1924, is also important. Two sets Completely-regular space). Among the (completely-regular) spaces satisfying this condition the most important are the
completely-regular Tikhonov space). In particular, the underlying space of any
As well as the separation axioms, the so-called conditions of compactness type are significant for the theory of topological spaces. They are based on the consideration of (open) coverings. A family
Covering (of a set)) if every point Compact space).
If as Compact set, countably). For metrizable spaces, and also for Hausdorff spaces with a countable base, the conditions of being compact and of being countably compact are equivalent. If as
An important class of spaces, called locally compact spaces (cf. Locally compact space), is defined by the requirement that every point Aleksandrov compactification of
Next to conditions of compactness the most important condition of compactness type is the condition of paracompactness (cf. Paracompactness criteria), requiring that every open covering
Many authors use the pairs of terms "T3-space" and "regular space" , and "T4-space" and "normal space" , in precisely the opposite way to that indicated above.
Also, the Borel–Lebesgue condition is usually called the Heine–Borel property (cf. Heine–Borel theorem); and final compactness is usually called the Lindelöf property (cf. Lindelöf space). The term
"compactum" is little used; they are compact spaces (which might also be Hausdorff, metrizable, etc.).
Among the equivalent ways of determining a topology on a space, convergence structures should be noted. (All of the usual approaches — open sets, closure operations, convergence structures — also
give rise to further generalizations.) The early descriptions of convergence used external devices such as directed sets (cf. [a3] and Directed set), but there is an intrinsic version. It is
technically intrinsic in that the apparatus is determined by the underlying set ultrafilter [a2]; for the general case, in [a1]. In applications, convergence with auxiliary devices, especially
convergence of generalized sequences (nets, cf. Generalized sequence), is much more often used.
[a1] M. Barr, "Relational algebras" S. MacLane (ed.) , Midwest Category Sem. IV , Lect. notes in math. , 137 , Springer (1970) pp. 39–55
[a2] F.E.J. Linton, "Some aspects of equational categories" S. Eilenberg (ed.) et al. (ed.) , Proc. conf. categorical algebra (La Jolla, 1965) , Springer (1966) pp. 84–94
[a3] J.W. Tukey, "Convergence and uniformity in topology" , Princeton Univ. Press (1940)
[a4] R. Engelking, "General topology" , Heldermann (1989)
How to Cite This Entry:
Topological space. P.S. Aleksandrov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Topological_space&oldid=15290
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
|
{"url":"http://www.encyclopediaofmath.org/index.php/Topological_space","timestamp":"2014-04-16T19:13:27Z","content_type":null,"content_length":"53953","record_id":"<urn:uuid:875e4f0b-e459-436d-909e-6ca325b14d08>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nonconservative forces
The Lagrangian formulation has been extended so far to handle constraints that lower the dimension of the tangent space. The formulation can also be extended to allow nonconservative forces. The most common and important example in mechanical systems is friction. The details of friction models will not be covered here; see [681]. As examples, friction can arise when bodies come into contact, as in the joints of a robot manipulator, and as bodies move through a fluid, such as air or water. The nonconservative forces can be expressed as additional generalized forces, collected in a vector. Suppose that an action vector is also permitted. The modified Euler-Lagrange equation then becomes
A common extension to (13.142) is
in which generalizes to include nonconservative forces. This can be generalized even further to include Pfaffian constraints and Lagrange multipliers,
The Lagrange multipliers become [725]
Once again, the phase transition equation can be derived in terms of phase variables and generalizes (13.148).
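As a concrete illustration of adding a nonconservative generalized force, consider viscous joint friction in a one-link pendulum. The model, its parameters, and the integration scheme below are my own example, not taken from the text:

```python
import math

# Damped pendulum: L = (1/2) m l^2 thdot^2 + m g l cos(th).
# The Euler-Lagrange equation, modified by a viscous friction
# generalized force -c*thdot and an action torque u, reads:
#   m l^2 thddot = -m g l sin(th) - c thdot + u
m, l, g, c = 1.0, 1.0, 9.81, 0.5  # illustrative parameter values

def accel(th, thdot, u=0.0):
    return (-m * g * l * math.sin(th) - c * thdot + u) / (m * l * l)

def energy(th, thdot):
    """Kinetic plus potential energy of the pendulum."""
    return 0.5 * m * l * l * thdot ** 2 - m * g * l * math.cos(th)

# Explicit Euler integration for 5 seconds; friction drains energy.
th, thdot, dt = 1.0, 0.0, 1e-3
e0 = energy(th, thdot)
for _ in range(5000):
    th, thdot = th + dt * thdot, thdot + dt * accel(th, thdot)

assert energy(th, thdot) < e0  # the nonconservative force dissipates energy
```

With c = 0 the same equation is conservative and energy is preserved up to integration error; the friction term is exactly the extra generalized force on the right-hand side.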
Steven M LaValle 2012-04-20
|
{"url":"http://msl.cs.uiuc.edu/planning/node706.html","timestamp":"2014-04-19T17:01:21Z","content_type":null,"content_length":"8717","record_id":"<urn:uuid:30e7747d-5178-49c2-9b86-2196cfae23dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tools Newsletter
MATH TOOLS NEWSLETTER - March 18, 2011 - No. 96
***National Council of Supervisors of Mathematics (NCSM)
Visit us in Indianapolis, IN, at the National Council of
Supervisors of Mathematics 43rd Annual Conference,
April 11-13, 2011. The Math Forum will have a Sponsor Table in
the Display Area, and we'll also be presenting sessions.
***National Council of Teachers of Mathematics (NCTM)
Visit us in Indianapolis, IN, at the National Council of
Teachers of Mathematics (NCTM) Annual Meeting, April 14-16, 2011.
Stop by Booth 1243 in the Exhibit Hall, and attend our sessions.
***Summer C.A.M.P.
This summer, Groton School (Groton, Massachusetts) will be
hosting a program in complexity theory for high school
students. Offered in partnership with the Santa Fe Institute,
Complexity and Modeling Program (C.A.M.P.) is a novel immersion
program that introduces teens to complexity science scholarship.
Apply by March 31:
As you browse the catalog, please take a moment to rate a
resource, comment on it, review it -- join an existing
conversation, or start a new discussion!
***FEATURED TOOL
Tool: Finding Common Denominators
Conceptua Math
Teachers can use this tool to help students build conceptual
understanding of common denominators. By adjusting the models
of two fractions with uncommon denominators to show two
fractions with a common denominator, students experience the
understanding that to find a common denominator of two fractions
means to find equivalents that share the same denominator.
Teachers can also create examples in which students apply the
"one fraction" procedure with and without the support of the
models. The concept is developed through the use of a variety
of models that include pies, vertical bars, horizontal bars,
areas, sets and number lines.
***FEATURED TOOL
Tool: Factor Tree
National Library of Virtual Manipulatives (Utah State University)
This manipulative allows you to construct factor trees (to the
prime factors) for two numbers, and then from the prime
factorization, you are asked to identify the Least Common
Multiple (LCM) and the Greatest Common Factor (GCF) of the two
given numbers.
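The computation behind the Factor Tree activity — reading the GCF and LCM off two prime factorizations — fits in a few lines. A sketch in Python (not part of the tool itself):

```python
def prime_factors(n):
    """Prime factorization by trial division (a 'factor tree')."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gcf_lcm(a, b):
    """GCF takes the minimum exponent of each prime; LCM the maximum."""
    fa, fb = prime_factors(a), prime_factors(b)
    gcf = lcm = 1
    for p in set(fa) | set(fb):
        gcf *= p ** min(fa.get(p, 0), fb.get(p, 0))
        lcm *= p ** max(fa.get(p, 0), fb.get(p, 0))
    return gcf, lcm

print(gcf_lcm(24, 36))  # -> (12, 72)
```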
***FEATURED TOOL
Tool: Kali
Jeff Weeks
Kali lets you draw symmetrical patterns based on any of the 17
tiling groups. Kali does not assume the user knows how to read,
so even the youngest children can enjoy it. Older students can
systematically explore the wallpaper, frieze and rosette groups.
The command "Show Singularities" and the Conway and IUC notation
provide support for a theoretical analysis. Freely available for
both Windows and Macintosh computers.
***FEATURED TOOL
Tool: Wash Line
Hang the numbered shirts on the line in order. Choose from
numbers 1-5, odds or evens to 10, or a selection to 20.
***FEATURED TOOL
Tool: FooPlot - Online graphing calculator and plotter
Dheera Venkatraman
This tool lets you plot functions, polar plots, and 3D with just
a suitable web browser (within the IE, FireFox, or Opera web
browsers), and find the roots and intersections of graphs. In
addition, you can easily plot any function simply by putting it
after the URL; for example, http://fooplot.com/2x-1
***FEATURED TOOL
Tool: Primitives
Alec McEachran
Primitives is a small Flash application for visualizing numbers
in terms of their prime factors.
***FEATURED TOOL
Tool: Counting Using Money
Designed to assess or reinforce children's understanding of
number sequence and counting to 10, this activity can also
introduce addition by using coins to support the
counting activity.
Technology PoW: Train Trouble
Claire Mead
How fast is the train moving to allow Herman and Sheila to each
escape the tunnel? NOTE: A free login is required. Sign up using
the link on the login page, or use your existing KenKen or
Problems of the Week login--see this page for details.
Math Tools http://mathforum.org/mathtools/
Register http://mathforum.org/mathtools/register.html
Discussions http://mathforum.org/mathtools/discuss.html
Research Area http://mathforum.org/mathtools/research/
Developers Area http://mathforum.org/mathtools/developers/
Newsletter Archive http://mathforum.org/mathtools/newsletter/
Twitter Feed http://mathforum.org/pd/twitter.html
.--------------.___________) \
|//////////////|___________[ ]
`--------------' ) (
The Math Forum @ Drexel -- 18 March 2011
|
{"url":"http://mathforum.org/mathtools/newsletter/2011/March.html","timestamp":"2014-04-21T15:50:14Z","content_type":null,"content_length":"17083","record_id":"<urn:uuid:17502fea-90d5-4e6f-bc7f-da7f2d6da4ce>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Robotics Glossary
I would like this glossary of robotics terms to grow. If you have some terms you would like to see added please email them to me.
acceleration-level - Mathematical formulations working with the change in joint speeds with respect to time. Integrating accelerations twice provides displacements. See position-level and velocity-level.
analytical methods - Purely mathematical methods that do not require iteration.
autonomous - Operating without pre-programmed behaviors and without supervision from humans.
biomimetic - Mimicking natural biology
closed-form - A problem formulation that does not require iteration for its solution.
conservative motion - A path where both the end-effector and the joints repeatedly follow their same respective trajectories.
degrees of freedom - The number of independent variables in the system. Each joint in a serial robot represents a degree of freedom.
dexterity - A measure of the robot's ability to follow complex paths.
direct search - A method of solving problems numerically using sets of trial solutions to guide a search. The search is direct because it does not explicitly evaluate derivatives.
dynamics - The study of forces that cause motion
dynamic model - A mathematical model describing the motions of the robot and the forces that cause them.
end-effector - The robot's last link. The robot uses the end-effector to accomplish a task. The end-effector may be holding a tool, or the end-effector itself may be a tool. The end-effector is
loosely comparable to a human's hand.
end-effector space - A fixed coordinate system referenced to the base of the robot.
equality constraint - A restriction that requires the displacement or motion of the robot to equal a specified value. Equality constraints specify the position and orientation of the robot's end-effector.
error function - The error function assigns a single value that represents the difference between the desired and actual values of one or several dependent variables.
fully constrained robot - A robot with as many independent joints as there are equality constraints on the placement of the end-effector.
inequality constraint - A restriction that limits the value of a dependent or independent variable. Inequality constraints limit the robot's joint travels (joint limits), joint speeds (speed limits), and torques (torque limits).
inverse kinematics - The inverse kinematics problem is to find the robot's joint displacements given position and orientation constraints on the robot's end-effector.
iteration - Repeatedly applying a series of operations to progressively advance towards a solution.
Jacobian - The matrix of first-order partial derivatives. For robots, the Jacobian relates the end-effector velocity to the joint speeds.
joint space - A coordinate system used to describe the state of the robot in terms of it's joint states. Inverse kinematics may also be thought of as a mapping from end-effector space to joint space.
kinematics - The study of motion without regards to the forces that cause those motions
kinematic influence coefficients - These coefficients describe the total influence the N input joints have on the motion of the robot and allow a direct statement of the complex and coupled nonlinear
differential equations controlling the response of the system.
LaGrange multipliers - A mathematical technique for transforming equality constraints into performance criteria, thus expressing a constrained problem as an unconstrained problem.
linearly dependent - A correspondence between quantities or functions that can be described by simply adding, subtracting, or multiplying a scalar.
normalize - Scaling a number of factors so that they will be of similar magnitudes.
numerical methods - Iterative methods of solving problems on a computer. Numerical methods may have an analytical basis or they may involve heuristics.
optimization - Calculating the independent variables in a function so as to generate the best function value for a given set of conditions. Optimization usually involves maximizing or minimizing a function.
performance criteria - Measures based on kinematic and dynamic models of the robot useful for evaluating the state of the robot.
plant description - A kinematic and dynamic model of the robot.
position-level - Mathematical formulations working with the joint displacements. See acceleration-level and velocity-level.
pseudoinverse - A simple method of inverting a matrix that is not square. As commonly applied to redundant robots, the pseudoinverse minimizes the two-norm of the joint speeds.
redundancy - More independent variables than constraints.
repeatability - The variability of the end-effector's position and orientation as the robot makes the same moves under the same conditions (load, temp, etc.)
resolved-rate - An extremely simple inverse kinematics method at the velocity-level.
scale - Changing magnitude by linear operation, i.e. multiplying by a scalar.
self-motion - The robot's ability to move its intermediate links while holding the placement of the end-effector constant.
serial robot - A serial robot is a single chain of joints connected by links.
singularity - A position in the robot's workspace where one or more joints no longer represent independent controlling variables. Commonly used to indicate a position where a particular mathematical
formulation fails.
statics - The study of forces that do not cause motion.
two-norm - The square root of the sum of the squares. The magnitude of a vector.
velocity-level - Mathematical formulations working with the joint speeds. Integrating the joint speeds once provides the displacements. See acceleration-level and position-level.
workspace - The maximum reach space refers to all of the points the robot can possibly reach. The dexterous workspace is all of the possible points the robot can reach with an arbitrary orientation.
The dexterous workspace is usually a subspace of the maximum reach space.
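Several entries above — Jacobian, pseudoinverse, resolved-rate, two-norm — fit together in a few lines of code. The sketch below is a minimal illustration, not part of the glossary: it assumes a hypothetical one-constraint, two-joint Jacobian row J, for which the pseudoinverse reduces to J transposed divided by the scalar J·Jᵀ, giving the minimum-two-norm joint speeds that satisfy the end-effector velocity constraint.

```python
import math

# For a single-row Jacobian J (one task constraint, two joints), the
# pseudoinverse is J^T / (J . J^T), and qdot = J+ * xdot is the
# minimum-two-norm joint-speed vector satisfying J . qdot = xdot --
# the resolved-rate step in its simplest possible form.
def pinv_row(J):
    s = sum(j * j for j in J)                # J . J^T, a scalar for one row
    return [j / s for j in J]

def resolved_rate_step(J, xdot):
    return [p * xdot for p in pinv_row(J)]

J = [3.0, 4.0]                               # hypothetical Jacobian row
qdot = resolved_rate_step(J, 2.0)            # joint speeds for xdot = 2.0
residual = sum(j * q for j, q in zip(J, qdot)) - 2.0   # constraint error, ~0
two_norm = math.sqrt(sum(q * q for q in qdot))         # smallest achievable
```

Any other joint-speed vector satisfying the same constraint would have a larger two-norm, which is what the glossary means by the pseudoinverse "minimizing the two-norm of the joint speeds."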
|
{"url":"http://www.learnaboutrobots.com/glossary.htm","timestamp":"2014-04-18T20:48:57Z","content_type":null,"content_length":"15833","record_id":"<urn:uuid:90d43c4e-7163-4112-8b54-c590a3a148eb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical mysteries: Getting the most out of life - Part 1
Issue 13
January 2001
There are many sorts of games played in a "bunco booth", where a trickster or sleight-of-hand expert tries to relieve you of your money by getting you to place bets - on which cup the ball is under,
for instance, or where the queen of spades is. Lots of these games can be analysed using probability theory, and it soon becomes obvious that the games are tipped heavily in favour of the trickster!
The punter is well advised to steer clear. If, as in the game I'll describe here, the probabilities don't seem to favour the trickster, steer clear anyway. He's probably bending the odds with some
classy sleight of hand.
The idea of this game is that I write two cheques for different amounts of money, put each cheque in an envelope, and offer you the two envelopes. You are allowed to choose one envelope and look at
its contents. Then you have to decide which one to keep - the one you've opened, or the other one, which you are not allowed to look inside before you make your decision.
What can you lose? This game sounds like free money. Unfortunately, the cheques aren't real; they'll bounce if you try to cash them. All the same, as a point of pride, you'd like to end up with the
larger of the two amounts. In fact, let's make a bet: I bet you'll end up with the smaller amount. Do you take the bet?
By choosing at random, you will succeed 50% of the time. This is a fair bet, like betting on the toss of a coin. If you could do even slightly better than this, you would make a profit in the long
run if we played the game many times. However, it seems obvious that, unless you know something about what numbers I am likely to choose, you cannot succeed more often than 50% of the time.
It's better than you think
However, the surprising fact is that it is possible for you to use a strategy that succeeds more often than 50% of the time. What's more, it doesn't depend on any assumptions about what numbers I
will choose. In fact, even if I know your strategy and I try to choose numbers that will break it, I won't be able to - you will still succeed with probability that is strictly greater than 50%.
So how does this strategy work? Before I tell you, you might like to give it some thought. This is quite a difficult problem, so if you don't find an answer at first, sleep on it for a day or two.
You might like to assume that the cheques are in some ordinary currency, like sterling, so they are both for (positive) whole numbers of pence. Actually, even if I could write cheques for any real
numbers, like pi or the square root of two, you would still be able to prevail. But it is easier to think about if the amounts are restricted to positive whole numbers.
Here's a last hint: the bigger the cheque you find in the first envelope, the more you will be inclined to keep it rather than swap it for some other, unknown amount. So you should try to arrange
that the larger the cheque you see, the more likely you are to keep it.
When you find an answer, or give up, carry on and read the second part of the article.
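If you would rather experiment than peek at part 2, here is a Python sketch of a threshold strategy of the kind the final hint gestures at. The two cheque amounts (120 and 350 pence) and the threshold distribution are arbitrary stand-ins chosen for illustration; the point is only that the measured success rate lands strictly above 50%.

```python
import random

# Draw a random threshold T with support on all positive amounts, then keep
# the opened cheque only if it exceeds T. Whatever two amounts a < b the
# trickster fixed, you win for sure whenever T lands between them, and win
# half the time otherwise -- so the overall success rate beats 50%.
def play_once(a, b, rng):
    threshold = rng.expovariate(0.01)    # mean 100; any positive-support law works
    first = rng.choice([a, b])           # the envelope you happened to open
    kept = first if first > threshold else (a + b - first)
    return kept == max(a, b)

rng = random.Random(0)
trials = 200_000
wins = sum(play_once(120, 350, rng) for _ in range(trials))
rate = wins / trials                     # comfortably above 0.5
```

Try changing the two amounts, or even choosing them adversarially: the rate stays above one half, though it creeps toward it as the amounts get closer together or drift far from the threshold distribution's bulk.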
|
{"url":"http://plus.maths.org/content/mathematical-mysteries-getting-most-out-life-part-1","timestamp":"2014-04-21T02:08:58Z","content_type":null,"content_length":"25443","record_id":"<urn:uuid:b2024fc3-948a-4c63-a8b9-3c43624576d3>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Goes Pop!
I recently had the pleasure of stumbling across Paul Lockhart’s essay, A Mathematician’s Lament. Lockhart, a former research mathematician in analytic number theory who received his Ph.D. from
Columbia in 1990, decided to leave academia in 2000 in order to concentrate on K-12 math education, which he has been doing at Saint Ann’s School in Brooklyn.
Lockhart’s article lambasts the current state of mathematics education in this country. Some of his main points are the following: Mathematics is an art form, but unlike other art forms like music or
painting, is not understood as such by the general population. As a result, students are not exposed to the beauty of mathematics, and are instead taught through drill and memorization, which
effectively kills any natural curiosity the student may have. The most important part of mathematics lies not in the facts or theorems that students memorize, but in the arguments that . . . → Read
More: Read a Mathematician’s Lament
A friend recently shared with me the following video from TED (see below). In it, mathematician (or, in this case, mathemagician) Arthur Benjamin gives a brief argument for eliminating calculus as
the top of the “mathematical pyramid” in high school education, and replacing it with probability and statistics. The main reason for this shift is that unless you are planning to have a career in a
technical field, it’s unlikely you’ll find a use for calculus in your everyday life, but an understanding of statistics can benefit you no matter what you do. For example, it can help you to build an
intuition about day to day decision making when risk and uncertainty are involved. Here’s the video (it’s short, only a couple of minutes):
A noble goal, to be sure, and it’s certainly a solution that wouldn’t cost a whole lot. There is an argument to be made for such . . . → Read More: Restructuring the Math Pyramid?
Here’s an interesting article about Tom Farber, a high school Calculus teacher from San Diego who is fighting tough economic times and cutbacks in education spending in a rather novel way – he’s
selling ad space on math tests.
The goal here certainly doesn’t seem to be the development of a second income. Many teachers report having to spend money out of their own pockets for school supplies – in this case, Mr. Farber is
using the money to help cover the copying costs associated with making tests and practice exams to help students prepare for the APs. His intentions certainly seem benevolent, but are his actions as
It seems like the advertising is fairly non-intrusive. There are no graphics, and the ads run on the bottom of the page. The fact that a good chunk of the ad space was bought by parents who wanted to
run . . . → Read More: Commodify your Mathematics?
Even though we’d like to accuse our math teachers of being more or less incompetent, there is at least one indication that math education in this country is making some progress. In particular, the
results of the Trends in International Mathematics and Science Study shows American students have gained 11 points over their average performance in 1995. A comparison of US scores, along with an
article describing the findings, can be read here.
With an international average of 500, American 4th and 8th grade students scored a respectable 529, on par with the Netherlands, Lithuania, Germany, and Denmark. As might be expected, however, we’ve
still got a significant way to go when it comes to competing with other countries. Hong Kong made the top of the list, with a score of 607.
Of course, this data by itself doesn’t do much to explain what factors may be driving our improvement. . . . → Read More: Math in the News: Maybe the Sky Isn’t Falling, After All
It looks like middle school math teachers can’t catch a break. According to a recent study, a significant percentage of math teachers in grades 5-8 do not have a degree or a certification in math.
Sadly, the numbers are even worse for schools in low income areas. While it’s certainly true that you don’t need a math degree to teach middle school math effectively, the data does suggest that
there is a significant bloc of underqualified math teachers trying to impart essential knowledge to these young students.
Of course, I doubt this is all the teachers’ fault – elementary and middle school teachers are a rare commodity in this country, and kids need someone to teach them math. An understaffed school will
do what it takes to make sure there’s somebody at the front of the classroom. And I certainly don’t envy those teachers out there who may not feel . . . → Read More: Math in the News: Are Math
Teachers Really Only One Chapter Ahead?
Earlier this month, the New York Times ran an article about the dearth of U.S. students with strong skills in mathematics. While this is not quite a revelation, it is made more timely by the recent
release of a study that looked at data from Putnam exams, International Mathematical Olympiads, and data from other programs meant to nurture younger students in mathematics.
This type of data is more powerful than looking at SAT scores, for instance, because exams administered in a mathematics competition are notoriously difficult. There are thousands of students who
will score an 800 on the math section of the SAT, and so this test offers no way to distinguish between them. Looking at this other data, however, allows us to gain a much deeper insight into the
abilities of students in the U.S. with an aptitude for mathematics.
The data suggests a couple of things. First, contrary . . . → Read More: Math in the News: Is U.S. Culture Crushing Potential Mathletes?
As you may recall, my first post briefly discussed the California Board of Education’s mandate that every 8th grader in the state must take Algebra. My purpose here is not to discuss the ruling
further, but rather to point out the response article published last month in the San Francisco Chronicle.
The article is well-researched and thoroughly written. Not only does it feature discussion of the pros and cons of such a mandate from a wide range of interviewees, but it also tries to address the
question of why Algebra, and mathematics in general, is perceived so terribly by American kids and adults alike. It also attempts to paint a picture of what Algebra actually is, for those of us who
fell by the wayside of mathematics long ago.
The current state of mathematics education is given quite a scathing review by the people mentioned in the article who actually . . . → Read More: Math in the News: Math is Cool, I Swear!
Math made the headlines last Thursday, with an article about a recent study in the journal Science, which discredits the perceived Gender Gap in mathematics. The AP article can be found here – if you
can’t bring yourself to read the article, you can also watch the following clip from NBC Nightly News on the same topic.
The AP article offers a more thorough discussion of the study, which examined standardized test scores for more than 7 million American students. Given the breadth of the study, one hopes it will
help dispel any lingering notion girls may have that they are some how innately unable to measure up to boys in math. We do, however, have a ways to go before math professor Barbie starts flying off
the shelves.
Any news that can help persuade women to enter mathematically demanding fields is good news. Not only because America needs to . . . → Read More: Math in the News: The Gender Gap is Closed for
|
{"url":"http://www.mathgoespop.com/category/math-education/page/2","timestamp":"2014-04-16T10:14:29Z","content_type":null,"content_length":"101764","record_id":"<urn:uuid:de328518-c452-4d2b-bc5a-461872897286>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus Tutors
Saint Petersburg, FL 33701
Physics and Mathemagician here to help!
...I have taken a three course sequence focusing on all of the techniques covered in
. The sum grade average of that sequence was an A-. I have previously tutored many of the subjects contained in
courses, including specifically finding the area...
Offering 10+ subjects including calculus
|
{"url":"http://www.wyzant.com/Palmetto_FL_calculus_tutors.aspx","timestamp":"2014-04-24T05:37:12Z","content_type":null,"content_length":"60320","record_id":"<urn:uuid:fc57b218-4c56-405a-905b-c410d0cfa770>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the volume of 45.6 g of silver if the density of silver is 10.5 g/mL? (1 point)
0.23 mL
4.34 mL
479 mL
none of the above
4.34 mL is the volume of 45.6 g of silver if the density of silver is 10.5 g/mL
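The arithmetic behind that answer is a one-line rearrangement of the density formula:

```python
# Rearranging density = mass / volume gives volume = mass / density.
mass_g = 45.6            # grams of silver, from the question
density_g_per_ml = 10.5  # g/mL
volume_ml = mass_g / density_g_per_ml
print(round(volume_ml, 2))   # 4.34
```

Dividing 45.6 g by 10.5 g/mL gives about 4.3429 mL, which rounds to the 4.34 mL option.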
|
{"url":"http://www.weegy.com/?ConversationId=AC836AB4","timestamp":"2014-04-20T08:23:33Z","content_type":null,"content_length":"40958","record_id":"<urn:uuid:a899878a-0c92-4755-ad68-b30975f1eefc>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
No more state monad version of hash maps / sets in Haskell?
Is the monadic interface to hash sets and maps gone in Haskell? What kind of performance model should I have in mind when using the modern versions? (Data.Map, Data.HashMap, Data.HashSet). They do
not appear to have any IO code in the version I have (ghc 7.0.2),
> :browse Data.HashSet
type HashSet a = Set a
newtype Set a
= Data.HashSet.Set (Data.IntMap.IntMap (Data.HashSet.Some a))
(\\) :: Ord a => Set a -> Set a -> Set a
delete :: (Data.Hashable.Hashable a, Ord a) => a -> Set a -> Set a
difference :: Ord a => Set a -> Set a -> Set a
elems :: Set a -> [a]
empty :: Set a
Data.HashSet.filter :: Ord a => (a -> Bool) -> Set a -> Set a
fold :: (a -> b -> b) -> b -> Set a -> b
fromList :: (Data.Hashable.Hashable a, Ord a) => [a] -> Set a
insert :: (Data.Hashable.Hashable a, Ord a) => a -> Set a -> Set a
intersection :: Ord a => Set a -> Set a -> Set a
isProperSubsetOf :: Ord a => Set a -> Set a -> Bool
isSubsetOf :: Ord a => Set a -> Set a -> Bool
Data.HashSet.map ::
(Data.Hashable.Hashable b, Ord b) => (a -> b) -> Set a -> Set b
member :: (Data.Hashable.Hashable a, Ord a) => a -> Set a -> Bool
notMember ::
(Data.Hashable.Hashable a, Ord a) => a -> Set a -> Bool
Data.HashSet.null :: Set a -> Bool
partition :: Ord a => (a -> Bool) -> Set a -> (Set a, Set a)
singleton :: Data.Hashable.Hashable a => a -> Set a
size :: Set a -> Int
toList :: Set a -> [a]
union :: Ord a => Set a -> Set a -> Set a
unions :: Ord a => [Set a] -> Set a
haskell hash state-monad
3 Answers
Is the monadic interface to hash sets and maps gone in Haskell?
No, there's still a monadic hash map, Data.HashTable, which lives in the IO monad. (It's pretty annoying that it doesn't live in the ST monad, but that would make it slightly less
portable and slightly less easy to understand I suppose, because ST isn't Haskell 98.) It works pretty much like a hashtable in any imperative language. The performance
characteristics should be the same as well.
And of course from any map, including a hashtable, you can create a set, by storing dummy values (e.g. just map every key to itself).
6 Nitpick on the whole discussion: calling this a "monadic hash map" is pretty misleading, and I was confused by it. I was expecting a MonadHashTable class or something. I'd call it
an "impure hash table" or even a "hashtable with operations in IO". The word "monad" doesn't really belong here. – luqui May 23 '11 at 7:49
1 I don't see why. What's wrong with calling something "monadic" if has a bunch of functions which return values in a monad? – Robin Green May 23 '11 at 7:53
7 I think to call something "monadic" those functions should return results in an arbitrary monad, or at least any monad in a given type class. If they return only values in IO,
well... IO is a functor and a type constructor as well as a monad; why not call it "functorial" or "type constructoric"? – Alexey Romanov May 23 '11 at 7:59
3 @Alexey I just Googled monadic. The usage of the term "monadic parsing", including in a well-cited paper by Hutton, doesn't conform to your restricted definition of the word. –
Robin Green May 23 '11 at 8:02
3 Would you call functions that return lists monadic? If functions returning IO are called monadic then functions returning lists should be as well. – augustss May 23 '11 at 10:00
Data.HashMap and Data.HashSet use Patricia trees to store the hash, so the performance of the operations has the same asymptotic complexity as Data.Map. But that said, the constant factor is much smaller and they perform quite a lot faster in my experience.
There is still a HashTable, living in IO, available. However, that one is deprecated (precisely because it is not living in the ST monad) and will be removed in GHC 7.8.
Yet there is a monadic hash table available that lives in ST: see the hashtables package on Hackage.
|
{"url":"http://stackoverflow.com/questions/6094098/no-more-state-monad-version-of-hash-maps-sets-in-haskell/6094273","timestamp":"2014-04-23T15:32:41Z","content_type":null,"content_length":"78662","record_id":"<urn:uuid:8292a19d-04c5-4936-8da3-e23da385a4a8>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
|
matrix Multiplication
David 71david at libero.it
Wed Oct 18 14:51:16 CEST 2006
Il 18 Oct 2006 04:17:29 -0700, Sssasss ha scritto:
> hi evrybody!
> I wan't to multiply two square matrixes, and i don't understand why it
> doesn't work.
Can I suggest a little bit less cumbersome algorithm?
def multmat2(A,B):
    if len(A)!=len(B): return "error" # this check is not enough!
    n = range(len(A))
    C = []
    for i in n:
        C.append([0]*len(A)) # add a row to C
        for j in n:
            a = A[i] # get row i from A
            b = [row[j] for row in B] # get col j from B
            C[i][j] = sum([x*y for x,y in zip(a,b)])
    return C
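One quick way to sanity-check the routine is a hand-computed 2x2 product. The function is restated here so the snippet runs on its own:

```python
# Same square-matrix product as in the post, restated to be self-contained.
def multmat2(A, B):
    if len(A) != len(B):
        return "error"  # square-ness of each operand still isn't checked
    n = range(len(A))
    C = []
    for i in n:
        C.append([0] * len(A))              # add a row to C
        for j in n:
            a = A[i]                        # row i of A
            b = [row[j] for row in B]       # column j of B
            C[i][j] = sum(x * y for x, y in zip(a, b))
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert multmat2(A, B) == [[19, 22], [43, 50]]   # hand-checked 2x2 product
```

As the original comment notes, the length check is not enough: it only compares row counts, so ragged or non-square inputs can still slip through.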
More information about the Python-list mailing list
|
{"url":"https://mail.python.org/pipermail/python-list/2006-October/397781.html","timestamp":"2014-04-21T12:51:57Z","content_type":null,"content_length":"3024","record_id":"<urn:uuid:acd8f448-2f84-4cd7-aa88-03ca9ceb85b6>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is better, math or logic?
02-24-2006 #1
Registered User
Join Date
Aug 2005
What is better, math or logic?
Take the following code snippets that find the absolute value of the variable x.
This one uses logic:
if ( x < 0 )
x *= -1.0f;
And this one which uses math:
x = fabsf( x );
Given that the probability of x being negative is equal to the probability of x being positive, which of the two methods is best?
Last edited by thetinman; 02-24-2006 at 08:26 AM. Reason: english
Don't know much about how floating-point works in depth but I think that
if ( x < 0 )
x = -x;
would be faster for the first method.
The compiler would probably optimise both to be the same thing (assuming you meant fabs(), not fabsf()).
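One branch-free way to get |x| is sqrt(x*x). A quick Python check (just to verify that the branching and the purely arithmetic formulations agree numerically; any speed comparison only makes sense in compiled C++):

```python
import math

# abs via a branch vs. abs via arithmetic (sqrt(x*x) here). Both should
# agree exactly for these inputs, whose squares and roots are representable.
def abs_branch(x):
    return -x if x < 0.0 else x

def abs_math(x):
    return math.sqrt(x * x)

for x in (-7.0, -3.5, -1.0, 0.0, 2.25, 9.0):
    assert abs_branch(x) == abs_math(x)
```

In general, though, sqrt(x*x) can overflow for huge x and costs far more than a sign flip, which is part of why compilers lower both idioms to the same cheap sign-bit operation.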
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/76204-what-better-math-logic.html","timestamp":"2014-04-16T19:23:06Z","content_type":null,"content_length":"50719","record_id":"<urn:uuid:d885751e-cf21-4aee-8947-504f63c384d0>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Introduction to graph theory
Robin J. Wilson
Graph Theory has recently emerged as a subject in its own right, as well as being an important mathematical tool in such diverse subjects as operational research, chemistry, sociology and genetics.
Robin Wilson's book has been widely used as a text for undergraduate courses in mathematics, computer science and economics, and as a readable introduction to the subject for non-mathematicians.
The opening chapters provide a basic foundation course, containing such topics as trees, algorithms, Eulerian and Hamiltonian graphs, planar graphs and colouring, with special reference to the
four-colour theorem. Following these, there are two chapters on directed graphs and transversal theory, relating these areas to such subjects as Markov chains and network flows. Finally, there is a
chapter on matroid theory, which is used to consolidate some of the material from earlier chapters.
For this new edition, the text has been completely revised, and there is a full range of exercises of varying difficulty. There is new material on algorithms, tree-searches, and graph-theoretical
puzzles. Full solutions are provided for many of the exercises.
Robin Wilson is Dean and Director of Studies in the Faculty of Mathematics and Computing at the Open University.
Definitions and examples
Paths and cycles
|
{"url":"http://books.google.com/books?id=tSolAQAAIAAJ&dq=wilson+graph+theory&source=gbs_book_other_versions_r&cad=2","timestamp":"2014-04-21T07:26:20Z","content_type":null,"content_length":"112738","record_id":"<urn:uuid:82303b0c-4f5a-48a8-b440-c82fadfdbe08>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Copyright Notice
This text is copyright by CMP Media, LLC, and is used with their permission. Further distribution or use is not permitted.
This text has appeared in an edited form in SysAdmin/PerformanceComputing/UnixReview magazine. However, the version you are reading here is as the author originally submitted the article for
publication, not after their editors applied their creativity.
Please read all the information in the table of contents before using this article.
Like most algorithmic languages, Perl provides a mechanism to place portions of the code into a ``subroutine''. Subroutines can be used to provide easy re-use of algorithms, especially when someone
else has written the code. Subroutines can also make it easier to follow the logic of a program, because the details of a subroutine are hidden away from its use. In this column, I'm going to talk
about the basics of subroutine invocation and linkage, from parameters to recursion.
Let's take a simple problem. You have a bunch of numbers in @data, and you want to know the sum total of those numbers. You could write code that looks like this:
... code ...
$sum = 0;
foreach (@data) {
    $sum += $_;
}
# now use $sum
This initializes the value $sum to zero, and then adds each element of @data into the current value of $sum. We can wrap this up into a subroutine like so:
sub sum_data {
    $sum = 0;
    foreach (@data) {
        $sum += $_;
    }
}
and when we want to set $sum equal to the current value of @data, simply invoke the subroutine:
This works. I can type the code to add @data into $sum once, somewhere in the program (often towards the end), and then re-use the code repeatedly by invoking the subroutine from different places in
the main part of the code.
The result is left in the variable $sum. However, every subroutine invocation also returns a value, because technically the invocation is always within some expression. (In this case, the
expression's value is thrown away.) This ``return value'' of a subroutine is whatever expression is evaluated last within the subroutine. As it turns out, the last thing evaluated in this subroutine
is always the $sum += $_ line, which will result in the return value being the same as $sum!
So, we can write the subroutine invocation like this:
$total = &sum_data();
and $total will also be the same value as $sum. Or even:
$two_total = &sum_data() + &sum_data();
which evaluates the sum twice, ending up in $two_total. This is wasteful, of course, and would probably be reduced to:
$two_total = 2 * &sum_data();
in a real program.
If you can't tell that $sum is the return value of &sum_data(), you can also put $sum explictly as the last expression evaluated, like so:
sub sum_data {
    $sum = 0;
    foreach (@data) {
        $sum += $_;
    }
    $sum; # return value
}
Note that $sum as an expression on its own is enough to make it the last expression evaluated within the subroutine, and therefore the return value of the subroutine.
This subroutine is interesting, but it is limited to computing the sum of values in the @data array. What if we had @data_one and @data_two? We'd have to write a different version of &sum_data() for
each array. Well, no, that's not necessary. Just as a subroutine can return back a value, it can also take a list of values as arguments or parameters:
$total = &sum_this(@data);
In this case, the values of @data are collected up into a new array called @_ within the subroutine, like so:
sub sum_this {
    $sum = 0;
    foreach (@_) {
        $sum += $_;
    }
}
Note that all I've done is change @data to @_, which holds the values of the passed-in parameters. The values passed to the subroutine are constructed from any list. For example, I can also say:
$more_total = &sum_this(5,@data);
which will prepend 5 to @data, yielding an array in @_ that is one element larger than @data.
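As a cross-language aside (not part of the original article): Python's *args gathers the caller's arguments into one flat sequence much as Perl flattens them into @_, so the same calling patterns carry over directly.

```python
# args plays the role of Perl's @_: a flat tuple of every argument passed in.
def sum_this(*args):
    total = 0
    for value in args:
        total += value
    return total

data = [1, 2, 3]
assert sum_this(*data) == 6          # like &sum_this(@data)
assert sum_this(5, *data) == 11      # like &sum_this(5, @data)
```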
The routine &sum_this is pretty useful now. However, what if I'm using the $sum variable in some other part of my program? By default, all variables within a subroutine refer to the global use of
those variables, so the &sum_this routine will clobber whatever value was previously in $sum. To fix this, I can (and should) make the $sum variable local to the subroutine:
sub sum_this {
    my $sum = 0;
    foreach (@_) {
        $sum += $_;
    }
}
Now, for the duration of this routine, the variable $sum refers not to a global $sum, but to a new local variable that is thrown away as soon as the subroutine returns.
If you are not yet up to Perl version 5 (released roughly a year ago, but surprisingly, some have not gotten with the program yet), you can use Perl version 4's construct called ``local'', which
performs a similar function:
sub sum_this {
    local($sum) = 0;
    foreach (@_) {
        $sum += $_;
    }
}
However, had this subroutine (with ``local'' instead of ``my'') called another subroutine, any reference in that subroutine to $sum would have accessed this subroutine's $sum, rather than the global
$sum, and that can get quite confusing to say the least.
If you had a program with &sum_data, and also added &sum_this, you could rewrite &sum_data in terms of &sum_this like so:
sub sum_data { &sum_this(@data); }
I have done this from time to time as I rewrite specific subroutines into general ones.
A subroutine can return a list of values rather than just a single value (a scalar). Let's hack on this subroutine a bit to get it to return all the intermediate sums instead of just the final sum:
sub sum_this {
    my $sum = 0;
    my @sums;
    for (@_) {
        $sum += $_;
        push @sums, $sum;
    }
    @sums; # return value
}
Now, what's going on here? I'm creating a new array called @sums that will hold the incremental results of adding each new element to the sum. As each sum is calculated, it is pushed onto the end of
the list. When the loop is complete, the value of @sums is returned. This means I can call this subroutine like so:
@result = &sum_this(1,2,3);
print "@result\n"; # prints "1 3 6\n"
What happens when I call this subroutine in a scalar context (such as assigning the result to a scalar)? Well, the scalar context gets passed down into the last expression evaluated -- in this case,
the name of @sums. The name of an array in a scalar context is the number of elements in the array, so we'll get:
$what = &sum_this(1,2,3);
print $what;
which will print ``3'', the number of elements in the return value. With a little bit of trickery, we can combine the two kinds of subroutines into one:
sub sum_this {
    my $sum = 0;
    my @sums;
    for (@_) {
        $sum += $_;
        push @sums, $sum;
    }
    if (wantarray) {
        @sums; # list context: all the partial sums
    } else {
        $sum;  # scalar context: just the final sum
    }
}
In this case, if the subroutine is being invoked in an array context (assigned to an array, for example), the builtin value ``wantarray'' is true, and the @sums array is returned. If not, the builtin
value ``wantarray'' is false, so $sum gets returned instead.
Now, we get results like this:
$total = &sum_this(1,2,3); # gets 6
@totals = &sum_this(1,2,3); # gets 1,3,6
Obviously, a subroutine designer has a lot of flexibility. If you implement a general-purpose subroutine for others, be sure to consider what it does in a scalar and array context, and if it makes sense, use the wantarray construct to make sure that it returns an appropriate value.
Like most algorithmic languages, Perl supports recursive subroutines. This means that a subroutine can call itself to perform a part of the task. A classic example of this is a subroutine to
calculate a Fibonacci number F(n), defined as follows:
F(0) = 0;
F(1) = 1;
F(n) = F(n-1)+F(n-2) for n > 1;
Now, this definition can be translated directly into a Perl subroutine as follows:
sub F {
    my ($n) = @_;
    if ($n == 0) {
        0;
    } elsif ($n == 1) {
        1;
    } else {
        &F($n - 1) + &F($n - 2);
    }
}
This will indeed generate the correct result. However, for a large value of $n, the subroutine will be called repeatedly with the exact same values of numbers smaller than $n. For example, the call
to compute F(10) will compute F(9) and F(8). However, the call to compute F(9) will also call F(8) again, and so on.
A quick solution to this is to maintain a cache of the previous return values. Let's call the cache @F_cache, and use it as follows:
sub F {
    my ($n) = @_;
    if ($n == 0) {
        0;
    } elsif ($n == 1) {
        1;
    } elsif ($F_cache[$n]) {
        $F_cache[$n];
    } else {
        $F_cache[$n] =
            &F($n - 1) + &F($n - 2);
    }
}
Now, if a number greater than 1 is passed into this function, one of two things can happen: (1) if the number has been computed already, we simply return the computed value, or (2) if the number
hasn't been computed, we compute the value, and remember it for a possible future invocation. Note that the assignment to @F_cache is also the return value.
I've used this technique on many subroutines that have an expensive value to calculate. For example, mapping the IP number to a name via DNS can take a little while, so I wrote a routine that
remembers the previous return values that it has seen, thereby saving the second and subsequent lookups (at least in this particular invocation of the program). The subroutine looked like this:
sub ip_to_name {
    if ($ip_to_name{$_[0]}) {
        $ip_to_name{$_[0]};
    } else {
        $ip_to_name{$_[0]} =
            ... calculations ...
    }
}
Here, the first parameter $_[0] is looked up as the key of an
associative array. If the entry is found, that value is returned
immediately, otherwise a new value is calculated and remembered for
future invocations. This kind of cache is a speed-up only when the
subroutine is likely to be called with multiple instances of the same
argument -- otherwise, you're just wasting time.
I hope you've enjoyed this little excursion into subroutines. Next time, I'll probably talk about something different.
The set of $\Delta_1$ indices
Is the set of Gödel numbers of $\Delta_1$ formulae itself $\Delta_1$ definable (i.e., computable)?
By $\Delta_1$ formula, do you mean a pair of $\Sigma_1$ formulas that define complementary sets, or a pair of $\Sigma_1$ formulas that are proved to define complementary sets by some theory (e.g.,
PA)? (I'm pretty sure the answer is no in both cases.)
Trevor Wilson Aug 11 '12 at 16:11
I'm happy to work with either of those definitions.
neophyte neologican Aug 11 '12 at 16:46
The answer is no, in either of the cases Trevor outlined.
For example, for a fixed $e$, consider the formulas $p(x)\equiv$"There is some $n$ such that $\Phi_e(x)[n]\downarrow=1$" and $q(x)\equiv$"There is some $n$ such that $\Phi_e(x)[n]\downarrow=0$."
These formulas are both $\Sigma^0_1$, and they define complementary sets if and only if $\Phi_e$ is a total $\lbrace 0, 1\rbrace$-valued function. So if the set of indices for pairs of $\Sigma^
0_1$-formulas defining complementary sets was $\Delta^0_1$, we'd have that the set of indices for total $\lbrace 0, 1\rbrace$-valued functions is computable, which is a clear contradiction.
Something similar will work in the other case Trevor described. The basic idea is that the set of indices of pairs of $\Sigma^0_1$-formulas which PA (say) proves are complementary is c.e. (just
search through proofs) but not computable. To flesh that out, for $e$ a natural number, consider the $\Sigma^0_1$-formulas $p_e(x)\equiv$"$x\not=x$" (defining $\emptyset$) and $q_e(x)\equiv$"$\
exists s(e\in W_{e, s}\vee x=2)$." These formulas are complementary iff $e\in W_e$, and moreover are complementary iff PA proves that they are complementary (I guess I'm assuming soundness of PA
here, but that's not a huge assumption to make). But if the indices for PA-provably complementary $\Sigma^0_1$-formulas were $\Delta^0_1$, we could use the above setup to compute the set of $e$ such
that $e\in W_e$ - i.e., the Halting Problem. So the answer is again no. (Note that the only properties of PA that are being used here are soundness, and the fact that it is sufficiently strong to
express and prove basic facts about c.e. sets.)
ceil — Returns the smallest integer not less than x
ceil(x) (init-, control-, or audio-rate arg allowed)
where the argument within the parentheses may be an expression. Value converters perform arithmetic translation from units of one kind to units of another. The result can then be a term in a further expression.
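The rounding rule itself can be sketched outside Csound; Python's math.ceil (my choice of illustration, not part of the Csound manual) behaves the same way:

```python
import math

# ceil returns the smallest integer not less than x,
# so negative values round toward zero.
print(math.ceil(2.1))    # 3
print(math.ceil(-2.1))   # -2
print(math.ceil(3.0))    # 3
```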
- Given a regular hexagon with one of its sides 8 cm, it was cut into congruent triangles with center O. What is the sum of the lengths of all the midsegments of all the triangles?
- The vertices of a kite are (3, 0), (0, 4), (-3, 0) and (0, -8). The midpoints of the sides of the kite are joined to form a quadrilateral. Find the area of the quadrilateral.
- Find the value of z from the figure.
- In the figure, AD¯ is the median of ΔABC. BE : ED = 2 : 3. P is the midpoint of AB¯ and PQ¯ || BC¯. Find the length of PQ¯.
- Which of the following is/are correct?
  1. Lines joining the midpoints of the sides of a square form a square.
  2. Lines joining the midpoints of the sides of a rhombus form a rhombus.
  3. Lines joining the midpoints of the sides of a parallelogram form a parallelogram.
  4. Lines joining the midpoints of the sides of a rectangle form a rectangle.
- Find the value of (x² + 2).
- XY¯ is the length of the midsegment drawn for the two triangles. Find the length of FY¯.
- Find the length of midsegment EF¯.
- Find the value of DE¯.
- Which of the following statements are correct?
  1. If BC¯ || DE¯, then D and E are the midpoints of AB¯ and AC¯.
  2. If the length of BC¯ is twice the length of DE¯, then D and E are the midpoints of AB¯ and AC¯.
  3. If D is the midpoint of AB¯ and DE¯ || BC¯, then E is the midpoint of AC¯.
- AB¯ || DE¯. D is the midpoint of AC¯. Find the coordinates of D and E.
- Find the point F from the figure if AB¯ || GH¯.
- The length of one diagonal of a rhombus is twice the other. The area of the rectangle obtained by joining the midpoints of the sides of the rhombus is 18 units. Find the length of the longer diagonal.
- Find the value of x from the figure.
- Find the value of x from the given figure.
- What type of triangle is formed by joining the midsegments of an equilateral triangle?
- Find the value of x from the figure.
- Find the value of x from the figure.
- Find the values of x and y from the following equilateral triangle.
- In the given triangle, if FG¯ is the midsegment of ΔAED and DE¯ is the midsegment of ΔACB, then find the value of x.
- If x is the length of MN¯, which is the midsegment of ΔABC, then find the value of BC¯.
- Given a regular hexagon with one of its sides 4 cm, it was cut into congruent triangles with center O. What is the sum of the lengths of all the midsegments of all the triangles?
- The length of the diagonal of a square is 12 in. The two diagonals intersect to form four congruent triangles. Find the length of the mid-segment of any triangle parallel to the side of the square.
- Find the value of x from the figure if OC = CE and OD = DF.
- Find the values of x and y from the given figure.
- ABCD is a rhombus. E, F, G and H are the midpoints of the sides. What can you say about the area of EFGH?
- ABCD is a quadrilateral. P, Q, R and S are the midpoints of sides AB¯, BC¯, CD¯ and DA¯ respectively. What is the best description of PQRS?
- ABCD is a quadrilateral with vertices A(0, 0), B(4, 1), C(6, 8) and D(0, 3). P, Q, R and S are the midpoints of AB¯, BC¯, CD¯ and DA¯ respectively. Find the perimeter of PQRS.
Posts by Hero
Total # Posts: 39
If a rectangle has a perimeter of 12 meters and each side is a whole number of meters, what are the possible dimentions for the length and width? List them and explain your answer.
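A hypothetical brute-force check of the rectangle question above (the code and variable names are mine):

```python
# Perimeter 12 means length + width = 6; enumerate whole-number pairs,
# counting (l, w) and (w, l) as the same rectangle.
dims = [(l, 6 - l) for l in range(1, 6) if l <= 6 - l]
print(dims)  # [(1, 5), (2, 4), (3, 3)]
```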
Explain why its necessary to rename 4 1/4 if you subtract 3/4 from it.
Oh it was supposed to be the other way. Whoops
Which is a good comparison of the estimated sum and the actual sum of 7 7/8 + 2 11/12? A. Estimated < actual B. Actual > estimated C. Actual = estimated D. Estimated < actual I chose D
Can the sum of two mixed numbers be equal to 2? Explain why or why not.
Which of the following represents the difference between two equal fractions? A.1 B.1/2 C.1/4 D.0
Math Explanation
How can estimating be helpful before finding an actual product.
Marco and Suzi each multiplied 0.721 x100. Marco got 7.21 for his product. Suzi got 72.1 for her product. Which student multiplied correctly? Why?
Math Explaining
How would you use a number line to round 347 to the nearest ten?
Which number of markers can not be divided equally except by giving to one person? A)4 B)12 C)19 D)39
What is the prime factorization of 12? A.2^2*3 B.2^3*3 C.3*4^2 D.2*3*4
Adult tickets cost $6 and children's tickets cost $4. Mrs. LeCompte says that she paid $30 for tickets, for both adults and children. How many of each ticket did she buy?
Math Explanation
Marcy wants to put tiles on a bathroom floor that measures 10 ft x 12 ft. What kind of square tiles should she buy to tile her floor? Explain.
No, like what the answer is.
I still dont understand
What size rectangular floor can be completely covered by using only 3x3 or 5x5 ft tiles? You can't cut tiles or combine the two tile sizes.
Math Critical Thinking
So each 12 months 3 times, so 6 in 2 years
Quantum physics
What is the superposition that results if you apply the Hadamard transform H⊗n to the state (1/√2)(|0^n⟩ + |1^n⟩)? (Here, |0^n⟩ := |00⋯0⟩ (n zeros) and |1^n⟩ := |11⋯1⟩ (n ones).)
Quantum physics
Q11 answer please......
Quantum physics
any help for Q 3 and 11 I am not sure what to tick....... sorry anoy.....it was just a assumption.
Quantum physics
answer for Q3 is option 4 answer for Q11 is option 2 Please revert..........with thanks and let me know if any help needed
Quantum physics
anoy, answer for 8 is 8(a) 1/sqrt(2) |00> 0 |01> 0 |10> 1/sqrt(2) |11> 8(b) 0 |00> 1/sqrt(2) |01> -1/sqrt(2) |10> 0 |11> 8(c) 1/2 |00> 1/2 |01> -1/2 |10> 1/2 |11>
Quantum physics
Any help for Q3, posted above here
could you please guide me from where should I start learning German Language and its prunciation from scratch
17g A=(.5)t/k
How do we find the molar mass of a gas produced? Example: if 1.00 kg of N2H4 reacts with 1.00 kg of H2O2, how many moles of N2 could be formed? The equation is N2H4 + 2H2O2 -> N2 + 4H2O
oral cavity - the opening through which food is taken in and vocalizations emerge; "he stuffed his mouth with candy" mouth, oral fissure, rima oris teeth, dentition - the kind and number and
arrangement of teeth (collectively) in a person or animal glossa, lingua, to...
By studying in the USA, I believe that I am not only getting an internationally recognized education, but also a thrifty one. My decision to study here will help me become independent. In addition,
the money I save by studying here will eventually help me realize my life's...
Essay Check
This is a paragraph in my essay. Please tell me if this makes any sense. (basically I want to save money by earning a quality and cheap education so I can save money to be able to open a free clinic to help people)
And what about the second one, can you explain that to me?
The following combinations of quantum numbers are not allowed: 1. n=3 l=0 m=-1 2. n=4 l=4 m=0 Can someone explain to me why the second one is not allowed? Also check my explanation for the first one: if l=0 then m also has to be 0. The s subshell (ℓ = 0) contains only...
Diversity Essay
This is my second essay, please I want some constructive criticism on this. Moreover this site does not give away our essays so they can be seen as plagiriazed correct? In my long distance trip to
United Arab Emirates which I also call home, I volunteered at this posh governme...
Pre-Dental Essay
I have to write a personal comments essay, please help me out in figuring if this is well written,makes sense and is worth it.. Truth be told I am fascinated by teeth. I find each molar precious and
I want to preserve and pamper every atom that builds a tooth. Not only does yo...
12th grade Math - word problems
Who Knows
a rectangular box of given volume is to be "a" times long as it is wide. find the dimension for which it has the least total surface area ? L= aw Total surface area= 2LW + 2Lh + 2Wh = 2aw^2 + 2awh +
2wh But volume= lwh= 2aw^2 h or h= V/2aw^2 Total Surface area= = 2aw...
Formula for Calculating the Number of Theoretical Plates : SHIMADZU (Shimadzu Corporation)
N, the number of theoretical plates, is one index used to determine the performance and effectiveness of columns, and is calculated using equation (1):

N = 16 (tr / W)^2    ... (1)

where tr is the retention time and W is the peak width.
This peak width, W, is based on the baseline intercepts of tangent lines to a Gaussian peak, which is equivalent to the peak width at 13.4 % of the peak height.
However, to simplify the calculation and accommodate non-Gaussian peaks, the following calculation methods are used in actual practice.
1. Tangent Line Method
Peak width is the distance between points where lines tangent to the peak's left and right inflection points intersect the baseline, and is calculated using equation (1). The USP (United States
Pharmacopeia) uses this method. This results in small N values when peak overlap is large.
This also presents a problem if the peak is distorted, so that it has multiple inflection points.
2. Half Peak Height Method
Width is calculated from the width at half the peak height (W[0.5]). Since width can be calculated easily by hand, it is the most widely used method. This is the method used by the DAB (German
Pharmacopeia), BP (British Pharmacopeia), and EP (European Pharmacopeia).
The Japanese Pharmacopoeia 15th revision issued in April 2006 changed the coefficient from 5.55 to 5.54.
(LCsolution allows selecting the coefficient via the [Column Performance] setting, where the calculation method for 5.54 is "JP" and for 5.55 is "JP2.")
For broader peaks, the half peak height method results in larger N values than other calculation methods.
3. Area Height Method
Width is calculated from the peak area and height values. This method provides relatively accurate and reproducible widths, even for distorted peaks, but results in somewhat larger N values when peak
overlap is significant.
4. EMG Method (Exponentially Modified Gaussian)
This method introduces parameters that accommodate the asymmetry of peaks, and uses the peak width at 10 % of the peak height (W[0.1]). Since it uses a width near the baseline, it results in N values
larger than other methods for broad peaks. Furthermore, it cannot calculate the width unless the peak is completely separated.
N = 41.7 (tr / W[0.1])^2 / (b[0.1] / a[0.1] + 1.25)    ... (4)
where a[0.1] is the width of the first half of the peak at 10 % height, b[0.1] is the width of the second half of the peak at 10 % height, and W[0.1] = a[0.1] + b[0.1].
Comparison of Calculation Methods
Given a Gaussian peak, each of these calculation methods results in the same N value. However, peaks normally tend to have some tailing, which results in different N values for different calculation methods.
Therefore, the four calculation methods were compared using chromatograms. Profile A shows a typical chromatogram (with some tailing), whereas profile B shows a chromatogram with significant tailing.
The theoretical number of plates calculated using the four methods are indicated in the table below. Results for N varied even for chromatogram A. Also, peaks with more significant distortion, such
as at peak 1 in profile B, can result in N values that differ by many times.
A key factor for performing reliable quantitative analysis is whether or not separation is possible, so there is a common opinion that a calculation method that judges broader peaks, such as with
tailing, more severely is more practical. However, unfortunately, there seems to be no consensus on opinions regarding N and W.
Consequently, if a certain method is already being used for evaluation, then to achieve correlation, it is probably preferable to keep using the same method.
Comparison of Theoretical Number of Plates
                          A (roughly typical peak)         B (significant tailing)
                          Peak 1  Peak 2  Peak 3  Peak 4   Peak 1  Peak 2  Peak 3  Peak 4
Half Peak Height Method   15649   20444   20389   22245    5972    7917    -       9957
Tangent Line Method       14061   18516   20309   21447    5773    7692    5795    9707
Area Height Method        13828   19207   17917   21020    4084    7845    6217    8641
EMG Method                10171   15058   14766   17836    1356    -       -       4671
A hyphen indicates calculation was not possible. In the half peak height method, 5.54 was used as the coefficient.
Shimadzu's LC workstation software is able to output performance reports using any of the methods indicated above - 1. tangent line, 2. half peak height (5.54), 2'. half peak height (5.55), 3. area
height, or EMG. We recommend recording the corresponding column performance results along with analytical results!
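The plate-count formulas above can be sketched numerically. The sketch below (my own, using the standard coefficient 16 for the tangent-line method and 5.54 for the half-height method) confirms that for an ideal Gaussian peak the two methods agree closely, while for tailing peaks they would diverge as the table shows:

```python
import math

def n_tangent(tr, w_base):
    # Tangent-line method: w_base is the baseline width between tangent intercepts.
    return 16 * (tr / w_base) ** 2

def n_half_height(tr, w_half, coeff=5.54):
    # Half-height method; coeff is 5.54 (JP) or 5.55 (JP2).
    return coeff * (tr / w_half) ** 2

# Ideal Gaussian peak with standard deviation sigma and retention time tr:
sigma, tr = 0.10, 5.0
w_base = 4 * sigma                               # tangent-intercept width
w_half = 2 * math.sqrt(2 * math.log(2)) * sigma  # ~2.355 * sigma

print(round(n_tangent(tr, w_base)))      # 2500
print(round(n_half_height(tr, w_half)))  # 2498 -- near-identical for a true Gaussian
```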
For Research Use Only. Not for use in diagnostic procedures.
This page may contain references to products that are not available in your country.
Please contact us to check the availability of these products in your country.
History of Black Women in the Mathematical Sciences
The first American woman to earn a Ph.D. in Mathematics was Winifred Edgerton Merrill (Columbia U., 1886). In the first half of the twentieth century many Black women obtained a master's degree in Mathematics (my mother was one); however, it was not until 1943, 20 years after the first African American earned a Ph.D. in Mathematics, that a Black woman reached that level.
1943 The first african american woman to earn a Ph.D. in Mathematics, and the ninth african american to earn a Ph.D. in Mathematics, was Euphemia Lofton Haynes (Catholic University).
1949 The second african american woman to earn a Ph.D. in Mathematics was Evelyn Boyd Granville (Yale University).
1950 The third african american woman to earn a Ph.D. in Mathematics was Marjorie Lee Browne (University of Michigan).
1956 Gloria Ford Gilmer is the first african american woman to publish a non-Ph.D.-thesis mathematics research paper. Had she not stopped grad school at the University of Wisconsin for marriage, she would have been the fourth african american woman to earn a Ph.D. in Mathematics. Some years later she earned a doctorate in Curriculum Instruction. However, during her entire career she has been a major force and an instrumental figure for the advancement of african americans in the mathematical community.
Euphemia Lofton Haynes
Evelyn Boyd Granville
Marjorie Lee Browne
Gloria Ford Gilmer
Gloria Conyers Hewitt
1960 Argelia Velez-Rodriguez becomes the fourth african american woman to earn a Ph.D. in Mathematics (University of Havana).
1961 Sadie Gasaway is the fifth african american woman to earn the Ph.D. (Cornell University). Georgia Caldwell Smith passed her Ph.D. defense in 1960, but died before the degree was conferred; it was awarded posthumously in 1961. Lillian K. Bradley became the first Black woman to earn any kind of doctorate in any field at the University of Texas when she earned her D.Ed. in Mathematics Education. This is important because of the racist attitudes in the University of Texas Mathematics Department.
1962 The seventh and eighth african american women to earn a Ph.D. in Mathematics were Gloria Conyers Hewitt (University of Washington - Seattle) and Louise Nixon Sutton (New York University).
1963 Grace Lele Williams became the first Nigerian (perhaps African) woman to earn any doctorate when she got her Ph.D. in Mathematics Education (University of Chicago).
1965 The ninth and tenth african american women to earn a Ph.D. in Mathematics were Beryl Eleanor Hunte (New York University) and Thyrsa Frazier Svager (Ohio State University).
1966 The eleventh, twelfth and thirteenth african american women earned a Ph.D. in Mathematics this year: Eleanor Dawley Jones at Syracuse University; Shirley Mathis McBay at the University of Georgia; and Vivienne Malone Mayes, who was also the first african american woman Ph.D. in any field at the University of Texas at Austin. Mayes' struggle is typical of the difficulty faced by african americans who attempted to earn a doctorate in the South prior to 1970.
Argelia Velez-Rodriguez
Louise Nixon Sutton
Eleanor Dawley Jones
Vivienne Malone-Mayes
Shirley Mathis McBay
1967 Geraldine Darden was the fourteenth african american woman to earn a Ph.D. in Mathematics (Syracuse University).
1968 Mary Lovenia DeConge-Watson was the fifteenth african american woman to earn a Ph.D. in Mathematics (St. Louis University).
1969 Etta Zuba Falconer was the sixteenth african american woman to earn a Ph.D. in Mathematics (Emory University).
1970 Genevieve Knight was the seventeenth african american woman to earn a Ph.D. in Mathematics Education (University of Maryland).
Geraldine Darden
Sister Mary L. S. DeConge
Etta Falconer
Genevieve Knight
1971 Joella H. Gipson (University of Illinois) and Dolores Spikes (Louisiana State University) were the eighteenth and nineteenth african american women to earn a Ph.D. in Mathematics.
1972 Rada Higgins McCreadie was the twentieth african american woman to earn a Ph.D. in Mathematics (Ohio State University). Also in this year, Prince Winston Armstrong earned her D.Ed. in Mathematics Education from the University of Oklahoma.
1972 Willie Hobbs Moore (University of Michigan) is the first African American woman to earn a Ph.D. in Physics.
1973 Evelyn Thornton (University of Houston) earned the Ph.D.
1976 Shirley Ann Jackson is the second African American woman to earn a Ph.D. in Physics (M.I.T., 1973).
1978 Fern Hunt (The Courant Institute of New York University) earns a Ph.D. in Mathematics.
1979 Fanny Gee (University of Pittsburgh) earned a Ph.D. in Mathematics.
Joella Gipson
Dolores Spikes
Fern Hunt
Kate Okikiolu
1980 The first book on African American Mathematicians, Black Mathematicians and their Works , finally published by Virginia K. Newell, Joella H. Gipson, L. W. Rich, and B. Stubblefield.
1985 Gloria Gilmer co-founds ISGEm, the Ethnomathematics Organization.
1990 AMUCWMA - The African Mathematical Union Commission on Women in Mathematics in Africa is founded with Grace Lele Williams as Chairman.
1992 Gloria Gilmer is the first woman to deliver a major National Association of Mathematicians lecture (the Cox-Talbot Address).
1997 Kate Okikiolu becomes the first Black to be awarded Mathematics' most prestigious young person's award, the Sloan Research Fellowship. She also won the new $500,000 Presidential Early Career
Awards for Scientists and Engineers.
Dorothy Jones, in 1994-95 became the first Black woman to be National president for the Canadian Operational Research Society.
2000 The first school to graduate three African American women Ph.Ds in one year: The University of Maryland also graduates its first Mathematics women Ph.D. (Knight's 1971 degree was in Education).
They are:
Tasha Inniss, Sherry Scott and Kimberly Weems.
T. Inniss, S. Scott, K. Weems
2001 Kate Okikiolu becomes the first Black woman to publish in the best mathematics journal, The Annals of Mathematics.
These web pages are brought to you by
The Mathematics Department of
The State University of New York at Buffalo.
created and maintained by
Dr. Scott W. Williams
Professor of Mathematics
CONTACT Dr. Williams
"In ancient Kushite-Kemetan space science, they used the cosmis rhythm of the female body and the phases of the moon as the basis for reckoning time. [John G.] Jackson says, 'At an extremely early
date, a connection had doubtless been observed between the 28-day cycle of the moon and the menstrual cycle of woman, and between changes of the moon and ocean tide.' This was the beginnings of the
lunar calendar and as fas as we know the first calendar. As the Kemetan astronomers continued to observe the roundness of the heavenly bodies, 2 days-symbolizing the duality of the masculine and
feminine force-were added to the 28 day month making 30 days so that at the end of the 12 months the calendar would fulfill the 360 degree cycle-circle of all things. After further investigation they
created the 365 day calendar year to synchronize with the journey of the golden orb of our solar system. There is a very interesting story of how and why the god Tehuti added the 5 days and created
the solar calendar which again is directly connected with the birth cycle."
~Ishakamusa Barashango, Afrikan Woman the Original Guardian Angel
Watch and Clock Escapements Summary
The reader is urged to make the drawings for himself on a large scale, say, an escape wheel 10” pitch diameter. Such drawings will enable him to realize small errors which have been tolerated too
much in drawings of this kind. The drawings, as they appear in the cut, are one-fourth the size recommended, and many of the lines fail to show points we desire to call attention to. As for instance,
the pallet center at B is tangential to the pitch circle a from the point of tooth contact at f. To establish this point we draw the radial lines A c and A d from the escape-wheel center A, as shown,
by laying off thirty degrees on each side of the intersection of the vertical line i (passing through the centers A B) with the arc a, and then laying off two and a half degrees on a and establishing
the point f, and through f from the center A draw the radial line A f’. Through the point f we draw the tangent line b’ b b’’, and at the intersection of the line b with i we establish the center of
our pallet staff at B. At two and a half degrees from the point c we lay off two and a half degrees to the right of said point and establish the point n, and draw the radial line A n n’, which
establishes the extent of the arc of angular motion of the escape wheel utilized by the pallet arm.
[Illustration: Fig. 90]
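As a rough numeric check of the tangent construction described above (my own sketch, assuming the tangent point f lies 30 degrees from the line of centers, as in the common layout; the text's additional two-and-a-half-degree offset would shift the result slightly):

```python
import math

def pallet_center_distance(pitch_radius, tangent_angle_deg=30.0):
    # A tangent to the pitch circle at a point theta degrees from the line of
    # centers meets that line at distance r / cos(theta) from the escape-wheel
    # center A -- this is where the pallet staff center B is located.
    return pitch_radius / math.cos(math.radians(tangent_angle_deg))

# For the recommended 10-inch pitch diameter (radius 5 inches):
print(round(pallet_center_distance(5.0), 4))  # 5.7735
```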
We have now come to the point where we must exercise our reasoning powers a little. We know the locking angle of the escape-wheel tooth passes on the arc a, and if we utilize the impulse face of the
tooth for five degrees of pallet or lever motion we must shape it to this end. We draw the short arc k through the point n, knowing that the inner angle of the pallet stone must rest on this arc
wherever it is situated. As, for instance, when the locking face of the pallet is engaged, the inner angle of the pallet stone must rest somewhere on this arc (k) inside of a, and the extreme outer
angle of the impulse face of the tooth must part with the pallet on this arc k.
With the parts related to each other as shown in the cut, to establish where the inner angle of the pallet stone is located in the drawing, we measure down on the arc k five degrees from its
intersection with a, and establish the point s. The line B b, Fig. 90, as the reader will see, does not coincide with the intersection of the arcs a and k, and to conveniently get at the proper
location for the inner angle of our pallet stone, we draw the line B b’, which passes through the point n located at the intersection of the arc a with the arc k. From B as a center we sweep the
short arc j with any convenient radius of which we have a sixty-degree
- COMPUTATIONAL OPTIMIZATION AND APPLICATIONS , 1997
Cited by 144 (13 self)
The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior--point methods for linear and quadratic programming. Major modifications
include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust.
Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient, and has the promise of greatly reducing solution times on at least some classes of models.
, 2000
Cited by 23 (0 self)
1 Introduction 1 Testing Methods 2 1 Largest Small Polygon 3 2 Distribution of Electrons on a Sphere 5 3 Hanging Chain 7 4 Shape Optimization of a Cam 9 5 Isometrization of ff-pinene 11 6 Marine
Population Dynamics 13 7 Flow in a Channel 16 8 Robot Arm 18 9 Particle Steering 21 10 Goddard Rocket 23 11 Hang Glider 26 12 Catalytic Cracking of Gas Oil 29 13 Methanol to Hydrocarbons 31 14
Catalyst Mixing 33 15 Elastic-Plastic Torsion 35 16 Journal Bearing 37 17 Minimal Surface with Obstacle 39 Acknowledgments 41 References 41 ii Benchmarking Optimization Software with COPS by
Elizabeth D. Dolan and Jorge J. Mor'e Abstract We describe version 2.0 of the COPS set of nonlinearly constrained optimization problems. We have added new problems, as well as streamlined and
improved most of the problems. We also provide a comparison of the LANCELOT, LOQO, MINOS, and SNOPT solvers on these problems. Introduction The COPS [5] test set provides a modest selection of
difficult nonlinearly constrai...
Cited by 18 (1 self)
1 1 Introduction 1 2 Largest Small Polygon (Gay [8]) 3 3 Distribution of Electrons on a Sphere (Vanderbei [13]) 5 4 Sawpath Tracking (Vanderbei [13]) 7 5 Hanging Chain (H. Mittelmann, private
communication) 10 6 Shape Optimization of a Cam (Anitescu and Serban [1]) 12 7 Isometrization of ff-pinene (MINPACK-2 test problems [3]) 15 8 Marine Population Dynamics (Rothschild et al. [11]) 18 9
Flow in a Channel (MINPACK-2 test problems [3]) 21 10 Non-inertial Robot Arm (Vanderbei [13]) 24 11 Linear Tangent Steering (Betts, Eldersveld, and Huffman [4]) 29 12 Goddard Rocket (Betts,
Eldersveld, and Huffman [4]) 32 13 Hang Glider (Betts, Eldersveld, Huffman [4]) 35 14 Implementation of COPS in C 38 References 43 ii COPS: Large-Scale Nonlinearly Constrained Optimization Problems
Alexander S. Bondarenko, David M. Bortz, and Jorge J. Mor'e Abstract We have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization ProblemS. The primary
purpose of this col...
- MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY , 2004
- Proc. 11th ACM Symp. on Computational Geometry , 1995
Cited by 12 (2 self)
A thrackle is a graph drawn in the plane so that its edges are represented by Jordan arcs and any two distinct arcs either meet at exactly one common vertex or cross at exactly one point interior to
both arcs. About forty years ago, J. H. Conway conjectured that the number of edges of a thrackle cannot exceed the number of its vertices. We show that a thrackle has at most twice as many edges as
vertices. Some related problems and generalizations are also considered. 1
- IN AUTOMATIC DIFFERENTIATION OF ALGORITHMS: THEORY, IMPLEMENTATION, AND APPLICATION , 1991
Cited by 10 (9 self)
We describe favorable experience with automatic differentiation of mathematical programming problems expressed in AMPL, a modeling language for mathematical programming. Nonlinear expressions are
translated to loop-free code, which makes analytically correct gradients and Jacobians particularly easy to compute -- static storage allocation suffices. The nonlinear expressions may either be
interpreted or, to gain some execution speed, converted to Fortran or C.
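The analytically correct gradients the abstract mentions rest on applying the chain rule operation by operation. The toy forward-mode sketch below is my own illustration of that idea, not AMPL's actual mechanism:

```python
class Dual:
    """Minimal forward-mode AD value: carries a function value and its
    derivative together, so derivatives come out analytically correct."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)
    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, other):
        o = self._lift(other)
        # product rule applied at this single operation
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

x = Dual(3.0, 1.0)      # seed the derivative dx/dx = 1
y = x * x + 2 * x       # f(x) = x^2 + 2x; f(3) = 15, f'(3) = 8
```

Because every operation propagates both value and derivative, no symbolic manipulation of the whole expression is needed.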
Cited by 7 (3 self)
Thrackleation of graphs and global optimization for quadratically constrained quadratic programming are used to find the octagon with unit diameter and largest area. This proves the first open case of a conjecture of Graham (1975). Keywords: octagon, area, diameter, thrackleation, quadratic programming.
- Discrete and Computational Geometry
Cited by 2 (0 self)
Abstract. The maximal area of a polygon with n = 2m edges and unit diameter is not known when m ≥ 5, nor is the maximal perimeter of a convex polygon with n = 2 m edges and unit diameter known when m
≥ 4. We construct improved polygons in both problems, and show that the values we obtain cannot be improved for large n by more than c1/n 3 in the area problem and c2/n 5 in the perimeter problem,
for certain constants c1 and c2. 1.
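As a baseline for the maximal-area problem discussed in these abstracts, the regular n-gon with unit diameter (which these papers show is suboptimal for even n) has a closed-form area; a quick sketch, with the function name my own:

```python
import math

def regular_polygon_area_unit_diameter(n):
    """Area of the regular n-gon (n even) whose diameter -- the distance
    between opposite vertices, twice the circumradius -- equals 1, so
    the circumradius is 1/2 and the area is (n/2) r^2 sin(2 pi / n)."""
    r = 0.5
    return (n / 2) * r * r * math.sin(2 * math.pi / n)
```

For n = 8 this gives about 0.7071, while the optimal octagon reported in the work above is larger (roughly 0.7269), so the regular polygon is a strict lower bound for even n.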
Cited by 1 (0 self)
This paper describes a set of nonlinearly constrained optimization problems that can be used to test and develop optimization algorithms for nonlinearly constrained problems. We drew these test
problems from a wide variety of sources, including some of the already existing collections, such as the AMPL problems on Vanderbei's web site [15] and MINPACK-2 collection [3]. We chose the problems
for this collection that are related to various applications (e.g. fluid dynamics, optimal shape design, population dynamics) and/or interesting. In the following section 2, each problem has a short
derivation and description of the problem, providing the information in Table 1.1, followed by the general comments on the problem's specific features and difficulties. Then we provide the results of
the computational Table 1.1: Description of test problems
draw normal random numbers with fixed sum
"Roger Stafford" wrote in message <il8eig$jht$1@fred.mathworks.com>...
> "Claudia" wrote in message <il7mnh$j2s$1@fred.mathworks.com>...
> > Hi all,
> >
> > I would like to draw a given number k of random numbers from a normal
> > distribution with given parameters mu and sigma, and I would like the sum of this random numbers to be like another given parameter call it s.
> >
> > How can I do that in Matlab?
> >
> > Thank you in advance
> - - - - - - - - -
> Claudia, in my opinion the problem as you originally posed it does make sense if you interpret it as a conditional probability statement. With k mutually independent normal random variables, x1,
x2, ..., xk, you want to generate samples of them given that their sum is some specified amount. That is a perfectly well-defined statement in conditional probability theory. Of course the resulting
samples will no longer be mutually independent, but in fact each one will still be normally distributed, though with an altered variance.
> If we subtract the mean values from each variable we then have independent normal variables with mean zero and constant variance. It is known that if we apply any k by k unitary transformation to
these we will still have independent normal variables with mean zero and the same variance.
> Suppose we select a unitary matrix in which the top row has all values equal to 1/sqrt(k) as it is to be applied to a column of the k x's: y = U*x. This means that y1 will be equal to the sum of
the x's divided by sqrt(k). Now since the resulting y's are also mutually independent and have the same normal distribution, if we place the condition that y1 be equal to the specified constant sum
of the x's divided by sqrt(k), the original independence of the y's implies that the distribution of the remaining y's will be unaffected - placing a condition on one variable does not affect the
distribution of variables that are independent of it. Then if we apply the inverse transformation on the y's with y1 specified in this way and the remaining y's still independent and normal, the x's
we obtain will have the necessary conditional probability distribution given that their sum is specified.
> This tells us how to generate the desired x's. Set y1 equal to the above value and generate k-1 additional y's using 'randn' appropriately. An appropriate U can easily be created using matlab's
'null' function. Then apply the inverse of such a U to these y's and obtain the required x's. The x's will then possess the desired conditional probability distribution with fixed sum. At this point
the required means can then be added back again.
> I haven't spelled this out in detail because I am not sure that this is what you really want. If it is, one of us can no doubt lay out some specific matlab code that would do the job.
> Roger Stafford
Thanks a lot. That's exactly what I was looking for! I tried to write the code myself. What I don't understand is how to generate the matrix U...
Here my code:
% given parameters
mu =
sigma =
k = % count of variables
% in my case s = mu * k
s = % sum of variables
% first value of y: the centered sum divided by sqrt(k)
% (subtracting the mean is folded into s - k*mu)
y = (s - k*mu) / sqrt(k);
% generate k-1 independent N(0, sigma^2) random numbers
y(2:k, 1) = sigma * randn(k-1, 1);
% build the unitary matrix U: first row all 1/sqrt(k), remaining rows
% an orthonormal basis of that row's null space, from 'null'
A = ones(1, k) / sqrt(k);
U = [A; null(A)'];
% calculate the x's; since U is unitary, inv(U) = U'
x = U' * y;
% add the mean to get the normally distributed values with mean mu and sum s
m = x + mu;
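For readers outside Matlab, here is a pure-Python sketch of the same construction. The function name is my own, and instead of calling `null` it uses an explicit Helmert-style matrix: row 0 is all 1/sqrt(k) and the remaining rows form an orthonormal basis of its complement.

```python
import math
import random

def normal_with_fixed_sum(k, mu, sigma, s, rng=None):
    """Sample k values, each marginally normal, whose sum is exactly s,
    following the unitary-transform argument from the thread."""
    rng = rng or random.Random(0)
    # y[0] is the centered sum divided by sqrt(k); the remaining y's
    # are independent N(0, sigma^2) draws.
    y = [(s - k * mu) / math.sqrt(k)]
    y += [rng.gauss(0.0, sigma) for _ in range(k - 1)]
    # x = U' * y, accumulated row by row without forming U explicitly.
    x = [y[0] / math.sqrt(k)] * k
    for j in range(1, k):
        # Helmert row j: c at positions 0..j-1, -j*c at position j.
        c = 1.0 / math.sqrt(j * (j + 1))
        for i in range(j):
            x[i] += c * y[j]
        x[j] -= j * c * y[j]
    # Add the means back; the sum telescopes to exactly s.
    return [v + mu for v in x]
```

Every row beyond the first sums to zero, so the total is pinned by y[0] alone; the resulting draws are exchangeable but, as Roger notes, no longer independent.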
Roman Abacus
Home Abacus Learning
The Roman abacus was invented in ancient times, in the period from c. 300 B.C. to c. 500 A.D. It was properly termed the Roman hand abacus.
Development of Roman Abacus
There is no recorded discovery of the Roman hand abacus, nor any surviving specimen that would prove such a device existed. However, language, one of the most reliable sources of past culture and history, confirms that such a counting device did exist.
The Greek psephoi had a Roman counterpart, calculi. In Latin the word calx means pebble, so a calculus is a little stone used as a counter. This is evidence that the Romans had an actual counting board on which these calculi served as counters: the Roman hand abacus.
Structure of Roman Abacus
The Roman abacus was a metal plate in which the beads ran in slots, small enough to fit into the pocket of a modern shirt. It had seven longer grooves holding up to four beads each and seven shorter grooves holding a single bead each, used for whole-number counting. The rightmost two grooves were used for fractional counting.
In each column the lower groove represents units, while the bead in the upper groove counts five of that unit: five units, five tens, and so on. The groove values were marked with Roman numerals. The design is thus a bi-quinary coded decimal system.
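The bi-quinary scheme maps each decimal digit onto one column's beads: the upper bead is worth five and each lower bead is worth one. A minimal sketch, with the function name my own:

```python
def to_biquinary(n):
    """Decompose a non-negative integer into abacus columns, least
    significant digit first, as (upper_beads, lower_beads) pairs where
    an upper bead counts five and a lower bead counts one."""
    if n == 0:
        return [(0, 0)]
    cols = []
    while n > 0:
        d = n % 10
        cols.append((d // 5, d % 5))   # e.g. digit 7 -> one five, two units
        n //= 10
    return cols
```

For example, 347 becomes one five plus two units in the units column, four units in the tens column, and three units in the hundreds column.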
Computation using Roman Abacus
Computations on the Roman hand abacus were made by sliding the beads up and down in the grooves to denote the value of each column.
The upper slots of the Roman hand abacus contained a single bead and the lower slots contained four beads, with the exception of the two rightmost columns, which were used for mixed-base arithmetic unique to the Roman hand abacus.
The longer slot that contained five beads was used for counting twelfths (1/12) of a whole unit, which made the abacus very useful for Roman measures and currency.
The rightmost slots were used to enumerate fractions of an uncia: from top to bottom, halves, quarters, and twelfths of an uncia.
Absence of Zero and Negative Numbers in Roman Abacus
On a counting board or abacus, an empty row or column represents zero. The Romans used Roman numerals to record results, and since Roman numerals are all positive there was no need for a zero notation. Even so, the Romans understood the concept of zero as an empty place value in a column or row.
Roman merchants also needed to understand negative numbers in order to compare loans with investments and liabilities with assets. So even though the Roman hand abacus represents only positive numerals, the Romans worked with the concepts of zero and negative numbers in trade.
Well-trained PETs: Improving probability estimation trees
, 2004
Cited by 227 (1 self)
Receiver Operating Characteristics (ROC) graphs are a useful technique for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in
recent years have been increasingly adopted in the machine learning and data mining research communities. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls
when using them in practice. This article serves both as a tutorial introduction to ROC graphs and as a practical guide for using them in research.
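A common pitfall the tutorial addresses is summarizing an ROC graph with its area (AUC). The AUC equals the probability that a randomly chosen positive is scored above a randomly chosen negative, which the small sketch below (function name mine) computes directly as the Mann-Whitney statistic:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of positive/negative pairs in which the positive
    is scored higher, counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

The quadratic pairwise loop is fine for illustration; production code would sort once and use ranks.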
, 2003
Cited by 157 (0 self)
Receiver Operating Characteristics (ROC) graphs are a useful technique for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in
recent years have been increasingly adopted in the machine learning and data mining research communities. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls
when using them in practice. This article serves both as a tutorial introduction to ROC graphs and as a practical guide for using them in research. Keywords: 1
, 2002
Cited by 130 (4 self)
Tree induction is one of the most effective and widely used methods for building classification models. However, many applications require cases to be ranked by the probability of class membership.
Probability estimation trees (PETs) have the same attractive features as classification trees (e.g., comprehensibility, accuracy and efficiency in high dimensions and on large data sets).
Unfortunately, decision trees have been found to provide poor probability estimates. Several techniques have been proposed to build more accurate PETs, but, to our knowledge, there has not been a
systematic experimental analysis of which techniques actually improve the probability-based rankings, and by how much. In this paper we first discuss why the decision-tree representation is not
intrinsically inadequate for probability estimation. Inaccurate probabilities are partially the result of decision-tree induction algorithms that focus on maximizing classification accuracy and
minimizing tree size (for example via reduced-error pruning). Larger trees can be better for probability estimation, even if the extra size is superfluous for accuracy maximization. We then present
the results of a comprehensive set of experiments, testing some straightforward methods for improving probability-based rankings. We show that using a simple, common smoothing method--the Laplace correction--uniformly improves probability-based rankings. In addition, bagging substantially improves the rankings, and is even more effective for this purpose than for improving accuracy. We
conclude that PETs, with these simple modifications, should be considered when rankings based on class-membership probability are required.
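The Laplace correction recommended here replaces a leaf's raw class frequency n_i/N with (n_i + 1)/(N + C) for C classes, pulling small-sample estimates toward the uniform distribution; a minimal sketch, with the function name my own:

```python
def laplace_estimate(class_counts):
    """Laplace-corrected class probabilities at a tree leaf:
    p_i = (n_i + 1) / (N + C), where N is the leaf total and C the
    number of classes.  Pure leaves no longer claim probability 1."""
    total = sum(class_counts)
    c = len(class_counts)
    return [(n + 1) / (total + c) for n in class_counts]
```

A leaf with counts [9, 1] gets probabilities (10/12, 2/12) instead of (0.9, 0.1), and an empty leaf falls back to the uniform distribution.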
- In Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD) (2003
Cited by 117 (33 self)
Classification trees are widely used in the machine learning and data mining communities for modeling propositional data. Recent work has extended this basic paradigm to probability estimation trees.
Traditional tree learning algorithms assume that instances in the training data are homogenous and independently distributed. Relational probability trees (RPTs) extend standard probability
estimation trees to a relational setting in which data instances are heterogeneous and interdependent. Our algorithm for learning the structure and parameters of an RPT searches over a space of
relational features that use aggregation functions (e.g. AVERAGE, MODE, COUNT) to dynamically propositionalize relational data and create binary splits within the RPT. Previous work has identified a
number of statistical biases due to characteristics of relational data such as autocorrelation and degree disparity. The RPT algorithm uses a novel form of randomization test to adjust for these
biases. On a variety of relational learning tasks, RPTs built using randomization tests are significantly smaller than other models and achieve equivalent, or better, performance. 1.
, 2002
Cited by 109 (9 self)
For large, real-world inductive learning problems, the number of training examples often must be limited due to the costs associated with procuring, preparing, and storing the data and/or the
computational costs associated with learning from the data. One question of practical importance is: if n training examples are going to be selected, in what proportion should the classes be
represented? In this article we analyze the relationship between the marginal class distribution of training data and the performance of classification trees induced from these data, when the size of
the training set is fixed. We study twenty-six data sets and, for each, determine the best class distribution for learning. Our results show that, for a fixed number of training examples, it is often
possible to obtain improved classifier performance by training with a class distribution other than the naturally occurring class distribution. For example, we show that to build a classifier robust
to different misclassification costs, a balanced class distribution generally performs quite well. We also describe and evaluate a budgetsensitive progressive-sampling algorithm that selects training
examples such that the resulting training set has a good (near-optimal) class distribution for learning.
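The experimental setup the abstract describes, training with a chosen class distribution at a fixed budget of n examples, can be sketched in a few lines (names are my own, not from the paper):

```python
import random

def sample_with_class_distribution(pos, neg, n, pos_frac, seed=0):
    """Draw a training set of fixed size n whose positive fraction is
    pos_frac, rather than the naturally occurring class distribution."""
    rng = random.Random(seed)
    n_pos = round(n * pos_frac)
    # sample without replacement from each class pool
    return rng.sample(pos, n_pos) + rng.sample(neg, n - n_pos)
```

Sweeping pos_frac from 0 to 1 at fixed n is exactly the kind of comparison the study runs; pos_frac = 0.5 gives the balanced distribution it finds robust to varying misclassification costs.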
- In Proceedings of the Seventh International Conference on Knowledge Discovery and Data Mining , 2001
Cited by 96 (9 self)
In many machine learning domains, misclassification costs are different for different examples, in the same way that class membership probabilities are example-dependent. In these domains, both costs and probabilities are unknown for test examples, so both cost estimators and probability estimators must be learned. This paper first discusses how to make optimal decisions given cost and probability estimates, and then presents decision tree learning methods for obtaining well-calibrated probability estimates. The paper then explains how to obtain unbiased estimators for example-dependent costs, taking into account the difficulty that in general, probabilities and costs are not independent random variables, and the training examples for which costs are known are not representative of all examples. The latter problem is called sample selection bias in econometrics. Our solution to it is based on Nobel prize-winning work due to the economist James Heckman. We show that the methods we propose are s...
- In Proceedings of the Eighteenth International Conference on Machine Learning , 2001
Cited by 95 (4 self)
Accurate, well-calibrated estimates of class membership probabilities are needed in many supervised learning applications, in particular when a cost-sensitive decision must be made about examples
with example-dependent costs. This paper presents simple but successful methods for obtaining calibrated probability estimates from decision tree and naive Bayesian classifiers. Using the large and
challenging KDD'98 contest dataset as a testbed, we report the results of a detailed experimental comparison of ten methods, according to four evaluation measures. We conclude that binning succeeds
in significantly improving naive Bayesian probability estimates, while for improving decision tree probability estimates, we recommend smoothing by m-estimation and a new variant of pruning that we call curtailment.
, 2001
Cited by 82 (2 self)
In this article we analyze the effect of class distribution on classifier learning. We begin by describing the different ways in which class distribution affects learning and how it affects the
evaluation of learned classifiers. We then present the results of two comprehensive experimental studies. The first study compares the performance of classifiers generated from unbalanced data sets
with the performance of classifiers generated from balanced versions of the same data sets. This comparison allows us to isolate and quantify the effect that the training set's class distribution has
on learning and contrast the performance of the classifiers on the minority and majority classes. The second study assesses what distribution is "best" for training, with respect to two performance
measures: classification accuracy and the area under the ROC curve (AUC). A tacit assumption behind much research on classifier induction is that the class distribution of the training data should
match the "natural" distribution of the data. This study shows that the naturally occurring class distribution often is not best for learning, and often substantially better performance can be
obtained by using a different class distribution. Understanding how classifier performance is affected by class distribution can help practitioners to choose training data---in real-world situations
the number of training examples often must be limited due to computational costs or the costs associated with procuring and preparing the data. 1.
- CEDER WORKING PAPER #IS-01-02, STERN SCHOOL OF BUSINESS , 2001
Cited by 62 (16 self)
Tree induction and logistic regression are two standard, off-the-shelf methods for building models for classification. We present a large-scale experimental comparison of logistic regression and tree induction, assessing classification accuracy and the quality of rankings based on class-membership probabilities. We use a learning-curve analysis to examine the relationship of these measures to the size of the training set. The results of the study show several remarkable things. (1) Contrary to prior observations, logistic regression does not generally outperform tree induction. (2) More specifically, and not surprisingly, logistic regression is better for smaller training sets and tree induction for larger data sets. Importantly, this often holds for training sets drawn from the same domain (i.e., the learning curves cross), so conclusions about induction-algorithm superiority on a given domain must be based on an analysis of the learning curves. (3) Contrary to conventional wisdom, tree induction is effective at producing probability-based rankings, although apparently comparatively less so for a given training-set size than at making classifications. Finally, (4) the domains on which tree induction and logistic regression are ultimately preferable can be characterized surprisingly well by a simple measure of signal-to-noise ratio.
- Machine Learning , 2004
Cited by 60 (9 self)
In many cost-sensitive environments class probability estimates are used by decision makers to evaluate the expected utility from a set of alternatives. Supervised learning can be used to build class
probability estimates; however, it often is very costly to obtain training data with class labels. Active learning acquires data incrementally, at each phase identifying especially useful additional
data for labeling, and can be used to economize on examples needed for learning. We outline the critical features of an active learner and present a sampling-based active learning method for
estimating class probabilities and class-based rankings. BOOT- STRAP-LV identifies particularly informative new data for learning based on the variance in probability estimates, and uses weighted
sampling to account for a potential example's informative value for the rest of the input space. We show empirically that the method reduces the number of data items that must be obtained and
labeled, across a wide variety of domains. We investigate the contribution of the components of the algorithm and show that each provides valuable information to help identify informative examples.
We also compare BOOTSTRAP- LV with UNCERTAINTY SAMPLING, an existing active learning method designed to maximize classification accuracy. The results show that BOOTSTRAP-LV uses fewer examples to
exhibit a certain estimation accuracy and provide insights to the behavior of the algorithms. Finally, we experiment with another new active sampling algorithm drawing from both UNCERTAINTY SAMPLING
and BOOTSTRAP-LV and show that it is significantly more competitive with BOOTSTRAP-LV compared to UNCERTAINTY SAMPLING. The analysis suggests more general implications for improving existing active
sampling ...
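The variance-based scoring idea described in the abstract can be sketched as follows. This is a toy illustration, not the paper's actual BOOTSTRAP-LV algorithm: the nearest-neighbor probability estimator, the 1-D data, and all names here are assumptions made for the sketch; only the core idea — score unlabeled points by the variance of their class-probability estimates across bootstrap replicates — comes from the abstract.

```python
import numpy as np

def knn_prob(X_train, y_train, X_query, k=5):
    """Class-1 probability estimate: mean label of the k nearest
    labeled points (a deliberately simple stand-in estimator)."""
    d = np.abs(X_query[:, None] - X_train[None, :])   # 1-D distances
    nn = np.argsort(d, axis=1)[:, :k]                 # indices of k nearest
    return y_train[nn].mean(axis=1)

def bootstrap_variance_scores(X_lab, y_lab, X_unlab, n_boot=20, k=5, seed=0):
    """Score each unlabeled point by the variance of its probability
    estimate across bootstrap replicates of the labeled data."""
    rng = np.random.default_rng(seed)
    n = len(X_lab)
    probs = np.empty((n_boot, len(X_unlab)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)              # resample with replacement
        probs[b] = knn_prob(X_lab[idx], y_lab[idx], X_unlab, k)
    return probs.var(axis=0)                          # high variance = informative

# Toy 1-D data: class 1 tends to have larger x
rng = np.random.default_rng(1)
X_lab = rng.normal(0, 1, 40)
y_lab = (X_lab + rng.normal(0, 0.5, 40) > 0).astype(float)
X_unlab = np.linspace(-3, 3, 50)
scores = bootstrap_variance_scores(X_lab, y_lab, X_unlab)
```

Points near the decision boundary tend to receive unstable probability estimates across replicates and therefore higher scores, which is what an active learner would query next.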
PLEASEEEE HELPPPP!!! WILL GIVE MEDAL! 46. Find the value of x.
Aha.. These are similar triangles, because the marked sides (the sides with lines through them) are in the ratio 2:1. Since those sides are in ratio 2:1, so is every other pair of corresponding sides, so 20:(3x-5) = 2:1, which can be written as \[\frac{20}{3x-5}=\frac{2}{1}\] Multiply both sides by 3x-5 to get \[20=2(3x-5)=6x-10\] Divide both sides by the common factor 2: \[10=3x-5\] Add 5 to both sides: \[15=3x\] Divide both sides by 3. So x = 5
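The algebra above can be sanity-checked in a few lines of Python (a quick sketch):

```python
# Verify the similar-triangle proportion 20/(3x - 5) = 2/1 at x = 5
x = 5
assert 20 / (3 * x - 5) == 2  # ratio matches, so x = 5 is consistent

# Solving 20 = 2*(3x - 5) directly: 20 = 6x - 10  ->  x = 30/6
x_solved = (20 + 10) / 6
print(x_solved)  # 5.0
```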
Thank You Sooo Much! You're better than my teacher at explaining! Are you up for another?
Like this one?
It has to do with finding x on an angle.
Sure I'll have a look
What is the value of x?
Yeah you should probably post a new question, but I'll answer anyway... These are vertically opposite angles, which are ALWAYS equal. So just solve 142 = 2x + 24. Divide both sides by 2: 71 = x + 12. Subtract 12: 59 = x
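A quick numeric check of that solve (a sketch in Python):

```python
# Vertically opposite angles are equal: 142 = 2x + 24
x = (142 - 24) / 2
print(x)  # 59.0
assert 2 * x + 24 == 142  # the angle equation holds
```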
Ok, I just have 2 more. Would you like me to post a new question?
Yeah. That'd be great. Sometimes people get pissed off if you don't
Lol ok.
An Analysis of Students Performance Using Genetic Algorithm
T. Miranda Lakshmi^1, A. Martin^2, V. Prasanna Venkatesan^2
^1Department of Computer Science, Bharathiyar University, Coimbatore, India
^2Department of Banking Technology, Pondicherry University, Puducherry, India
Genetic algorithms play a significant role as search techniques for handling complex spaces in many fields such as artificial intelligence, engineering, and robotics. Genetic processes, modeled on the natural evolution of populations, have been fairly successful at solving problems, producing increasingly optimized solutions from generation to generation. Here this approach is applied to the quantitative analysis of students' data to identify the factor with the most impact on their performance in the curriculum. The results will help educational institutions improve the quality of teaching after evaluating the marks achieved by students in their academic careers. This student analysis model considers quantitative factors such as theoretical, mathematical, practical, departmental and other-departmental marks, and finds the most impactful factor using a genetic algorithm.
Keywords: students’ performance, quantitative factors, genetic algorithm, influencing parameter, students evaluation results
Journal of Computer Sciences and Applications, 2013 1 (4), pp 75-79.
DOI: 10.12691/jcsa-1-4-3
Received December 31, 2012; Revised May 30, 2013; Accepted May 31, 2013
© 2013 Science and Education Publishing. All Rights Reserved.
Cite this article:
• Lakshmi, T. Miranda, A. Martin, and V. Prasanna Venkatesan. "An Analysis of Students Performance Using Genetic Algorithm." Journal of Computer Sciences and Applications 1.4 (2013): 75-79.
1. Introduction
In the present-day educational system, performance and career growth are determined by assessment and examination achievements. Assessment is carried out through curricular activities such as class tests, viva, seminars, assignments, general proficiency, attendance, lab work and, finally, semester exams. The quantitative factors considered are attendance, cumulative GPA (CGPA), marks obtained in theoretical, mathematical, elective and departmental papers, knowledge in learning, understanding capability, communication, and so on; a real-coded genetic algorithm is then used to find the most influential of the selected factors.
To formulate the equation, several important subjects and their marks are combined; their priority values play the major role in the sensitivity analysis. Each factor is added to the equation along with its parametric weight, and the output of the equation is taken as the overall performance value. According to fixed passing criteria, performance is then classified into good, average and poor categories. The real-coded genetic algorithm is applied in RGSPAT (Real Genetic Student Performance Analysis Tool), which helps identify where the variations occur. From the result, attention can be focused on potential problems in the course. Educators can also use this information to guide their methods of evaluation and to implement curriculum changes.
The remainder of this paper is organized as follows: section 2 describes several techniques used in the evaluation of students' performance; section 3 discusses the design principles of the various components of our student performance analysis model; section 4 describes the application of the real-coded genetic algorithm to find the most important quantitative feature; section 5 covers the working model of our analysis tool; and section 6 concludes the paper with the main outcomes.
2. Prior Research
This section discusses various techniques used to evaluate and predict the performance of employees, staff, teachers and students, and an implementation of a real-coded genetic algorithm in banking applications. They are as follows.
The most impactful banking features in a bankruptcy model were analyzed and the results reported. Deakin's bankruptcy model and its features, such as net income, cash assets, current assets, sales and total assets, were considered. On applying a genetic algorithm, the most important feature in the Deakin model was found to be the ratio cash assets / total assets, which is crucial for predicting bankruptcy with 96% accuracy (Martin et al, 2011). A similar approach is adopted here to find the most impactful parameter in students' results.
A Fuzzy Expert System (FES) for student academic performance evaluation, based on fuzzy logic techniques with a suitable fuzzy inference mechanism and associated rules, was tested with the marks obtained by 20 students in semester-1 and semester-2 examinations, both inputs using the same triangular membership functions (Ramjeet et al, 2011). The academic performances were taken and the results compared with an existing statistical method. Similarly, an FES was developed for evaluating teachers' performance in teaching activity, which is especially relevant for academic institutions: it helps define efficient plans to guarantee the quality of teachers and of the teaching-learning process by considering students' feedback, results, students' attendance, the teaching-learning process and the academic development of teachers (Chaudhari et al, 2012).
Various factors likely to influence a student's performance have been identified, such as examination scores, age on admission, parental background and gender. An artificial neural network (ANN) showed potential for enhancing the effectiveness of a university admission system. The model was developed from selected input variables in the pre-admission data of five different sets of university graduates. It achieved an accuracy of over 74%, which shows the potential efficacy of artificial neural networks as a prediction tool and a selection criterion for candidates seeking admission to a university (Oladokun et al, 2011). An adaptive neuro-fuzzy technique was introduced for the prediction of student performance (Osman and Bahattin, 2009). Here, assessing students' achievements in the previous year to evaluate the learning process and performance of students is considered more meaningful than referring to marks individually. Previous and current student performance data kept in a computer system are an excellent source for evaluating students' performance as a group, their learning level, and the quality of education. This is illustrated with students' achievements, academic performances and the linguistic grades granted, and the outcomes of the SAP are given for statistical and neuro-fuzzy modeling approaches with their linguistic values.
A methodology was proposed that derives performance prediction indicators to deploy a simple student performance assessment and monitoring system, focusing on monitoring students' continuous assessment (tests) and examination scores in order to predict their final achievement status upon graduation. Based on various data mining techniques (DMT) and the application of machine learning processes, rules are derived that enable the classification of students into their predicted classes (Emmanuel, 2007). The deployed prototype integrates measuring, 'recycling' and reporting procedures in the new system to optimize prediction accuracy. A fuzzy qualitative classification system for academic performance evaluation using the link analysis methodology was also proposed (Tossapon et al, 2011). Unlike the conventional approach, where fuzzy rules are used to encode information provided by training data, the proposed model considers the involved variables, classes and their relations as elements of a social network that can be modeled as a weighted graph. The resulting linguistic descriptions of grade-specific likelihood are useful for further revisions of the formal partition of performance levels and of students' improvement.
A genetic algorithm combined with decision trees has been used in distance learning to analyze student academic performance. These concepts form the basis of the GATREE system (Papagelis and Kalles, 2001), built on top of the GALIB library (Wall, 1996). The genetic operators on the tree representations are relatively straightforward: a mutation may modify the test attribute at a node or the class label at a leaf, and a crossover may substitute whole parts of one decision tree with parts of another. The system is used to assist in continuously monitoring a student's performance with respect to the possibility of passing the final exam.
From this literature survey, only a few studies have compared approaches for analyzing students' quantitative performance data, and there is no considerable work on an equation model for finding the parameter that best explains student performance. This research therefore analyzes the impact of quantitative parameters on students' academic performance.
3. Genetic Approach for Students Performance Analysis
Genetic algorithms are now widely applied in science and engineering as adaptive algorithms for solving practical problems. It is generally accepted that GAs are particularly suited to multidimensional global search problems where the search space potentially contains multiple local minima. Unlike other search methods, correlation between the search variables is not generally a problem. A real-coded genetic algorithm works by modeling the parameters of a problem as strings of real values. The following figure depicts the various components of the RGSPAT model.
The proposed system starts from the student data to be analyzed and ends with the identification of the most important parameter, obtained through the following processes:
● Analyzing the quantifiable performance information in the educational system, such as attendance, internal assessment marks, project marks, previous semester marks, seminar, general proficiency, and the papers considered important in the course.
● From the collected parameters, constructing an equation that contains all the parameter values together with their weights.
● Applying the real-coded genetic algorithm to the equation.
● Carrying out the mutation and crossover operations.
● Identifying the best parameter once all parameters have been analyzed by the real-coded genetic algorithm.
The quantitative factors considered in the analysis of students' performance are given below:
● PSM Previous Semester Marks/Grade obtained in course
● IA Internal Assessment marks
● SEM Seminar Performance obtained. In each semester seminar are organized to check the performance of students.
● ASS Assignment marks. In each semester two assignments are given to students by each teacher.
● GP General Proficiency performance. Like seminar, in each semester general proficiency tests are organized
● ATT Attendance of Student. Minimum 70% attendance is compulsory to participate in the End Semester Examination, though in special cases students with low attendance may also participate for genuine reasons. Attendance is divided
● PM Project Work. Completion of the full project with report, presentation, system model.
● T Theoretical subject marks obtained in all semesters
● M Mathematical subject marks obtained in all semesters
● E Elective subject marks obtained in all semesters
● D Departmental subject marks obtained in all semesters
● O Other departmental subject marks in all semesters
● PV Performance value obtained from the equation as the result
The RGSPAT tool developed here for predicting student performance requires the quantitative attributes of a student, i.e. measurable variables such as internal marks, seminar marks, attendance and daily test marks, as listed above.
The equation used to analyze and categorize student performance takes as input parameters attendance, internal assessments, assignments, general proficiency, and theory, mathematical, elective and departmental subject marks, as mentioned above. The parameter that shows the most variation in the evaluation of performance by the algorithm is chosen as the optimum parameter. Having found the result for each student, we can determine the most influential parameter overall for student management, and thereby take the necessary steps to improve the overall performance of the students.
4. Most Important Parameter Using RGA
This analysis is concerned with finding the most important attribute affecting student performance. As the aforementioned properties of RGAs are highly advantageous, the RGA for RGSPAT is designed using crossover and mutation. For this experiment we selected quantitative factors among students in a college/school. The real-valued genetic algorithm (RGA) uses real values directly as the parameters of the offspring in the population, without performing encoding and decoding before calculating the fitness values of individuals. The performance analysis for the RGSPAT model is given by the following equation.
The real-coded genetic algorithm for performance analysis works as follows:
a. [Start] Generate a random population of attributes as chromosomes.
b. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.
c. [New population] Create a new population of attributes by repeating the following steps until the new population is complete:
i. [Selection] Select two parent chromosomes from the population according to their fitness (satisfying the fitness function).
ii. [Crossover] With crossover probability Pc, cross over the parents to form new offspring (children). For real values, linear crossover is performed: ax + (1-a)y, where a is the priority value in the equation and x, y are real values.
iii. [Mutation] With mutation probability Pm, mutate the new offspring at each locus (position in the chromosome). For real-valued mutation, add a random value: x + N(0, 0.1).
iv. [Accepting] Place the new offspring in the new population.
d. [Replace] Use the newly generated population for a further run of the algorithm.
e. [Test] If the end conditions are satisfied, stop and return the best parameter in the current population.
f. [Loop] Go to step b.
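The loop above can be sketched in code. This is a minimal illustration only: the toy fitness function, population size, tournament selection and fixed generation count are assumptions made for the sketch, not the paper's actual RGSPAT settings; only the linear crossover ax + (1-a)y and the Gaussian mutation x + N(0, 0.1) come from the steps above.

```python
import random

def real_ga(fitness, n_genes, pop_size=30, pc=0.8, pm=0.1,
            a=0.5, generations=50, seed=0):
    """Minimal real-coded GA: linear crossover ax + (1-a)y and
    Gaussian mutation x + N(0, 0.1), following the steps above."""
    rng = random.Random(seed)
    # [Start] random population of real-valued chromosomes
    pop = [[rng.uniform(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # [Selection] binary tournament on fitness
        def pick():
            c1, c2 = rng.sample(pop, 2)
            return c1 if fitness(c1) >= fitness(c2) else c2
        new_pop = []
        while len(new_pop) < pop_size:
            x, y = pick(), pick()
            if rng.random() < pc:                    # [Crossover] linear
                child = [a * xi + (1 - a) * yi for xi, yi in zip(x, y)]
            else:
                child = list(x)
            child = [g + rng.gauss(0, 0.1) if rng.random() < pm else g
                     for g in child]                 # [Mutation] Gaussian
            new_pop.append(child)                    # [Accepting]
        pop = new_pop                                # [Replace]
    return max(pop, key=fitness)                     # [Test] best individual

# Toy fitness: prefer chromosomes whose genes are close to 0.8
fit = lambda c: -sum((g - 0.8) ** 2 for g in c)
best = real_ga(fit, n_genes=5)
print(len(best))  # 5
```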
Initially, a population of chromosomes, each representing a potential solution to the problem at hand, is generated randomly, and each is evaluated by computing its fitness. The next generation, of the same size, is created by selecting the fitter individuals from this population and applying genetic operators such as crossover and mutation to them. Mutation creates a new individual by making a random change to an old one, whereas crossover creates new individuals by combining parts from multiple individuals: classic mutation randomly alters a single gene, while crossover exchanges genetic material between two or more parents. This completes one generation; after repeating this procedure for a number of generations, selective pressure causes the algorithm to converge and yield a better solution.
5. Results and Discussion
Table 2 illustrates a sample iterative run of the genetic algorithm finding the best parameter in the population. Selected iterated values are given in Table 2, from which the fitness value is chosen; accordingly, the most influential parameter can be identified. In this application, we have chosen a performance fitness threshold of > 2.
The following diagram illustrates the working of the RGSPAT model for analyzing student performance in mathematical, technical, other-departmental and theoretical papers. The respective subject marks are loaded and their corresponding influence on students is shown in the graph. In Figure 2, the mathematical subject marks are plotted, with a maximum value of 100 on the y axis; the x axis represents the corresponding quantitative factors, numbered. Each subject's marks are analyzed from the semester values and results are produced separately to find the most influential parameter, as given below.
For Figure 2 we selected 120 students of the IT department, and the results are used to determine the most important parameter. The graph shows that parameters 1 and 4 have a high impact on student performance (1 indicates mathematical subjects and 4 theoretical subjects); those two subjects should therefore be given high importance to reach better performance. It is also observed from the graph that the next most important subjects are practical, other-departmental, technical and internal assessments, respectively. This system can thus be implemented in the educational environment to evaluate student results and to find the importance given to each subject by analyzing end results. Further research will implement a fuzzy clustering technique to group the students on the basis of their performance on the selected parameters.
6. Conclusion
A real-coded genetic process has thus been successfully implemented to find the most influential parameter affecting performance in the educational system. It offers a better solution for classifying, analyzing and evaluating quantitative factors during the course year, and thereby predicting students' overall performance before the end-of-semester examination. In this genetic process, by changing a single parent characteristic we are able to find the important ratios. This way of using a real-coded genetic algorithm can be applied in other areas to find the most influential ratios or parameters and predict performance effectively. This research outlines the best features for developing a student performance model that can analyze and predict an individual's performance efficiently. It can also act as a self-assessment tool for students, showing them their standing and the areas in which to improve. We thus provide a workable tool for predicting student performance using a genetic algorithm.
[1] A. Martin, V. Prasanna Venkatesan et al, “To find the most impact financial features for bankruptcy model using genetic algorithm”, International Conference on Advances in Engineering and
Technology, (ICAET-2011), May 27-28, 2011.
[2] Ramjeet Singh Yadav, “Modeling Academic Performance Evaluation Using Soft Computing Techniques: A Fuzzy Logic Approach”, International Journal on Computer Science and Engineering (IJCSE), Vol. 3
No. 2 Feb 2011.
[3] O.K Chaudhari, P.G khot, “Soft computing model for academic performance of teachers using fuzzy logic”, British journal of applied science and technology, 2(2):213-226, 2012.
[4] V.O. Oladokun, “Predicting Students’ Academic Performance using Artificial Neural Network: A Case Study of an Engineering Course”, The Pacific Journal of Science and Technology, Volume 9. Number
1. May-June 2008 (Springer).
[5] Osman Taylan, Bahattin Karagozog, "An adaptive neuro-fuzzy model for prediction of student's academic performance", Computers & Industrial Engineering, 57 (2009) 732-741.
[6] Emmanuel N. Ogor, "Student Academic Performance Monitoring and Evaluation Using Data Mining Techniques", Fourth Congress of Electronics, Robotics and Automotive Mechanics, 2009.
[7] Tossapon Boongoen, Qiang Shen and Chris Price, "Fuzzy qualitative link analysis for academic performance evaluation", International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, Vol. 19, No. 3 (2011) 559-585.
[8] Dimitris Kalles, Analyzing student performance in distance learning with genetic algorithms and decision trees, Hellenic Open University, Greece.
[9] Harvinder Singh Atwal, Predicting Individual Student Performance by Mining Assessment and Exam Data, 2010.
[10] Aavo Luuk, Kersti Luuk, Predicting students' academic performance in Aviation College from their admission test results, 2011.
[11] Mukta Paliwal, Usha A. Kumar, A study of academic performance of business school graduates using neural network and statistical techniques, Expert Systems with Applications 36 (2009) 7865-7872.
Physics Forums - View Single Post - Introduction To Loop Quantum Gravity
It is suggestive that you mention Ashtekar variables, and also mention the variables of BF theory (which Freidel tries to reform us so that we write EF thinking that it makes better sense than BF).
Let me tell you what my sense of direction tells me. I listened to a (January, Toronto?) recorded talk by Vafa and I heard something ring in his voice when he said "form theories of gravity"----and I went back and looked at the current paper by Dijkgraaf, Gukov, Neitzke, Vafa just to make sure. There was a sense of relief; it represents a hopeful general idea for him.
From my perspective, Ashtekar variables and BF are foremost examples of "form theories", and there could be modifications and other "form theories" we don't know about yet. There is a mental compass needle pointing in this general direction.
If we want to play the game of making verbal (non-math) definitions for an intelligent reader, then the ORDER in which we define the concepts is important, and also the GOAL, where we are going. I think the direction is that we want to get to where we can say what a "form defined on a manifold" is, or, to be more official, we should always say "differential form" defined on a "differentiable manifold". So we need to say what the "tangent vectors" are at a point of a manifold.
The obstacle here is that these concepts are unmotivated, have too many syllables if you try to speak correctly, and seem kind of arbitrary and technical.
So I am thinking like this. The thing about tangent vectors and forms is that they are BACKGROUND INDEPENDENT. All that means, basically, is that you don't have to have a metric. A background independent approach to any kind of physics simply means in practice that you start with a manifold as usual (a "continuum", as you say Einstein liked to say) and you refrain from giving yourself a metric.
Well, how can you do physics on a manifold that (at least for now, at the beginning) has no metric? What kind of useful objects can you define without a metric? Well, you do have infinitesimal directions, because you have coordinates and you can take derivatives at any point, so at a microscopic level you do have a vector space of directions-----call them TANGENTS. And on any vector space one can readily define the dual space of linear functionals of the vectors-----things that eat the vectors up and give a number. The dual space of the tangents is called the FORMS.
And the forms don't have to be number-valued; they can be "matrix" valued: one form can eat a tangent vector and produce therefrom not simply one number but 3 numbers or 4 numbers, or a matrix of numbers. But that is not quite right; let's say it eats the tangent vector and produces not a number but an element of some Lie algebra. Then it is an ALGEBRA-VALUED form.
Now this already seems disgustingly complicated, so let's see why it might appeal to Cumrun Vafa, arguably the world's top string theorist still functioning as such.
I think it appeals to Cumrun Vafa because it is a background independent way to do physics. that is essentially what "form theory of gravity" means.
And string theorists have been held up for two decades by not having a background independent approach. And it JUST HAPPENS that the Ashtekar variables are forms, and the B and F of BF theory are forms, and (no matter what detractors say) Loop has been making a lot of progress lately, and Vafa says "hey, this might be the way to get background independence" and he creates a new fashion called "topological M-theory", which is a way of focusing on forms and linking up with "form theories of gravity".
So maybe the point is not that this or that particular approach is good or not, but simply that one should work with a manifold sans metric, and do physics with the restricted set of tools that can
be defined without a metric. And that means that, painfully abstract as it sounds, nightcleaner has to understand 3 things:
1. the tangent space at a point of a manifold is a vectorspace
2. any vectorspace has a dual space (the things that eat the vectors) and that dual space IS ITSELF a vectorspace.
3. the dual of the tangentspace is the forms and you can do stuff with forms.
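As a concrete sketch of points 1-3 (a standard textbook example, not taken from the post itself): on a 2-dimensional patch with coordinates (x, y),

```latex
% A tangent vector at a point p, and a 1-form (dual vector) at p:
v = a\,\partial_x + b\,\partial_y \in T_pM, \qquad
\omega = f\,dx + g\,dy \in T_p^{*}M
% The form eats the vector and returns a number, using the dual-basis
% rules dx(\partial_x)=1,\ dx(\partial_y)=0 (and similarly for dy):
\omega(v) = f\,a + g\,b
```

No metric appears anywhere in this pairing, which is the whole point: the form is a machine that eats the vector and spits out a number using only the manifold's coordinate structure.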
Like, you can multiply two forms together (the cute "wedge" symbol), and you can construct more complicated forms that eat two vectors at once, or that produce something more jazzy in place of a plain number.
The hardest thing in the world to accept is that this is not merely something mathematicians have invented for fun, a genteel and slightly exasperating amusement. The hardest thing to accept is that nature wants us to consider these things, because they are practically the only thing you can do with a manifold that doesn't require a metric!
So instead of talking about BF theory or Ashtekar variables in particular, my compass is telling me to wait for a while and see if anyone is interested in "forms on a manifold", that is to say, in the clunky polysyllabic language, "differential forms defined on the tangent space of a differentiable manifold". UGH.
Also, selfAdjoint, you mentioned the word "bundle". Bundles may be going too far, but they are in this general area of discussion, and there is also "connection". A "connection" is a type of form, so if you understand "form" then you can maybe understand connection.
There is also this extremely disastrous thing that "form" is a misleading term. In real English it means "shape", but a differential form is not a shape at all. Richard, being a serious fan of words, will insist that it means shape. But no. Some Frenchman happened accidentally to call a machine that eats tangent vectors and spits out numbers a "form", and so that is what it is called, even though it is in nowise a shape. It is more like an income-tax form than a shape-form. And it is not like an income-tax form either.
And as a final ace in the hole, we can always say that Gen Rel is an example of a physical theory defined on a manifold without a metric. The metric is a variable that you eventually get by solving the equations: you start without a metric, you do physics, and you eventually get a metric.
If there is any useful sense to Kuhn-talk, then this is a "paradigm", and when Vafa has a good word to say about "form theories of gravity", this might be the kind of softening that accompanies a shift in perspective.
Summary: Efficient broadcasting from several originators in a d-dimensional grid
A. Averbuch*  I. Gaber†  Y. Roditty‡
Given an undirected graph representing a network of processors, and source nodes (called originators) containing messages that must be broadcast to all other nodes, we want to find a scheme that accomplishes the task in as small a number of time steps as possible. The general problem is known to be NP-complete [11] unless the structure of the graph is known in advance. Let G be a d-dimensional grid, and let f(d, k) be the time needed to broadcast one message in G from k different originators.
Our main results are:
1. Exact values of f(1, k)
2. Optimal bounds on f(2, k)
3. Bounds on f(d, k).
The results are extensions of earlier results of Farley and Hedetniemi [5], VanScoy and Brooks [12] and Roditty and Shoham [10].
* Department of Computer Science, School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel.
Wissenschaft und Deutsch (on Hiatus)
The Möbius strip or Möbius band (UK /ˈmɜrbiəs/ or US /ˈmoʊbiəs/; German: [ˈmøːbi̯ʊs]), also Mobius or Moebius, is a surface with only one side and only one boundary component. The Möbius strip has the
mathematical property of being non-orientable. It can be realized as a ruled surface. It was discovered independently by the German mathematicians August Ferdinand Möbius and Johann Benedict Listing
in 1858.^[1]^[2]^[3]
A model can easily be created by taking a paper strip and giving it a half-twist, and then joining the ends of the strip together to form a loop. In Euclidean space there are two types of Möbius
strips depending on the direction of the half-twist: clockwise and counterclockwise. That is to say, it is a chiral object with “handedness” (right-handed or left-handed).
The Möbius band (equally known as the Möbius strip) is not a surface of only one geometry (i.e., of only one exact size and shape), such as the half-twisted paper strip depicted in the illustration
to the right. Rather, mathematicians refer to the (closed) Möbius band as any surface that is topologically equivalent to this strip. Its boundary is a simple closed curve, i.e., topologically a
circle. This allows for a very wide variety of geometric versions of the Möbius band as surfaces each having a definite size and shape. For example, any closed rectangle with length L and width W can
be glued to itself (by identifying one edge with the opposite edge after a reversal of orientation) to make a Möbius band. Some of these can be smoothly modeled in 3-dimensional space, and others
cannot (see section Fattest rectangular Möbius strip in 3-space below). Yet another example is the complete open Möbius band (see section Open Möbius band below). Topologically, this is slightly different from the more usual — closed — Möbius band, in that any open Möbius band has no boundary.
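A standard parametrization makes the half-twist concrete. The sketch below is my own illustration (not from the post; the unit radius and unit half-width are arbitrary choices): it maps (u, v), with u in [0, 2π] and v in [-1, 1], onto a Möbius band in 3-space and checks numerically that going once around the loop (u → u + 2π) lands you back on the band with the width coordinate reversed, which is exactly the gluing that makes the surface one-sided.

```python
import math

def mobius_point(u, v, R=1.0):
    """Point on a Mobius band with centerline radius R.

    The factor u/2 is the half-twist: the cross-section rotates by pi
    (not 2*pi) as u runs once around the circle.
    """
    r = R + (v / 2.0) * math.cos(u / 2.0)
    return (r * math.cos(u), r * math.sin(u), (v / 2.0) * math.sin(u / 2.0))

# Going once around the loop flips the width coordinate:
p = mobius_point(2 * math.pi, 0.5)
q = mobius_point(0.0, -0.5)
same = all(abs(a - b) < 1e-12 for a, b in zip(p, q))
print(same)  # True: (u, v) and (u + 2*pi, -v) name the same point
```

This identification of (u, v) with (u + 2π, -v) is the "half-twist and glue" construction described above, written as coordinates.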
read more, this is basically explaining life right now
Solve the following inequality: -4y + 5 ≥ 17. Would the answer be y ≤ -3 or y ≥ -3? Because I'm not sure.
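For what it's worth, dividing both sides by -4 reverses the inequality sign, so the answer is y ≤ -3. A quick numerical check (my own sketch, not part of the thread) confirms it:

```python
def satisfies(y):
    # The original inequality: -4y + 5 >= 17
    return -4 * y + 5 >= 17

# -4y + 5 >= 17  ->  -4y >= 12  ->  y <= -3 (sign flips when dividing by -4)
print(all(satisfies(y) for y in (-3, -4, -100)))  # True
print(any(satisfies(y) for y in (-2, 0, 5)))      # False
```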
Einstein indicial notation problem
April 15th 2013, 02:33 PM
Einstein indicial notation problem
I am new to the forum so I don't know if I am posting in the right section :)
I want to prove the following equality in index notation :
S[i,k]S[k,j]S[i,l]S[l,j]=1/2*S[i,j]S[i,j]S[k,l]S[k,l] .
S is a symmetric second-order tensor with property S[i,i] = 0 .
I have been struggling with this for a week ,any help is appreciated, thanks in advance :)
April 15th 2013, 05:38 PM
Re: Einstein indicial notation problem
Hey milad.
If something is symmetric then it means that S[i,k] = S[k,i]. Did you try and expand the summation in an explicit way and not in the shorthand Einstein notation?
April 16th 2013, 12:03 AM
Re: Einstein indicial notation problem
I expanded the equation using Matlab and I got:
S[i,k]S[k,j]S[i,l]S[l,j] - 1/2*S[i,j]S[i,j]S[k,l]S[k,l] = s[i,i]*(...)
which means that if we apply the property S[i,i] = 0 together with symmetry, they are equal.
I also checked this numerically for over 10 different sets of numbers: if the values have symmetry and S[i,i] = 0, then the two sides are equal.
April 16th 2013, 01:24 AM
Re: Einstein indicial notation problem
Is that all you need to do or do you need to prove it symbolically?
April 16th 2013, 01:58 AM
Re: Einstein indicial notation problem
I need to prove it symbolically :)
April 16th 2013, 02:16 AM
Re: Einstein indicial notation problem
The first thing I would do is to convert from Einstein notation to actually sigma sums.
April 16th 2013, 02:19 AM
Re: Einstein indicial notation problem
Tried that, I couldn't solve it :(
April 16th 2013, 08:58 AM
Re: Einstein indicial notation problem
What do you mean by $S_{[i,j]}$? The component $(i,j)$ of $S$ or the antisymmetrization $S_{[i,j]}=\frac{1}{2}(S_{ij}-S_{ji})$
April 16th 2013, 09:05 AM
Re: Einstein indicial notation problem
I mean (i,j) component . antisymmetrization of a symmetric tensor will be zero .
April 16th 2013, 10:55 AM
Re: Einstein indicial notation problem
By S[i,i] = 0 do you mean diagonal elements are zero or sum of diagonal elements is zero?
April 16th 2013, 11:01 AM
Re: Einstein indicial notation problem
The sum of the diagonal elements is zero.
April 16th 2013, 11:51 AM
Re: Einstein indicial notation problem
What do you mean by different set of numbers? There are no free indices in your expression, there is one set of numbers only, say the range of values that $i,j,k,l$ can assume.
April 16th 2013, 12:00 PM
Re: Einstein indicial notation problem
I meant I checked the expression numerically: I assigned different sets of numbers to the elements of the tensor S and checked the answers. The two sides of the equation are equal if the values assigned to S have those two properties (symmetry, S[i,i] = 0). :)
April 18th 2013, 10:50 AM
Re: Einstein indicial notation problem
I solved this and wrote it up in Word because of all the indices. I did a copy-paste into Latex Help and everything copied correctly. When I tried to put it in this thread, none of the subscripts and superscripts copied over, and doing it all manually is a nightmare. So I copied the solution to Latex Help and give the link here. I would appreciate it if someone put the solution in this post directly. OK, did a copy-paste from the Latex post and that worked.
OP: I want to prove the following equality in index notation :
S[i,k]S[k,j]S[i,l]S[l,j]=1/2*S[i,j]S[i,j]S[k,l]S[k,l] .
let S = |s[ij]| and S[ij] ≡ s[ij]
Claim: S^4[ii] = ½ (S^2[jj])(S^2[kk]) *
S is symmetric, so there is a coordinate system in which S is diagonal with real components. In this coordinate system:
2-dim proof (3-dim is the same with more algebra; you can't get beyond this point with the summation convention):
s[11] + s[22] = 0, since the sum of the diagonal elements is a tensor invariant
s[11]^2 + s[22]^2 = -2s[11]s[22]
(s[11]^2 + s[22]^2)(s[11]^2 + s[22]^2) = 4s[11]^2s[22]^2
s[11]^4 + s[22]^4 = 2s[11]^2s[22]^2 = ½ (s[11]^2 + s[22]^2)(s[11]^2 + s[22]^2)
S^4[ii] = ½ (S^2[jj])(S^2[kk])
The proof holds in any coordinate system because the contraction of a tensor is a tensor.
* s[ik]s[kj]s[il]s[lj] = S^2[ij]S^2[ij] = S^4[ii]
(A[ij]A[jk] = A^2[ik], A[ij]A[ji] = A^2[ii])
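The identity is easy to sanity-check numerically in three dimensions. The sketch below (my own Python, not part of the thread; the helper names are arbitrary) builds a random symmetric matrix, removes its trace, and compares the full contraction S[i,k]S[k,j]S[i,l]S[l,j] against ½ (S[i,j]S[i,j])(S[k,l]S[k,l]):

```python
import random

def contract4(S, n=3):
    # S[i][k] S[k][j] S[i][l] S[l][j], summed over all four indices
    return sum(S[i][k] * S[k][j] * S[i][l] * S[l][j]
               for i in range(n) for j in range(n)
               for k in range(n) for l in range(n))

def contract2(S, n=3):
    # S[i][j] S[i][j], summed over both indices
    return sum(S[i][j] * S[i][j] for i in range(n) for j in range(n))

rng = random.Random(42)
n = 3
A = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]  # symmetrize
tr = sum(S[i][i] for i in range(n)) / n
for i in range(n):  # make S traceless: subtract tr/n from each diagonal entry
    S[i][i] -= tr

lhs = contract4(S)
rhs = 0.5 * contract2(S) ** 2
print(abs(lhs - rhs) < 1e-9)  # True
```

Any symmetric traceless matrix will do here; the random seed just makes the run reproducible.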
April 18th 2013, 11:49 AM
Re: Einstein indicial notation problem
Thanks. I will try to do the same in 3D.
How to Figure Your Odds of Holding a Winning Lottery Ticket
If you buy one random drawing lottery ticket, you have a chance of winning. What are your chances of winning though? If you buy more than one lottery ticket, how much does this improve your odds of
having at least one winning ticket? In this article, you'll learn how to answer these questions and add a little bit of clarity to what is, to most people, a shot in the dark. The first method below
is the most accurate, but it takes some time. The second method presented is simpler and faster, and will approximate the odds closely enough for most people--in fact the answer is the same for the
example below.
1. Gather the following information:
□ T = How many lottery tickets will be sold
□ W = How many tickets sold will be winners
□ P = How many tickets you are planning to buy
2. Write down your first fraction: (T-W)/T. For the moment, don't simplify the fraction by reducing it to lowest terms.
□ For example, suppose the lottery will sell 23,000,000 tickets, there will be 1,000 winning tickets, and you are planning to buy 6 tickets. Then T = 23000000, W = 1000, and P = 6. The first
fraction will be (23000000-1000)/23000000 = 22999000/23000000.
3. Simplify the fraction (punch 22999000 divided by 23000000 into a calculator to get a decimal, in this case 0.99995652173913043478260869565217) and jump to the Subtract step if you only bought one ticket. If you bought more than one ticket, skip this step.
4. Write more fractions by reducing the previous numerators and denominators by one, until the total number of fractions is equal to P, the number of tickets you are planning to buy. If you bought 6 tickets in the above example, these are the fractions you should come up with:
1. 22999000/23000000
☆ This is your first fraction, from an earlier step.
2. 22998999/22999999
☆ This is the result of subtracting one from each of the numbers in your first fraction; the following fractions continue this pattern.
3. 22998998/22999998
4. 22998997/22999997
5. 22998996/22999996
6. 22998995/22999995
5. Multiply all of the fractions together. There are two ways to do this. You can multiply all of the numerators, then multiply all the denominators, and then divide the numerator product by the denominator product. Or, if you have a calculator that can generate long decimals (such as the standard calculator in your computer), simplify all of the fractions (numerator divided by denominator to generate a decimal) and then multiply all of the decimals together. The resulting number is the probability of having NO winners among the tickets you purchased.
□ The 6 fractions in this example generate these decimals, respectively:
☆ 0.99995652173913043478260869565217
☆ 0.99995652173724007553217719705118
☆ 0.99995652173534971611736661890145
☆ 0.99995652173345935653817693976221
☆ 0.99995652173156899679460813819272
☆ 0.99995652172967863688666019275222
□ Since you have 6 fractions, you must multiply the resulting decimals together, which in this case results in 0.99973915876017716698091198324496
6. Subtract the result from 1 to get the probability of having AT LEAST ONE winner among the tickets you purchased.
□ 1 - 0.99973915876017716698091198324496 = 0.000260841239822833019088016756
□ For ease of calculation, let's cut it down to .0002608412
7. Invert the fraction. Press the "1/x" or "x^ -1" button on your calculator to invert the fraction you calculated in the previous step. 1 / .0002608412 ~= 3834, meaning that you have about a 1 in 3834 chance of holding at least one winning ticket. If your calculator doesn't have this function, skip this step and move on to the next.
8. Convert the decimal into a fraction. Count the number of characters after the decimal point, as that will be the number of zeros following 1 in your denominator; to get the numerator, take the decimal point and any preceding zeros away.
□ .0002608412 has 10 characters after the decimal point, so your denominator is 10,000,000,000 (1 followed by 10 zeros)
□ Without the decimal and the preceding zeros, .0002608412 = 2608412
□ Your fraction is 2,608,412/10,000,000,000
9. Calculate your chances. Divide the denominator by the numerator. In this case, 10,000,000,000 divided by 2,608,412 equals 3,834 (when rounded). This corresponds to about 1 chance in 3,834 of holding at least one winning ticket.
Simplified Method
1. Use this equation: Chance of Win = [1 - (1 - W/T)^P] * 100, which for P = 1 would reduce to: Chance of Win = W/T * 100. The resulting number is your percentage of winning. Using the numbers from the example above, this equation tells us that we have a 0.026084% (less than three one-hundredths of a percent) chance of winning.
2. Convert to a fraction as above. Before multiplying by 100 to obtain the percentage, you will have calculated ~0.00026084. Create a fraction from this as in the last steps of the first method above, so that you have 26084/100,000,000 (8 zeros after the one in this case because there are only 8 digits after the decimal point). Divide the denominator by the numerator, and your answer, when rounded, is 3,834--the same number you would have arrived at by use of the longer method.
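Both methods are easy to automate. The following sketch (my own, not part of the article; the function name is made up) computes the exact product from the first method using rational arithmetic and reproduces the running example:

```python
from fractions import Fraction

def win_probability(T, W, P):
    """Exact probability that at least one of P tickets is a winner,
    when W of the T tickets sold are winners."""
    p_none = Fraction(1)
    for i in range(P):
        # probability ticket i+1 is a loser, given the previous i were losers
        p_none *= Fraction(T - W - i, T - i)
    return 1 - p_none

p = win_probability(23_000_000, 1_000, 6)
print(round(1 / p))  # 3834, i.e. about 1 chance in 3,834
```

Using `Fraction` avoids the long decimals entirely; converting to "1 in N" odds at the end is a single division.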
• Some lotteries allow multiple wins if more than one ticket is purchased. These methods simply calculate your chance of at least one win.
• Some lotteries provide the odds of winning, so you can check your math.
• If the odds of winning with any given ticket are low, and you are only buying a few tickets, you can get a good estimate of your odds by multiplying the number of tickets you purchase by the odds
of winning for each one (the number of winners divided by the number of tickets to be sold). If, for example, you buy 6 tickets in a 23,000,000 ticket lottery with 1,000 winners, then the odds of
any one ticket being a winner is 1,000/23,000,000 or 1/23,000, so the probability of you holding a winner is 6/23,000 (about 1 in 3833, which is still about 0.00026 or 0.026%). This estimate
slightly overstates your odds of winning, but if your chances are low enough, the error will be very small.
• These calculations are only valid for a random drawing lottery. A lottery based on matching numbers would have an entirely different calculation, based on the number of balls drawn and the number
of total balls to draw from.
• If playing a lottery is anything more than just fun for you, and you are thinking of risking money you cannot afford to lose, you are advised to avoid playing a lottery.
R: st: Re:
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
R: st: Re:
From "Carlo Lazzaro" <carlo.lazzaro@tin.it>
To <statalist@hsphsun2.harvard.edu>
Subject R: st: Re:
Date Sat, 3 Dec 2011 13:45:08 +0100
For those who may concern, I have freely downloaded Yuval's article at:
Kindest Regards,
-----Original message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On behalf of Yuval Arbel
Sent: Thursday, December 1, 2011, 7:05 AM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Re:
Steve and David,
I still suggest you take a look at the full version of my RSUE paper:
you can access it through science direct (www.sciencedirect.com). The
paper also includes the formula for calculating the t-test that I used
there. In addition, i believe you should take a look at the references
of our paper, particularly about the literature that deals with
hedonic indices
Note that RSUE is one of the best journals in the field of urban economics, and its editor (Dan McMillen) is highly appreciated.
What I did in the second part of the paper is a paired t-test after controlling for all the characteristics of the apartment: I applied a methodology with a cross-sectional nature and simulated a situation where you modify the position of each dwelling unit with identical characteristics from frontline to non-frontline streets and vice-versa. You can check the statistical literature to see that this is the appropriate test in this case.
I am not sure that you can do precisely what I did in the paper by using -margins-: maybe if -margins- can give you the standard deviation of the point estimator (something equivalent to -predict, stdp- but for a point estimator) - but this does not seem to be precisely equivalent to what I did and, in fact, it seems that while using this methodology you lose information (you take only the average characteristics instead of the characteristics of each dwelling unit separately).
Finally, note that in empirical work you can only make your best effort to isolate the effects; you cannot get to perfection. Richard Arnott (another very famous empirical urban economist) said in one of his papers that isolating an effect via regression analysis is like a New Yorker who comes out of his apartment during a busy morning in New York and tries to listen to a whisper.
On 11/30/11, David Ashcraft <ashcraftd@rocketmail.com> wrote:
> Steve,
> Can you please explain a little further. Let me rephrase the question
> initially asked: whether the coefficients obtained after running a regression on
> all managers (the full dataset) are the same as the
> average coefficients obtained from running regressions on individual
> managers. I don't know a paper that has done analysis on this pattern, and
> would like to know if there exists any analysis like that. My idea is that both
> methods should reflect similar results.
> David
> ----- Original Message -----
> From: Steve Samuels <sjsamuels@gmail.com>
> To: statalist@hsphsun2.harvard.edu
> Cc:
> Sent: Thursday, December 1, 2011 1:39:29 AM
> Subject: Re: st: Re:
> Yuval,
> I don't have access to your article, but I have an observation: The
> predictions (real and counterfactual) that are averaged are not independent,
> because they are all functions of the estimated regression coefficients. I
> don't think a t-test can accommodate the non-independence. In Stata, I would
> use -margins- or -lincom- after -margins-.
> Steve
> On Nov 26, 2011, at 9:09 AM, Yuval Arbel wrote:
> David,
> You can simply use Difference in Difference (DD) analysis:
> Run a regression on the group of managers who take the first (second)
> approach. Then predict what would have happened to the performance of
> each manager in the case that he/she takes the other approach and use
> the -ttest- to see whether the difference is significant.
Note to define dummy variables in any case where the variables are ordinal, i.e., where the numerical values have no quantitative meaning.
> I use this approach quite often. You can look at the second part of my
> following paper published in RSUE:
> Arbel, Yuval; Ben Shahar,Danny; Gabriel, Stuart and Yossef Tobol:
> "The Local Cost of Terror: Effects of the Second Palestinian Intifada
> on Jerusalem House Prices".Regional Science and Urban Economics (2010)
> 40: 415-426
> On Sat, Nov 26, 2011 at 12:11 PM, David Ashcraft
> <ashcraftd@rocketmail.com> wrote:
>> Hi Statalist,
>> This is more like an econometric than a Stata question. I am little lost
>> on the following scenario:
>> The situation is: I want to measure the performance of managers who have a
>> specific approach against those who do not. I have several individual
>> managers in each category. One way is to regress the performance of these
>> managers against their benchmark for the whole data using
>> -regress manager benchmark, by(belief)
>> The second option is to run individual regression on each manager and get
>> the coefficients of individual regressions and run a ttest alpha,
>> by(belief) .
>> Now the question is, how different is the result from the ttest of alpha
>> from that of the alpha of the regression equation.
>> Any help will be really appreciated.
>> If anyone can suggest an academic paper on similar scenarios, that would
>> be a great help.
>> David
>> *
>> * For searches and help try:
>> * http://www.stata.com/help.cgi?search
>> * http://www.stata.com/support/statalist/faq
>> * http://www.ats.ucla.edu/stat/stata/
> --
> Dr. Yuval Arbel
> School of Business
> Carmel Academic Center
> 4 Shaar Palmer Street, Haifa, Israel
> e-mail: yuval.arbel@gmail.com
Dr. Yuval Arbel
School of Business
Carmel Academic Center
4 Shaar Palmer Street, Haifa, Israel
e-mail: yuval.arbel@gmail.com
Show that lim x->0 (sin 1/x) is not equal to 0; use the epsilon-delta method.
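A sketch of the standard argument (my own, not from the thread):

```latex
\text{Suppose } \lim_{x\to 0}\sin(1/x)=0.
\text{ Then for } \varepsilon=\tfrac12 \text{ there would be } \delta>0 \text{ with }
\left|\sin(1/x)\right|<\tfrac12 \text{ whenever } 0<|x|<\delta.
\text{But } x_n=\frac{1}{\pi/2+2\pi n} \text{ satisfies } 0<x_n<\delta \text{ for all large } n,
\text{ while } \sin(1/x_n)=\sin\!\left(\tfrac{\pi}{2}+2\pi n\right)=1>\tfrac12,
\text{ a contradiction.}
```

The same trick with x_n' = 1/(3π/2 + 2πn), where sin(1/x_n') = -1, rules out every other candidate limit L as well, so the limit does not exist at all.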
Basic Introduction to MATLAB -- JHU MATLAB Help Page
All of the trigonometry functions are easily accessed within MATLAB. To take the function of some variable x:
sine - sin(x)
cosine - cos(x)
tangent - tan(x)
cotangent - cot(x)
secant - sec(x)
cosecant - csc(x)
The inverse (arc-) trig. functions are just as easy:
arc-sine - asin(x)
arc-cosine - acos(x)
arc-tangent - atan(x)
arc-cotangent - acot(x)
arc-secant - asec(x)
arc-cosecant - acsc(x)
And, if we are going to be dealing with trig. functions, we are going to need an accurate way to represent "pi." Luckily, MATLAB has made it so we need only type pi to use it.
Let's find the value of the tangent of 45 degrees. Angle measurements in MATLAB must be given in radians, not degrees:
tan(45*pi/180)
ans = 1.0000
Now, let's find the inverse cosine of (the square root of 3) divided by 2:
acos(sqrt(3)/2)
ans = 0.5236
You'll notice that MATLAB returns a value of .5235... and not pi/6 like you may have expected. This is the computed value of pi/6. MATLAB does not recognize our "special angles" from trigonometry. You will have to rely on yourself to recognize them.
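As an aside (mine, not part of the MATLAB page), Python's math module behaves the same way: it also expects radians, and it also returns the computed decimal rather than the symbol pi/6:

```python
import math

print(math.tan(math.radians(45)))   # ~1.0, the tangent of 45 degrees
print(math.acos(math.sqrt(3) / 2))  # 0.5235..., the decimal value of pi/6
```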
ltspice Time Step too small
1. 9th November 2011, 03:30 #1
ltspice Time Step too small
Hi All,
I am getting a "Time step too small; initial timepoint: trouble with node x"
My SPICE file is as follows:
* MemristorHP
.SUBCKT MemristorHP plus minus PARAMS:
+ phio=0.95 Lm=0.0998 w1=0.1261 foff=3.5e-6 ioff=115e-6 aoff=1.2 fon=40e-6 ion=8.9e-6 aon=1.8 b=500e-6 wc=107e-3
G1 plus internal value={sgn(V(x))*(1/V(dw))^2*0.0617*(V(phiI)*exp(-V(B)*V(sr))-(V(phiI)+abs(V(x)))*exp(-V(B)*V(sr2)))}
Esr sr 0 value={sqrt(V(phiI))}
Esr2 sr2 0 value={sqrt(V(phiI)+abs(V(x)))}
Rs internal minus 215
Eg x 0 value={V(plus)-V(internal)}
Elamda Lmda 0 value={Lm/V(w)}
Ew2 w2 0 value={w1+V(w)-(0.9183/(2.85+4*V(Lmda)-2*abs(V(x))))}
EDw dw 0 value={V(w2)-w1}
EB B 0 value={10.246*V(dw)}
ER R 0 value={(V(w2)/w1)*(V(w)-w1)/(V(w)-V(w2))}
EphiI phiI 0 value={phio-abs(V(x))*((w1+V(w2))/(2*V(w)))-1.15*V(Lmda)*V(w)*log(V(R))/V(dw)}
C1 w 0 1e-9 IC=1.2
R w 0 1e8MEG
Ec c 0 value={abs(V(internal)-V(minus))/215}
Emon1 mon1 0 value={((V(w)-aoff)/wc)-(V(c)/b)}
Emon2 mon2 0 value={(aon-V(w))/wc-(V(c)/b)}
Goff 0 w value={foff*sinh(stp(V(x))*V(c)/ioff)*exp(-exp(V(mon1))-V(w)/wc)}
Gon w 0 value={fon*sinh(stp(-V(x))*V(c)/ion)*exp(-exp(V(mon2))-V(w)/wc)}
.ENDS MemristorHP
Vtest test GND DC 0 SIN(0 1.8V 1 0 0 0)
R0 test X 1k
Xmemristor X GND MemristorHP
.TRAN 1m 1 uic
2. 9th November 2011, 19:11 #2
Re: ltspice Time Step too small
Try to decrease the step ceiling. Note that your command
.TRAN 1m 1 uic
is not optimal: the first number, 1m, has no effect on the simulation run, and you have not set the step ceiling, so it is defined automatically as 1/50 = 20ms, which is too high.
Try the following:
.TRAN 0 1 0 1m uic
Then it works in PSpice.
I do not work with LTspice, thus I do not know if GND can be used for the ground. In PSpice, I replaced it by 0 (zero) in your code. Then it works fine.
Hope it helps you.
3. 9th November 2011, 19:51 #3
Re: ltspice Time Step too small
You might look at whether this memristor model is singular
at any point. That's a good way to fail numerically.
Compressed Least-Squares Regression
Odalric-Ambrym Maillard and Rémi Munos
NIPS 2009.
We consider the problem of learning, from K data, a regression function in a linear space of high dimension N using projections onto a random subspace of lower dimension M. From any algorithm
minimizing the (possibly penalized) empirical risk, we provide bounds on the excess risk of the estimate computed in the projected subspace (compressed domain) in terms of the excess risk of the
estimate built in the high-dimensional space (initial domain). We show that solving the problem in the compressed domain instead of the initial domain reduces the estimation error at the price of an
increased (but controlled) approximation error. We apply the analysis to Least-Squares (LS) regression and discuss the excess risk and numerical complexity of the resulting "Compressed Least Squares Regression" (CLSR) in terms of N, K, and M. When we choose M=O(\sqrt{K}), we show that CLSR has an estimation error of order O(\log K / \sqrt{K}).
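The recipe is easy to prototype: draw a random M-by-N projection matrix R, replace each feature vector x by the compressed vector Rx, and run ordinary least squares in the M-dimensional domain. The sketch below is my own plain-Python illustration of that idea (normal equations plus Gaussian elimination; it uses a Gaussian random projection and makes no claim to match the authors' implementation):

```python
import math
import random

def lstsq(Z, y):
    """Solve min ||Z b - y||^2 via the normal equations (Z^T Z) b = Z^T y."""
    m = len(Z[0])
    A = [[sum(row[i] * row[j] for row in Z) for j in range(m)] for i in range(m)]
    b = [sum(Z[k][i] * y[k] for k in range(len(y))) for i in range(m)]
    for col in range(m):  # Gaussian elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, m))) / A[r][r]
    return beta

def compressed_ls(X, y, M, seed=0):
    """Least squares after projecting the N features onto M random directions."""
    rng = random.Random(seed)
    N = len(X[0])
    R = [[rng.gauss(0.0, 1.0) / math.sqrt(M) for _ in range(N)] for _ in range(M)]
    Z = [[sum(R[m][n] * x[n] for n in range(N)) for m in range(M)] for x in X]
    return R, lstsq(Z, y)

# Toy data: y is exactly linear in N = 4 features; compress to M = 2.
X = [[float(i), float(i * i), float(i % 3), 1.0] for i in range(1, 21)]
y = [2 * x[0] - x[1] + 0.5 * x[2] + 3 * x[3] for x in X]
R, beta = compressed_ls(X, y, M=2)
print(len(beta))  # 2 coefficients, fit entirely in the compressed domain
```

The compressed fit trades some approximation error for a much smaller estimation problem, which is exactly the trade-off the abstract quantifies.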
[Cosmology] Scale Factor Values
What 'math' are you following through with?
I used the equation for the Hubble Parameter as a function of redshift, then changed this over to be a function of scale factor instead.
What is the definition of [itex]\Omega_{s0}[/itex] for some species [itex]s[/itex]? What is [itex]\Omega_{\rm total 0}[/itex] in the universe you are studying?
[tex]\Omega_{total 0} = 1[/tex]
I don't understand the first bit of the question, I'm sorry.
Make use of the second Friedmann equation to make sure that when [itex]H(a)[/itex] goes to zero, [itex]dH/da[/itex] is negative.
I'm uncertain as to how that determines which of the two remaining parameter sets gives the recollapsing universe.
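One concrete way to see it (my own sketch, not from the thread; it assumes H0 = 1 and an illustrative closed matter-only model): write H(a)^2/H0^2 = Ω_m a^(-3) + Ω_Λ + (1 - Ω_m - Ω_Λ) a^(-2) and look for a finite a > 1 at which H^2 crosses zero going downward; that turnaround exists only for the recollapsing parameter set. For Ω_m = 2, Ω_Λ = 0 the turnaround is analytically a = Ω_m/(Ω_m - 1) = 2:

```python
def H2(a, om, ol):
    """Dimensionless H(a)^2 / H0^2 for matter + Lambda + curvature."""
    return om / a**3 + ol + (1.0 - om - ol) / a**2

def turnaround(om, ol, lo=1.0, hi=100.0, tol=1e-10):
    """Bisect for the scale factor where H^2 crosses zero (None if it never does)."""
    if H2(hi, om, ol) > 0:
        return None  # still expanding at a = hi: no turnaround found
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if H2(mid, om, ol) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(turnaround(2.0, 0.0))  # ~2.0: the closed matter universe recollapses
print(turnaround(0.3, 0.7))  # None: this parameter set expands forever
```

Checking that dH^2/da < 0 at the crossing (as the second Friedmann equation guarantees there) is what distinguishes a genuine turnaround from a coasting point.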
Hilshire Village, TX Geometry Tutor
Find a Hilshire Village, TX Geometry Tutor
...I have a grasp of the basics of almost all physical sciences. I have an immensely broad background in and knowledge of the elementary sciences, from geology to meteorology, physics, biology, chemistry, cosmology, etc.
37 Subjects: including geometry, chemistry, calculus, physics
...Chemistry - I have the equivalent of a minor in chemistry (37 semester hours), and both a BA and a BS in Chemical Engineering from Rice University. I have done industrial chemical research,
primarily in refining and petrochemicals. I hold three patents for chemical processes.
11 Subjects: including geometry, chemistry, English, physics
...Additionally, I have taught it to students in Austin Community College. I love using laws of sines and cosines to solve equations and problems. I am very good at it, and I have taught
trigonometry during home tutoring in Kathmandu, Nepal.
12 Subjects: including geometry, calculus, physics, algebra 1
...I have also tutored high school students in various locations. My reputation at the Air Force Academy was the top calculus instructor. I have taught precalculus during the past two years and
have enjoyed success with the accomplishments of my students.
11 Subjects: including geometry, calculus, statistics, algebra 1
...For 9 years, I have worked with H.I.S.D. teaching all levels of middle school math. In addition, I was the Mathcounts competition coach for 6 years. The strategies I taught in the Mathcounts
program are identical to the strategies used to solve SAT math problems.
24 Subjects: including geometry, calculus, statistics, biology
|
{"url":"http://www.purplemath.com/Hilshire_Village_TX_geometry_tutors.php","timestamp":"2014-04-20T16:00:50Z","content_type":null,"content_length":"24481","record_id":"<urn:uuid:b0c8085a-c164-4827-83e2-deb8b7e62abd>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Glass Flask Whose Volume Is 1000 At A Temperature ... | Chegg.com
A glass flask whose volume is 1000 cm^3 at a temperature of 1.00 °C is completely filled with mercury at the same temperature. When the flask and mercury are warmed together to a temperature of 52.0 °C, a volume of 8.35 cm^3 of mercury overflows the flask.
If the coefficient of volume expansion of mercury is 1.80×10^−4 K^−1, what is the coefficient of volume expansion of the glass?
Express your answer in inverse kelvins.
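Assuming the stripped units are cm³ for the volumes and °C for the temperatures (a standard version of this problem), the glass coefficient follows from the overflow relation ΔV_overflow = V₀(β_Hg − β_glass)ΔT:

```python
# Sketch of the overflow calculation; unit assumptions are noted above.
V0 = 1000.0        # cm^3, flask volume at 1.00 C (assumed units)
dT = 52.0 - 1.0    # K (a 1 degree C change equals a 1 K change)
overflow = 8.35    # cm^3 of mercury spilled (assumed units)
beta_hg = 1.80e-4  # 1/K, given for mercury

# overflow = V0 * (beta_hg - beta_glass) * dT  =>  solve for beta_glass
beta_glass = beta_hg - overflow / (V0 * dT)
print(f"{beta_glass:.2e} 1/K")   # about 1.63e-05 1/K
```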
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/glass-flask-whose-volume-1000-temperature-100-completely-filled-mercury-temperature-flask--q1701935","timestamp":"2014-04-16T09:01:45Z","content_type":null,"content_length":"22209","record_id":"<urn:uuid:48d836a5-763c-425a-b45e-030704124f97>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Solve a Math Problem
There are three steps to solving a math problem.
1. Figure out what the problem is asking.
2. Solve the problem.
3. Check the answer.
Sample Problem
It's early spring, and Cierra is planting a new lawn. She's decided that the length of the lawn should be 1 ft less than double the width, and the area should be 15 ft^2. She also plans to construct
a concrete path around the lawn. Fancy. Find the length of the path.
In this problem, we need to find the perimeter of the lawn. Since the dimensions of the lawn are unknown, let l be the length and w be the width. The length (l) is 1 ft less than double the width (2w
), or l = 2w – 1.
The area of the lawn is given as 15 ft^2, so
lw = 15
(2w – 1)w = 15
2w^2 – w – 15 = 0.
We can solve the quadratic equation by factoring:
(2w + 5)(w – 3) = 0, so w = –5/2 or w = 3.
After discarding the negative value, since it would be extra-super-fancy (not to mention bizarre) to have a negative lawn, w = 3 ft and l = 2(3) – 1 = 5 ft. The length of the path is 2(l + w) = 2(5 + 3) = 16 ft.
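As a quick sanity check, the quadratic 2w² – w – 15 = 0 can be solved with the quadratic formula in a few lines:

```python
# Solve 2w^2 - w - 15 = 0, keep the positive root, recompute the perimeter.
import math

a, b, c = 2, -1, -15
disc = b * b - 4 * a * c                 # discriminant: 1 + 120 = 121
w = (-b + math.sqrt(disc)) / (2 * a)     # positive root: w = 3
l = 2 * w - 1                            # l = 5
print(w, l, 2 * (l + w))                 # 3.0 5.0 16.0
```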
Sample Problem
Serena and her BFF (or for this week, anyway) Blair want to go to a movie playing at a theater 45 miles away. Whatever, saving on gas is for plebes. Serena starts out first, driving at 40 miles/hour, while Blair starts driving 10 minutes later at 50 miles/hour. When will Blair pass Serena?
(Because she will pass Serena. Hey, nobody ever said they were great drivers.)
Let t be the time in hours when Blair passes Serena, measured from when Blair starts driving. The distance traveled by Blair in t hours is the same as the distance traveled by Serena in t hours and 10 minutes (Serena starts driving 10 minutes, or 1/6 of an hour, earlier).
Using Serena and Blair's driving speeds given in the problem, we solve the following linear polynomial equation:
50t = 40(t + 1/6), which gives 10t = 40/6, or t = 2/3 of an hour.
Blair will pass Serena after 40 minutes. Eat her dust, xoxo!
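The equation 50t = 40(t + 1/6) can be double-checked with exact fractions:

```python
# Blair's distance 50t equals Serena's 40(t + 1/6), so 10t = 40/6.
from fractions import Fraction

t = Fraction(40, 6) / 10
print(t, "hour =", t * 60, "minutes")   # 2/3 hour = 40 minutes
```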
Sample Problem
Lisa Simpson is planning to deposit her pocket money in the bank, like the bizarrely responsible eight-year-old she is. If she is depositing $500 and the interest rate is r% per year, how much money
will she have in her account after 2 years at the ripe old age of 10?
Using the interest formula with yearly compounding, Lisa will have
500(1 + r/100)
dollars after one year. After two years, she will have
500(1 + r/100)^2 = 500 + 10r + (1/20)r^2,
which is a polynomial of degree 2 in r. Just in time for fifth grade, too.
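The degree-2 claim can be verified numerically; this sketch assumes yearly compounding, and the helper names are illustrative:

```python
# The balance 500(1 + r/100)^2 expands to 500 + 10r + r^2/20.
def balance(r):
    return 500 * (1 + r / 100) ** 2

def expanded(r):
    return 500 + 10 * r + r ** 2 / 20

for r in (2, 5, 10):
    assert abs(balance(r) - expanded(r)) < 1e-9
print(balance(100))  # 2000.0 (the deposit quadruples at 100% interest)
```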
Sample Problem
Stewie stole $5 from Brian and has been trying to hide away. Not because he's scared, just because of...reasons. Brian goes looking for him running north at 4 miles/hour, so Stewie starts running
(okay, toddling) east at 3 miles/hour. When will they be separated by 1 mile?
Let t be the time in hours when Stewie and Brian are 1 mile apart. In t hours, Brian has traveled 4t miles and Stewie has traveled 3t miles. Using the Pythagorean formula, we get the quadratic equation:
(4t)^2 + (3t)^2 = 1^2, so 25t^2 = 1 and t = 1/5 of an hour.
They will be a mile apart in 12 minutes.
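A one-line check of the 12-minute answer: speeds of 4 and 3 miles/hour form a 3–4–5 right triangle, so the separation after t hours is 5t miles.

```python
# 5t = 1 mile gives t = 1/5 hour.
import math

t = 1 / math.hypot(4, 3)     # hypot(4, 3) = 5.0
print(t * 60)                # 12.0 minutes
```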
Sample Problem
Bobby Hill decides to open a factory to produce Shmickerdoodle cookies. He found out that the cost of making q boxes is given by C(q) = 800 + 3q. If the selling price of each box is $3.50, how many boxes should the factory produce in order to have no loss?
The revenue obtained by selling q boxes is the price of each box times the number of boxes sold, R(q) = 3.5q. Note that R(q) and C(q) are both polynomials of degree 1. The loss is found by subtracting the revenue R(q) from the cost C(q):
Loss = C(q) – R(q) = 800 + 3q – 3.5q = 800 – 0.5q.
The loss is 0 when 800 – 0.5q = 0, that is, when q = 1600.
That's a long way from breaking even, Bobby-o. Hope those Shmickerdoodles are worth it.
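The break-even point can be confirmed with a short sketch (function names are illustrative):

```python
# Cost C(q) = 800 + 3q, revenue R(q) = 3.5q, loss = 800 - 0.5q.
def cost(q):
    return 800 + 3 * q

def revenue(q):
    return 3.5 * q

q = 800 / 0.5                      # loss hits zero at q = 1600
print(q, cost(q), revenue(q))      # 1600.0 5600.0 5600.0
```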
|
{"url":"http://www.shmoop.com/polynomials/solving-math-problems-help.html","timestamp":"2014-04-20T21:25:57Z","content_type":null,"content_length":"36998","record_id":"<urn:uuid:bc966b84-f883-4422-9ccb-923e90dabb22>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Santa Monica Trigonometry Tutors
...Finally grasping a frustrating concept is one of the most exhilarating and rewarding feelings we can experience. Patience and understanding are my strongest attributes, and they help me fully connect with and guide each student toward a set of skills tailored just for them! My specialties lie in Math and Science, but I am qualified to tutor the additional subjects listed below.
56 Subjects: including trigonometry, reading, chemistry, English
...Your child's skills will be improved in a few sessions. I am organized, professional and friendly. My degree in Mathematics is from UCLA, which was a very rigorous course of study.
14 Subjects: including trigonometry, reading, Spanish, ESL/ESOL
...I taught at the RAND Frederick S. Pardee Graduate School during my 29 year career there. Now that I am retired, I want to spend time helping students get over a fear of mathematics.
7 Subjects: including trigonometry, geometry, algebra 2, algebra 1
...Geometry is a good way to make algebra more real too. Trigonometry has both a practical side and a theoretical side. I help students bridge from the concrete to the abstract concepts.
24 Subjects: including trigonometry, English, chemistry, algebra 1
...One thing I always practice in my classroom is starting from a level where the students can understand, and moving forward from there. I believe that building a good foundation is the key to
success in any subject. I have come up with many different tricks for helping students remember key idea...
11 Subjects: including trigonometry, physics, geometry, algebra 1
|
{"url":"http://www.algebrahelp.com/Santa_Monica_trigonometry_tutors.jsp","timestamp":"2014-04-17T12:42:19Z","content_type":null,"content_length":"25175","record_id":"<urn:uuid:e72a64f3-ebfa-48b1-8bd9-183bde9ecced>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Network input handling - Page 5 - Multiplayer and Network Programming
wouldn't work well for PvP-type situations
Note that the problem is the network, not the implementation. The implementation gives you as good an implementation as your network allows, assuming you use an adaptive de-jitter buffer size. PvP
doesn't work well on poor networks.
|
{"url":"http://www.gamedev.net/topic/635009-network-input-handling/page-5","timestamp":"2014-04-17T10:35:34Z","content_type":null,"content_length":"197683","record_id":"<urn:uuid:6e4a4b74-560c-44f2-bb71-2332385e6b0c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is there a "groupoid integral" with values in a groupoid?
Let $G = \{G_1 \rightrightarrows G_0\}$ be a finite groupoid, i.e. $G_1,G_0$ are both finite sets, and let $A$ be $\mathbb Q$-module. Regard $A$ as a discrete groupoid $A \rightrightarrows A$, and
let $f: G\to A$ be a functor — equivalently, there is a set $G_0/G_1$ of isomorphism classes of $G$, and $f$ is an $A$-valued function on this set. Then Baez and Dolan define a groupoid integral: $$
\int_G f = \sum_{x\in G_0/G_1} \frac{f(x)}{\lvert {\rm Aut}(x)\rvert} $$ where to make the above precise I'm using the fact that if $x,y \in G_0$ are isomorphic, then $\lvert {\rm Aut}(x)\rvert = \
lvert {\rm Aut}(y)\rvert$, and choosing an isomorphism $x \to y$ in fact induces an isomorphism ${\rm Aut}(x) \to {\rm Aut}(y)$. Anyway, the point is that $\int_G f$ actually depends only on the
"stack" $G_0 // G_1$, where for our purposes "stack" can mean "groupoid up to equivalence": if $G,G'$ are equivalent groupoids and $f':G' \to A$ is the functor corresponding to $f$ under the
equivalence, then $\int_Gf = \int_{G'}f'$.
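For concreteness, the Baez–Dolan formula above can be sketched for a finite groupoid. The (source, target)-pair representation and the helper name `groupoid_integral` below are illustrative assumptions, not from the post:

```python
# Sketch of the Baez-Dolan groupoid integral
#   int_G f = sum over iso classes [x] of f(x) / |Aut(x)|.
# A finite groupoid is represented by a set of objects and a list of
# arrows as (source, target) pairs, identities included and closed under
# composition and inverses, so isomorphism classes are just the
# connected components of that relation.
from fractions import Fraction

def groupoid_integral(objects, morphisms, f):
    # partition the objects into isomorphism classes
    classes, unseen = [], set(objects)
    while unseen:
        cls = {unseen.pop()}
        changed = True
        while changed:
            changed = False
            for s, t in morphisms:
                if (s in cls) != (t in cls):   # arrow leaving the class
                    cls |= {s, t}
                    changed = True
        unseen -= cls
        classes.append(cls)
    # sum f(x) / |Aut(x)| over one representative per class
    total = Fraction(0)
    for cls in classes:
        x = next(iter(cls))
        aut = sum(1 for s, t in morphisms if s == x and t == x)
        total += Fraction(f(x)) / aut
    return total

# B(Z/2): one object with two automorphisms; integrating f = 1 recovers
# the groupoid cardinality 1/2.
print(groupoid_integral(["*"], [("*", "*"), ("*", "*")], lambda x: 1))  # 1/2
```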
Suppose now that $A$ is not a $\mathbb Q$-module but some (possibly weak) version of a "$\mathbb Q$-module in groupoids". (If you want, I have no objection to you thinking of $A$ as a strict object —
the weakened version would replace all the axioms for a $\mathbb Q$ vector space with natural isomorphisms that have their own coherency.) Then it still makes sense to talk about functors $f: G \to A$.
Question: Does there exist an extension of the groupoid integral above to integrals of the form $\int_Gf$ where $f: G \to A$ is a functor but $A$ is not discrete, and that plays well with equivalences of groupoids $G \to G'$ and $A \to A'$?
groupoids ct.category-theory
$\int_Gf$ only depending on the 'stack' is trivial for the case of discrete $A$ (as in discrete as a groupoid), because $G_0/G_1 \simeq G'_0/G'_1$ for equivalent groupoids $G, G'$. I'd like to
think of A as a groupoid with one object, so that the sum is the product in that groupoid, but then I don't know how to think of the functor $f$. – David Roberts Apr 16 '10 at 1:05
For the more general case, an easier first case to consider is when $A$ is/comes from a crossed module $V_1 \to V_0$ in the category of $\mathbb{Q}$-modules. (On a technical note, since all such
modules are free, the naive 2-category of such crossed modules is the 'right' one, you don't need to pass to a localisation, such as Noohi's butterflies/papillons). BTW these crossed modules are
usually called Baez-Crans 2-vector spaces. My guess is that the desired integral will give an element of $V_1\times V_0$, which when one translates back to groupoids is an arrow of $A$. – David
Roberts Apr 16 '10 at 1:08
1 Answer
Yes. It seems you're only looking for an object up to isomorphism, so all you really need is divisibility and addition to yield objects that are well-defined up to isomorphism. Assuming your
notion of rational vector space in groupoids involves a Picard groupoid (i.e., symmetric monoidal category with all objects invertible) with "multiplication by a/b" functors, and a zoo of
compatibility isomorphisms under multiplication and addition, then you can use the Baez-Dolan formula.
This is a groupoid version of pushing forward a constant sheaf on a zero-dimensional stack (which is well-known under suitable assumptions on stabilizers and the characteristic of the coefficients).
So I'm confused. I have a groupoid $G$ and another groupoid $A$, and $A$ happens to be a $\mathbb Q$-vector space object. So if I use the Baez-Dolan formula, I can pick out an object of
$A$ up to isomorphism. But I certainly cannot pick out an object of $A$, and I would like to, with their formula. Moreover, suppose that $g\in G$ and $a\in A$ are objects. Then $f$ gives a
map of groups $f_g: {\rm Aut}(g) \to {\rm Aut}(a)$, and I would expect the integral, far from just dividing by $|{\rm Aut}(g)|$, to involve both groups (for example, only divide by $|{\rm
ker}(f_g)|$). – Theo Johnson-Freyd Apr 16 '10 at 2:47
I guess by "plays well with equivalences" I mean something like: if I have an equivalence $G \sim G'$, then I'd expect $\int_G f \cong \int_{G'}f'$ as objects in $A$, and I'd even expect
to be able to compute this isomorphism from the equivalence. But I would like to be able to pick out a particular object of $A$. – Theo Johnson-Freyd Apr 16 '10 at 2:49
I have heard that some category theorists frown upon picking out particular objects in a category. A better integral may involve an object of A together with some datum that satisfies a
universal property. You won't get a unique object this way, but it will be unique up to unique isomorphism (or will lie in a contractible space of solutions, if you're working with higher
categories). It is difficult to tell what precise properties you want, since you don't provide concrete motivation in your question. – S. Carnahan♦ Apr 16 '10 at 4:43
|
{"url":"http://mathoverflow.net/questions/21516/is-there-a-groupoid-integral-with-values-in-a-groupoid","timestamp":"2014-04-18T10:51:57Z","content_type":null,"content_length":"59435","record_id":"<urn:uuid:2dd77f0f-ddb5-4e21-a883-6a81d0ce564c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
|
list of unique non-subset sets
Raymond Hettinger vze4rx4y at verizon.net
Sat Mar 19 02:43:42 CET 2005
[les_ander at yahoo.com ]
> >OK, so I need to be more precise.
> >Given a list of sets, output the largest list of sets (from this list,
> >order does not matter) such that:
> >1) there is no set that is a PROPER subset of another set in this list
> >2) no two sets have exactly the same members (100% overlap)
[Bengt Richter]
> two from the above come out length 5:
> 5: [set(['k', 'r', 'l']), set(['a', 'c', 'b']), set(['a', 'e', 'd', 'f']),
set(['h', 'g']), set(['i', 'h'])]
> 5: [set(['k', 'r', 'l']), set(['a', 'e', 'd', 'f']), set(['a', 'c']),
set(['h', 'g']), set(['i', 'h'])]
> How do you define "largest" ? ;-)
> I guess you could sum(map(len, setlist)) as a measure, but what if that makes
> a tie between two setlists (as I think it could, in general)?
With multiple outputs satisfying the constraints, I would suspect that there is
something wrong with the original spec (not as a stand-alone problem, but as
component of a real application).
Raymond Hettinger
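[Editor's note: under one simplified reading of the quoted spec — drop exact duplicates, then keep only the sets that are not proper subsets of any other — the filtering is deterministic, since the maximal elements are unique. As the thread notes, the "largest list" reading can admit larger antichains and ties; this sketch (names illustrative) implements only the simpler reading.]

```python
# Drop duplicates, then discard any set properly contained in another.
def maximal_sets(sets):
    unique = []
    for s in sets:
        if s not in unique:                 # condition 2: no 100% overlaps
            unique.append(s)
    # condition 1: s < t is Python's proper-subset test for sets
    return [s for s in unique if not any(s < t for t in unique)]

data = [{1, 2}, {1, 2, 3}, {4}, {4}, {2, 3}]
print(maximal_sets(data))  # [{1, 2, 3}, {4}]
```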
More information about the Python-list mailing list
|
{"url":"https://mail.python.org/pipermail/python-list/2005-March/345274.html","timestamp":"2014-04-19T21:29:40Z","content_type":null,"content_length":"3654","record_id":"<urn:uuid:192fd964-84c2-4b29-9549-5f87b49b5c64>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
|