| url (string, 14–2.42k chars) | text (string, 100–1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k–1.1k chars) |
|---|---|---|---|
http://mathhelpforum.com/advanced-applied-math/14826-4-vectors.html
|
## 4 vectors
Let us define a 4-vector by 4 coordinates (x1,x2,x3,x4), where (x1,x2,x3) are space components (like x,y,z) and x4 is related to time as x4 = ict. Express the following equations in tensor notation.
(i) The continuity equation: div J + d(rho)/dt = 0
(ii) The wave equation: laplacian(phi) - (1/c^2) d^2(phi)/dt^2 = 0
(iii) What will be the value of J4 in the above?
My attempt: please tell me if I am going about this the right way.
div(J) + d(rho)/dt =
dJ1/dx1 + dJ2/dx2 + dJ3/dx3 + ic*d(rho)/(ic*dt) =
sum(dJi/dxi, i=1..4) = di(Ji)
where J is the four-vector (Jx, Jy, Jz, ic*rho).
So the charge conservation equation is just the vanishing four-divergence of the 4-vector.
Along the same lines, the generalisation of the Laplacian is:
Laplacian = sum[(d/dxi)^2, i=1..3]
D'Alembertian = sum[(d/dxi)^2, i=1..3] + (d/d(ict))^2
= sum[(d/dxi)^2, i=1..4]
= dii (now with i running over all four indices)
So a wave equation looks like a generalization of a Poisson equation to a 4-dimensional space:
[Laplacian - (1/c^2)(d/dt)^2](phi) = dii(phi) = 0
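In more standard notation, the two results above read as follows (with $x_4 = ict$, so the answer to (iii) is $J_4 = ic\rho$):
$$\partial_\mu J_\mu = \frac{\partial J_1}{\partial x_1} + \frac{\partial J_2}{\partial x_2} + \frac{\partial J_3}{\partial x_3} + \frac{\partial (ic\rho)}{\partial (ict)} = \nabla\cdot\mathbf{J} + \frac{\partial \rho}{\partial t} = 0$$
$$\partial_\mu \partial_\mu \phi = \left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\phi = 0$$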
|
2016-10-22 02:08:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8683254718780518, "perplexity": 3699.2632188280786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718423.28/warc/CC-MAIN-20161020183838-00329-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://2021.help.altair.com/2021.2/winprop/topics/winprop/user_guide/proman/propagation_models/indoor_empirical_models_cost_multiwall.htm
|
# COST-Multi-Wall Model (MW)
The multi-wall model gives the path loss as the free-space loss plus the losses introduced by the walls and floors penetrated by the direct path between transmitter and receiver.
It has been observed that the total floor loss is a nonlinear function of the number of penetrated floors. This characteristic is taken into account by introducing an additional empirical correction factor.
The individual penetration losses for the walls (depending on their material parameters) are considered for the prediction of the path loss. Therefore, the multi-wall model can be expressed as follows:
(1) $$l = {l}_{FS} + {l}_{c} + \sum_{i=1}^{N} {k}_{wi} \, {l}_{wi} + {k}_{f}^{\left(\frac{{k}_{f}+2}{{k}_{f}+1} - b\right)} \, {l}_{f}$$
where
• ${l}_{FS}$ is the free space loss between transmitter and receiver
• ${l}_{c}$ is the constant loss
• ${k}_{wi}$ is the number of penetrated walls of type i
• ${k}_{f}$ is the number of penetrated floors
• ${l}_{wi}$ is the loss of wall type i
• ${l}_{f}$ is the loss between adjacent floors
• $b$ is the empirical correction factor for the nonlinear increase of the total floor loss
• $N$ is the number of different wall types
The constant loss in the equation above is a term which results when wall losses are determined from measurement results by multiple linear regression. Normally it is close to zero. The third summand in the equation represents the total wall loss as a sum of the losses of the individual walls between transmitter and receiver. For practical reasons, in ProMan the individual wall losses of the intersected walls are considered.
It is important to note that the loss factors in the formula are not physical wall losses but model coefficients optimized against the measured path loss data. Consequently, the loss factors implicitly include the effect of furniture. However, wave-guiding effects are not considered by this model, so the accuracy is moderate. On the other hand, this model has a low dependency on the database accuracy and, because of its simple approach, a very short computation time. Accordingly, no preprocessing of the building data is needed for the computation of the prediction and no settings have to be adapted for this prediction model.
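As an illustration only (not WinProp's implementation), here is a small C++ sketch of equation (1); the frequency, distance, and loss values below are assumed example numbers:
#include <cmath>
#include <iostream>
#include <vector>
// One wall type on the direct path: count k_wi and per-wall loss l_wi in dB.
struct WallType {
    int count;
    double lossDb;
};
// Free-space loss in dB for a distance in km and a frequency in MHz.
double freeSpaceLossDb(double d_km, double f_MHz) {
    return 32.44 + 20.0 * std::log10(d_km) + 20.0 * std::log10(f_MHz);
}
// Multi-wall model, equation (1):
// l = l_FS + l_c + sum_i k_wi*l_wi + k_f^((k_f+2)/(k_f+1) - b) * l_f
double multiWallLossDb(double lFs, double lC, const std::vector<WallType>& walls,
                       int kF, double lF, double b) {
    double wallSum = 0.0;
    for (const auto& w : walls) wallSum += w.count * w.lossDb;
    double floorTerm = 0.0;
    if (kF > 0) floorTerm = std::pow(kF, (kF + 2.0) / (kF + 1.0) - b) * lF;
    return lFs + lC + wallSum + floorTerm;
}
int main() {
    // Assumed example: 2.4 GHz, 20 m path, two light walls, one concrete wall, one floor.
    double lFs = freeSpaceLossDb(0.020, 2400.0);
    std::vector<WallType> walls{{2, 3.4}, {1, 6.9}};
    std::cout << multiWallLossDb(lFs, 0.0, walls, 1, 18.3, 0.46) << " dB\n";
}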
|
2023-01-28 16:21:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7460069060325623, "perplexity": 535.1542336538487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00351.warc.gz"}
|
https://codereview.stackexchange.com/questions/187920/implementation-of-a-kennel/187932
|
# Implementation of a Kennel
I need a code review for the following question:
Using C++ object oriented design, provide the implementation of Kennel so that:
• The method AddCat() adds a Cat to the Kennel, providing its name.
• The method AddDog() adds a Dog to the Kennel, providing its name.
• The method RollCall() prints the Animal's name and sound to stdout:
• Cats identify themselves by printing "Meow" to stdout.
• Dogs identify themselves by printing "Woof" to stdout.
Kennel .h
#pragma once
#ifndef KENNEL_H
#define KENNEL_H
#include <string>
#include <iostream>
#include <vector>
class Kennel
{
public:
Kennel() { };
virtual ~Kennel();
void AddCat(const std::string & name);
void AddDog(const std::string & name);
void RollCall();
virtual void makeSound(std::string name) { }
private:
std::vector <Kennel*> KennelList;
protected:
std::string name;
};
//Dog inherits Kennel
class Dog :public Kennel
{
public:
Dog(std::string dogName)
{
name = dogName;
}
~Dog() { };
void makeSound(std::string name)
{
std::cout << name << " says Woof" << std::endl;
};
};
//Cat inherits Kennel
class Cat :public Kennel
{
public:
Cat(std::string catName)
{
name = catName;
}
~Cat() { };
void makeSound(std::string name)
{
std::cout << name << " says Meow" << std::endl;
};
};
#endif
Kennel.cpp
#include "Kennel.h"
Kennel::~Kennel()
{
for (auto i : KennelList)
{
delete i;
}
}
void Kennel::AddCat(const std::string & name)
{
KennelList.push_back(new Cat(name));
}
void Kennel::AddDog(const std::string & name)
{
KennelList.push_back(new Dog(name));
}
void Kennel::RollCall()
{
for (unsigned int i = 0; i < KennelList.size(); ++i)
{
KennelList[i]->makeSound(KennelList[i]->name);
}
}
main
#include "Kennel.h"
int main()
{
Kennel kennel;
kennel.RollCall();
}
• Disappointed that there's no Kipper the Dog. :) Feb 21, 2018 at 3:00
• I also miss the implementation of the AddDog and AddCat methods. Could you please post them? Feb 21, 2018 at 3:01
I like how you made the Kennel class the virtual base for your animals. The other reviewers might have a point in that this could become confusing, but for such a simple project I think this is quite effective. It avoids the need for a pure virtual Animal class, which is kind of neat. Mind you, I'm highly skeptical of the "object oriented" approach as it is typically practiced. I can count on one hand the number of times I've been able to use inheritance to actually simplify things.
You also do not use using namespace std. Fantastic!
## Naming
Your variable names are OK for the most part. But do note that AddCat and AddDog use name as an input argument, and the object also has a name member variable. Thus, within these functions, the member variable is shadowed and not visible. You should try to avoid duplicating names like this. One common solution is to name member variables consistently as m_name or name_. This makes it obvious what they are, and it also prevents name clashes.
The methods you implemented based on the exercise's requirements have a different naming style than the makeSound method you created. You also have two member variables, one starts with an upper case letter and the other with a lower case letter. Try to be consistent in naming!
## Memory management
The destructor for Kennel iterates over KennelList and destroys all allocated memory. From the looks of it you're not leaking memory. But ideally you don't want to have to deal with deallocation. Why not let the compiler figure out what to deallocate?
std::vector<std::unique_ptr<Kennel>> KennelList;
If you use a vector of std::unique_ptr elements, you don't need to worry about delete. When you stick a pointer into the KennelList, it will from that moment on take care of not only the pointer, but also the pointed-to data. You would do something like this to put your pointers into the vector:
KennelList.emplace_back(new Cat(name));
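Or, avoiding the bare new entirely with std::make_unique (a small sketch of AddCat reworked for the smart-pointer member above):
#include <memory> // for std::unique_ptr and std::make_unique
void Kennel::AddCat(const std::string & name)
{
    // make_unique constructs the Cat and wraps it in a unique_ptr in one step.
    KennelList.push_back(std::make_unique<Cat>(name));
}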
## Input arguments
For the AddCat method you take a const reference to a string as input argument. This is excellent (though I prefer to write std::string const& name rather than const std::string & name, it prevents some misunderstandings between me and the compiler...). However, for all the other methods you take a std::string by value (Dog and Cat constructors, makeSound). There is no need to make these copies.
Though one alternative for the constructor could be
Dog(std::string dogName)
{
name = std::move(dogName);
}
Here, instead of taking dogName by reference, you take it by value, making a copy. This copy you then move into your member variable. The advantage of making the copy at the input of the function is that when you call the function,
Dog("Kipper");
a temporary std::string is made. The compiler will not create a copy of this temporary to pass to the Dog constructor; it will use the temporary directly as the argument. Thus, the string contents are constructed only once and then moved into your new object.
Note also that makeSound doesn't need an input argument at all. The object should know its own name!
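For example, a sketch of Dog::makeSound once the argument is dropped (assuming the base class virtual is changed to makeSound() as well, and the unique_ptr vector from above):
// In Dog: no parameter needed, the object already knows its own name.
void makeSound() override
{
    std::cout << name << " says Woof\n";
}
// Roll call then shrinks to:
for (auto const& animal : KennelList) { animal->makeSound(); }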
## Loops
In your destructor you use the modern-style loops:
for (auto i : KennelList)
But in RollCall you write
for (unsigned int i = 0; i < KennelList.size(); ++i)
Again, consistency is good. Also, the first loop form is better because it's less verbose, and thus much easier to read.
## std::endl
std::endl not only prints a newline, it also flushes the stream. It's much more efficient to let the system decide when to flush.
std::cout << name << " says Woof" << std::endl;
produces the same output as
std::cout << name << " says Woof\n";
## Inheritance
I know you're supposed to use "object oriented" design in this exercise, but I think if you have one object it's "object oriented", no? I like to avoid inheritance where I can, because I usually find the resulting code simpler to read and to maintain, and it tends to be much more efficient. For example, note how this code is much shorter:
class Kennel {
public:
void AddCat(std::string name) {
kennelList_.emplace_back(Species::Cat, std::move(name));
}
void AddDog(std::string name) {
kennelList_.emplace_back(Species::Dog, std::move(name));
}
void RollCall() {
for (const auto& animal : kennelList_) {
std::cout << animal.name;
switch (animal.species) {
case Species::Dog:
std::cout << " says Woof\n";
break;
case Species::Cat:
std::cout << " says Meow\n";
break;
}
}
}
private:
enum class Species { Dog, Cat };
struct Animal {
Species species;
std::string name;
Animal(Species s, std::string n)
: species(s), name(std::move(n)) {}
};
std::vector <Animal> kennelList_;
};
Besides simpler code, it is also more efficient. Not that it matters at all in this application, but I like to think about efficiency. Feel free to skip the rest of this section, you're just starting out, but maybe you find this interesting.
One of the most important concepts in modern hardware is data locality. Data that is close together is faster to process because it plays better with the cache. The other important concept is branch prediction. Modern hardware tries to predict what the outcome of a conditional statement is, and will start executing that code even if it hasn't done the computations yet that the condition depends on. If it fails to predict correctly, stuff gets thrown out and it starts to process the correct branch. If it can predict well, if statements don't delay things all that much. Otherwise, it's good to prevent ifs.
Note what happens when processing the elements of a std::vector<Kennel*>: the pointers themselves are contiguous in memory, it's easy to get them. But they point at things allocated independently, and therefore possibly not contiguous. The RollCall function must fetch these objects (cache is ineffective here), get their virtual function table pointer, look up the address for the makeSound method, then call the method (branch prediction is not applicable, because the CPU is literally waiting for a pointer that points at the code to be executed).
Compare this to what happens in the simpler code above: the Animal objects are all contiguous in memory (because we hold the items directly). The RollCall method loops over these, and one of two code branches is executed depending on a value in it. The cache and branch prediction can do their thing here.
Though, since what is being done inside the loop is writing to stdout, all of the above is way fast in comparison, and the efficiency matters little here.
• Cris: Kind of torn here - lots of good information on the coding side but I have some different opinions on the code intent side. Using inheritance to make your life easier rather than being dogmatic about Object Orientation is something I agree with - to a point. That point is future maintainability. Under the current exercise, each animal only differs by the sound it makes ("Woof", "Meow", "Neigh") - with proper inheritance, creating a new Animal sub-class means just adjusting a single in-sub-class constant for sound against an abstract function that takes a string argument.
– AJD
Feb 23, 2018 at 5:42
• (cont.) However, you totally destroyed the benefits of OOP in your example code (and you have acknowledged that in a sideways way). In my example, adding a Horse merely means creating one new sub-class - and if abstracted properly (std::cout << name << " says " + sound << std::endl;) it means only adding a single constant string. In your example, adding a Horse means important changes to three places in the code. And that doesn't address what happens if we also want to add an Eat or Move function.
– AJD
Feb 23, 2018 at 5:48
• (cont.) Yes, the exercise wanted AddCat and AddDog - which is valid for a question aimed at those learning OOP in stages. But I also read into this a graduated learning exercise in OOP. So a strong foundation (X is a Y, X has a Z) is more important than some short-cuts to achieve what is, in the end, a contrived answer.
– AJD
Feb 23, 2018 at 5:51
• @AJD, you make some good arguments. Maybe I sounded excessively dismissive about inheritance. There are certainly some excellent applications, it's just that this is not one of them. I understand that Kennel is an exercise to learn about OOP, but, IMO, it teaches the wrong use of inheritance. Here inheritance is used for polymorphism, which probably stems from a C++ design issue. Here is a better way to do polymorphism in C++. -- I'm certainly not this negative about inheritance. Feb 23, 2018 at 21:37
• Your argument that "if abstracted properly, it means only adding a single constant string" means that there should be no inheritance at all: the animal sounds are data, and data should not be encoded in the program structure. The animals should really be a (dynamically updateable) list of properties. On the other hand, if the animal behavior were really specified through code, then inheritance and virtual functions can be quite powerful. It's not that you only need to create a new class, it's that the user of your library can add a new class, and never touch your code. That is fantastic.(cont) Feb 23, 2018 at 21:37
Think of the phrase : public to mean "is a".
A dog is not a kennel, nor is a cat, so Dog and Cat should not inherit from Kennel. You need a class called Animal. Cats and dogs are animals, so Cat and Dog should derive from Animal. Animals are contained in a kennel.
Further to what Jive Dadson mentioned.
You can reduce your methods in Kennel to
void AddAnimal(Animal *animal);
This could be called by
kennel.AddAnimal(new Cat("Garfield")); // or new Dog(), as appropriate
Your roll call remains the same because your Animal class has the overridable method virtual void makeSound(std::string name) { }. And this Animal class, not the Kennel class, contains the name (an Animal has a name):
protected:
std::string name;
Using the correct abstraction makes adding other animals (e.g. Horse, Snake) easy and logical to do. Using your current code, adding a Horse means writing the Horse class and then an AddHorse method. With the abstraction I have suggested, all you do is write the Horse class (class Horse : public Animal) and your Kennel class just works as before.
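As a minimal sketch of that design (ownership via std::unique_ptr, as other comments here recommend; the names are illustrative):
#include <iostream>
#include <memory>
#include <string>
#include <vector>
// An animal has a name and makes a sound; it is not a kennel.
class Animal {
public:
    explicit Animal(std::string name) : name_(std::move(name)) {}
    virtual ~Animal() = default;        // virtual: destroyed through base pointers
    virtual void makeSound() const = 0; // pure virtual: every animal must speak
protected:
    std::string name_;
};
class Dog : public Animal {
public:
    using Animal::Animal;
    void makeSound() const override { std::cout << name_ << " says Woof\n"; }
};
class Cat : public Animal {
public:
    using Animal::Animal;
    void makeSound() const override { std::cout << name_ << " says Meow\n"; }
};
// A kennel contains animals.
class Kennel {
public:
    void AddAnimal(std::unique_ptr<Animal> animal) { animals_.push_back(std::move(animal)); }
    void RollCall() const {
        for (const auto& a : animals_) a->makeSound();
    }
private:
    std::vector<std::unique_ptr<Animal>> animals_;
};
int main() {
    Kennel kennel;
    kennel.AddAnimal(std::make_unique<Cat>("Garfield"));
    kennel.AddAnimal(std::make_unique<Dog>("Kipper"));
    kennel.RollCall(); // Garfield says Meow / Kipper says Woof
}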
• Good suggestion! I'm not sure how comfortable the horse will be, in a pen that was built for cats and dogs, though... ;-) Feb 20, 2018 at 9:55
• @TobySpeight It's probably a miniature horse :) Feb 20, 2018 at 14:39
• Use a smart pointer instead of a raw one to avoid memory leaks Feb 20, 2018 at 15:06
• … or better yet, don’t use any (unexposed) pointers here. If you need a polymorphic class, make it manage its own memory by design. This isn’t the most trivial thing to implement, nor what text books show but it’s the better solution in real projects most of the time. Feb 20, 2018 at 15:29
• @CrisLuengo - easily solved: AddCat( string) just calls AddAnimal(new Cat(string))! If the exercise is to demonstrate knowledge or learning about OOP, then understanding that a Cat is a Animal as opposed to some random object that is thrown in a Kennel is a significant part of that learning. Won't be the first time a question has focussed on an incomplete understanding.
– AJD
Feb 21, 2018 at 4:21
Supplementing my earlier answer (rather than editing it and negating the comments so far). After the comments exchange between myself and Cris Luengo, I thought there was some additional information that would be useful in this Code Review.
A key point made by Cris is that a pure approach to OOP is not appropriate for this simple example. In the real world I would largely agree, although saying a Dog can be inherited from Kennel is stretching the maintainability a tad. I will offer some revised code below on the basis that:
• This is part of a graduated learning exercise
• The code base will be used later to expand upon OOP fundamentals
• A pure OOP approach is warranted.
Cris's excellent points about code efficiency must be considered if trying to do this in the real world.
class Mammal
{
public:
    Mammal(std::string newName){
        name = newName;
    }
    ~Mammal() { };
    void makeSound()
    {
        std::cout << name << " says " + noise << std::endl;
    };
private:
protected:
    std::string name;
    std::string noise;
};
// Kennel stores Mammal pointers, so Mammal must be defined before it.
class Kennel
{
public:
    Kennel() { };
    ~Kennel(){
        for (auto i : KennelList) {
            delete i;
        }
    };
    void ReceiveAnimal(Mammal * newAnimal){
        KennelList.push_back(newAnimal);
    };
    void RollCall(){
        for (unsigned int i = 0; i < KennelList.size(); ++i){
            KennelList[i]->makeSound();
        }
    };
private:
    std::vector <Mammal*> KennelList;
protected:
};
//Dog inherits Mammal
class Dog :public Mammal
{
public:
    Dog(std::string dogName): Mammal(dogName)
    {
        noise = "Woof";
        // name is already set by the Mammal constructor, so no assignment is needed here.
    }
    //~Dog() { };
};
//Cat inherits Mammal
class Cat :public Mammal
{
public:
    Cat(std::string catName): Mammal(catName)
    {
        noise = "Meow";
    }
    //~Cat() { };
};
I may have been slightly inefficient in my coding above; I don't know the language well enough to deal with the abstracted destructor and the abstracted constructor. But Visual Studio did not complain (though I have not run it).
The main would now look like:
int main()
{
Kennel kennel;
kennel.ReceiveAnimal(new Cat("Puss in Boots"));
kennel.RollCall();
}
An advantage of this approach is that if you already have an animal, you can now just pass it in: kennel.ReceiveAnimal(myExistingAnimal);
I have done this extended answer to look at pure OOP and code maintainability as if this was a large endeavour (again, Cris's points about the level of effort for this simple example should be considered).
If the Kennel decided to take new animals (e.g. a Fox), then simply add a new class (which can be as simple as the following code):
class Fox :public Mammal
{
public:
Fox(std::string foxName): Mammal(foxName)
{
noise = "Ha Ha Ha! Boom! Boom!";
} // see my previous notes about inexperience with this language and assuming name will be handled by superclass.
};
Mammals give birth. Using the full OOP fundamentals, you can modify the superclass with a new method, which means the subclasses will have this new functionality. The bit I don't know because of my inexperience is how to constrain the new method to ensure it returns a new instance of the subclass rather than the superclass.
class Mammal
{
public:
Mammal(std::string newName){
name = newName;
}
~Mammal() { };
void makeSound()
{
std::cout << name << " says " + noise << std::endl;
};
Mammal* giveBirth(std::string newName) {
return new Mammal(newName); // pardon my ignorance here but you get the gist
};
private:
protected:
std::string name;
std::string noise;
};
• (I hadn't seen this answer previously!) You need to make ~Mammal() virtual to ensure that the destructor of the correct class is called when destroying through a pointer or handle to the base class. You can even make it pure virtual to make sure no object of class Mammal is created: virtual ~Mammal(); (i.e. no function body). Sep 26, 2019 at 22:41
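A minimal sketch of that suggestion applied to the Mammal class above (assumes <string> is included):
class Mammal {
public:
    Mammal(std::string newName) : name(std::move(newName)) {}
    virtual ~Mammal() = 0; // pure virtual: no bare Mammal objects can be created
protected:
    std::string name;
    std::string noise;
};
// A pure virtual destructor still needs a definition, because every
// derived destructor calls it.
Mammal::~Mammal() {}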
#pragma once
#ifndef KENNEL_H
#define KENNEL_H
Only use one of these, preferably #pragma once. No need for #ifndef ... #endif.
|
2022-09-25 01:23:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18839921057224274, "perplexity": 3138.146840188682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00371.warc.gz"}
|
https://ask.sagemath.org/answers/36345/revisions/
|
Revision history [back]
Regarding the first error, it is fortunate that Sage does not preparse what is inside a string (if it was the case before 7.3, then it was a huge bug). If you want your code within a string to be preparsed, just use the preparse Sage function, for example:
sage: var('a x')
(a, x)
sage: print eval(preparse('16*a^2*x^2'))
16*a^2*x^2
Regarding your second error, well, opr is indeed not defined.
Regarding the first error, it is fortunate that Sage does not preparse what is inside a string (if it was the case before 7.3, then it was a huge bug). If you want your code within a string to be preparsed, just use the preparse Sage function, for example:
sage: var('a x')
(a, x)
sage: print eval(preparse('16*a^2*x^2'))
16*a^2*x^2
Regarding your second error, well, opr is indeed not defined. Note that you have to pass some context to sage_eval:
sage: var('a x')
(a, x)
sage: sage_eval('16*a^2*x^2')
NameError: name 'a' is not defined
sage: sage_eval('16*a^2*x^2', locals={'x':x,'a':a})
16*a^2*x^2
No. 3 Revision by John Palmieri (http://www.math.washin...)
Regarding the first error, it is fortunate that Sage does not preparse what is inside a string (if it was the case before 7.3, then it was a huge bug). If you want your code within a string to be preparsed, just use the preparse Sage function, for example:
sage: var('a x')
(a, x)
sage: print eval(preparse('16*a^2*x^2'))
16*a^2*x^2
Regarding your second error, well, opr is indeed not defined. Note that you have to pass some context to sage_eval:
sage: var('a x')
(a, x)
sage: sage_eval('16*a^2*x^2')
NameError: name 'a' is not defined
sage: sage_eval('16*a^2*x^2', locals={'x':x,'a':a})
16*a^2*x^2
|
2019-09-16 02:34:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2397027462720871, "perplexity": 5699.198566889656}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572471.35/warc/CC-MAIN-20190916015552-20190916041552-00063.warc.gz"}
|
https://www.taylorfrancis.com/books/9781315274294/chapters/10.1201/9781315274294-13
|
chapter
4 Pages
## The sample mean over the $I$ batches
Hence the sample mean for the jth strength is the average of the batch means $\bar{Y}_{ij}$ over the $I$ batches:
$$\bar{Y}_{\cdot j} = \frac{1}{I}\sum_{i=1}^{I} \bar{Y}_{ij}. \qquad (5.5.5)$$
Since batches are independent of one another, $\bar{Y}_{1j}, \ldots, \bar{Y}_{Ij}$ are i.i.d. normal with mean $\mu + S_j$ and variance as given in (5.5.4). Hence the variance of $\bar{Y}_{\cdot j}$ is given by
$$\operatorname{var}(\bar{Y}_{\cdot j}) = \frac{1}{I}\left(\sigma_B^2 + \sigma_{BS}^2 + \frac{\sigma_e^2}{K}\right).$$
An unbiased estimator of the variance of $\bar{Y}_{\cdot j}$ can be obtained by substituting $\sigma_B^2$ and $\sigma_{BS}^2$ with their ANOVA estimates given in (5.5.3), which is
$$\widehat{\operatorname{var}}(\bar{Y}_{\cdot j}) = \frac{1}{I}\left\{\frac{1}{K}\,\mathrm{MSE} + \frac{1}{K}\left[\mathrm{MS(BS)} - \mathrm{MSE}\right] + \frac{1}{JK}\left[\mathrm{MSB} - \mathrm{MS(BS)}\right]\right\} = \frac{1}{IJK}\left[(J-1)\,\mathrm{MS(BS)} + \mathrm{MSB}\right].$$
|
2020-05-29 08:00:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9249238967895508, "perplexity": 6165.61399729528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347402457.55/warc/CC-MAIN-20200529054758-20200529084758-00280.warc.gz"}
|
https://ita.skanev.com/10/01/02.html
|
# Exercise 10.1.2
Explain how to implement two stacks in one array $A[1..n]$ in such a way that neither stack overflows unless the total number of elements in both stacks together is $n$. The PUSH and POP operations should run in $O(1)$ time.
The first stack starts at $1$ and grows up towards $n$, while the second starts from $n$ and grows down towards $1$. Stack overflow happens when an element is pushed while the two stack pointers are adjacent.
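A minimal C++ sketch of this scheme (0-indexed instead of the exercise's $A[1..n]$; the names are mine):
#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <vector>
// Two stacks in one array: stack 0 grows up from the left end, stack 1
// grows down from the right end. Overflow only when the array is full.
class TwoStacks {
public:
    explicit TwoStacks(std::size_t n) : buf_(n), top0_(0), top1_(n) {}
    void push(int which, int value) {
        if (top0_ == top1_)                    // tops met: both stacks full
            throw std::overflow_error("both stacks are full");
        if (which == 0) buf_[top0_++] = value; // left stack grows right
        else            buf_[--top1_] = value; // right stack grows left
    }
    int pop(int which) {
        if (which == 0) {
            if (top0_ == 0) throw std::underflow_error("stack 0 empty");
            return buf_[--top0_];
        }
        if (top1_ == buf_.size()) throw std::underflow_error("stack 1 empty");
        return buf_[top1_++];
    }
private:
    std::vector<int> buf_;
    std::size_t top0_; // next free slot for stack 0
    std::size_t top1_; // last used slot of stack 1 (buf_.size() means empty)
};
int main() {
    TwoStacks s(4);
    s.push(0, 1); s.push(0, 2); // left stack: 1 2
    s.push(1, 9); s.push(1, 8); // right stack: 9 8 (8 on top); array now full
    std::cout << s.pop(1) << ' ' << s.pop(0) << '\n'; // prints "8 2"
}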
|
2020-01-23 13:19:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49591973423957825, "perplexity": 528.3219201454245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610919.33/warc/CC-MAIN-20200123131001-20200123160001-00123.warc.gz"}
|
http://sagemath.org/doc/reference/plane_curves/sage/schemes/elliptic_curves/gal_reps.html
|
# Galois representations attached to elliptic curves¶
Given an elliptic curve $$E$$ over $$\QQ$$ and a rational prime number $$p$$, the $$p^n$$-torsion points $$E[p^n]$$ of $$E$$ form a representation of the absolute Galois group $$G_{\QQ}$$ of $$\QQ$$. As $$n$$ varies we obtain the Tate module $$T_p E$$, which is a representation of $$G_\QQ$$ on a free $$\ZZ_p$$-module of rank $$2$$. As $$p$$ varies the representations are compatible.
Currently Sage can decide whether the Galois module $$E[p]$$ is reducible, i.e., if $$E$$ admits an isogeny of degree $$p$$, and whether the image of the representation on $$E[p]$$ is surjective onto $$\text{Aut}(E[p]) = GL_2(\mathbb{F}_p)$$.
The following are the most useful functions for the class GaloisRepresentation.
For the reducibility:
• is_reducible(p)
• is_irreducible(p)
• reducible_primes()
For the image:
• is_surjective(p)
• non_surjective()
• image_type(p)
For the classification of the representation
• is_semistable(p)
• is_unramified(p, ell)
• is_crystalline(p)
EXAMPLES:
sage: E = EllipticCurve('196a1')
sage: rho = E.galois_representation()
sage: rho.is_irreducible(7)
True
sage: rho.is_reducible(3)
True
sage: rho.is_irreducible(2)
True
sage: rho.is_surjective(2)
False
sage: rho.is_surjective(3)
False
sage: rho.is_surjective(5)
True
sage: rho.reducible_primes()
[3]
sage: rho.non_surjective()
[2, 3]
sage: rho.image_type(2)
'The image is cyclic of order 3.'
sage: rho.image_type(3)
'The image is contained in a Borel subgroup as there is a 3-isogeny.'
sage: rho.image_type(5)
'The image is all of GL_2(F_5).'
For semi-stable curves it is known that the representation is surjective if and only if it is irreducible:
sage: E = EllipticCurve('11a1')
sage: rho = E.galois_representation()
sage: rho.non_surjective()
[5]
sage: rho.reducible_primes()
[5]
For CM curves it is not true that there are only finitely many primes for which the Galois representation mod p is not surjective onto $$GL_2(\mathbb{F}_p)$$:
sage: E = EllipticCurve('27a1')
sage: rho = E.galois_representation()
sage: rho.non_surjective()
[0]
sage: rho.reducible_primes()
[3]
sage: E.has_cm()
True
sage: rho.image_type(11)
'The image is contained in the normalizer of a non-split Cartan group. (cm)'
REFERENCES:
[Se1] Jean-Pierre Serre, Propriétés galoisiennes des points d’ordre fini des courbes elliptiques. Invent. Math. 15 (1972), no. 4, 259–331.
[Se2] Jean-Pierre Serre, Sur les représentations modulaires de degré 2 de $$\text{Gal}(\bar\QQ/\QQ)$$. Duke Math. J. 54 (1987), no. 1, 179–230.
[Co] Alina Carmen Cojocaru, On the surjectivity of the Galois representations associated to non-CM elliptic curves. With an appendix by Ernst Kani. Canad. Math. Bull. 48 (2005), no. 1, 16–31.
AUTHORS:
• chris wuthrich (02/10) - moved from ell_rational_field.py.
class sage.schemes.elliptic_curves.gal_reps.GaloisRepresentation(E)
The compatible family of Galois representation attached to an elliptic curve over the rational numbers.
Given an elliptic curve $$E$$ over $$\QQ$$ and a rational prime number $$p$$, the $$p^n$$-torsion points $$E[p^n]$$ of $$E$$ form a representation of the absolute Galois group. As $$n$$ varies we obtain the Tate module $$T_p E$$, which is a representation of the absolute Galois group on a free $$\ZZ_p$$-module of rank $$2$$. As $$p$$ varies the representations are compatible.
EXAMPLES:
sage: rho = EllipticCurve('11a1').galois_representation()
sage: rho
Compatible family of Galois representations associated to the Elliptic Curve defined by y^2 + y = x^3 - x^2 - 10*x - 20 over Rational Field
elliptic_curve()
The elliptic curve associated to this representation.
EXAMPLES:
sage: E = EllipticCurve('11a1')
sage: rho = E.galois_representation()
sage: rho.elliptic_curve() == E
True
image_classes(p, bound=10000)
This function returns, given the representation $$\rho$$, a list of $$p$$ values that add up to 1, representing the frequency of the conjugacy classes of the projective image of $$\rho$$ in $$PGL_2(\mathbb{F}_p)$$.
Let $$M$$ be a matrix in $$GL_2(\mathbb{F}_p)$$, then define $$u(M) = \text{tr}(M)^2/\det(M)$$, which only depends on the conjugacy class of $$M$$ in $$PGL_2(\mathbb{F}_p)$$. Hence this defines a map $$u: PGL_2(\mathbb{F}_p) \to \mathbb{F}_p$$, which is almost a bijection between conjugacy classes of the source and $$\mathbb{F}_p$$ (the elements of order $$p$$ and the identity map to $$4$$ and both classes of elements of order 2 map to 0).
This function returns the frequency with which the values of $$u$$ appeared among the images of the Frobenius elements at the good primes $$\ell\neq p$$ below a given bound.
INPUT:
• a prime p
• a natural number bound (optional, default=10000)
OUTPUT:
• a list of $$p$$ real numbers in the interval $$[0,1]$$ adding up to 1
EXAMPLES:
sage: E = EllipticCurve('14a1')
sage: rho = E.galois_representation()
sage: rho.image_classes(5)
[0.2095, 0.1516, 0.2445, 0.1728, 0.2217]
sage: E = EllipticCurve('11a1')
sage: rho = E.galois_representation()
sage: rho.image_classes(5)
[0.2467, 0.0000, 0.5049, 0.0000, 0.2484]
sage: EllipticCurve('27a1').galois_representation().image_classes(5)
[0.5839, 0.1645, 0.0000, 0.1702, 0.08143]
sage: EllipticCurve('30a1').galois_representation().image_classes(5)
[0.1956, 0.1801, 0.2543, 0.1728, 0.1972]
sage: EllipticCurve('32a1').galois_representation().image_classes(5)
[0.6319, 0.0000, 0.2492, 0.0000, 0.1189]
sage: EllipticCurve('900a1').galois_representation().image_classes(5)
[0.5852, 0.1679, 0.0000, 0.1687, 0.07824]
sage: EllipticCurve('441a1').galois_representation().image_classes(5)
[0.5860, 0.1646, 0.0000, 0.1679, 0.08150]
sage: EllipticCurve('648a1').galois_representation().image_classes(5)
[0.3945, 0.3293, 0.2388, 0.0000, 0.03749]
sage: EllipticCurve('784h1').galois_representation().image_classes(7)
[0.5049, 0.0000, 0.0000, 0.0000, 0.4951, 0.0000, 0.0000]
sage: EllipticCurve('49a1').galois_representation().image_classes(7)
[0.5045, 0.0000, 0.0000, 0.0000, 0.4955, 0.0000, 0.0000]
sage: EllipticCurve('121c1').galois_representation().image_classes(11)
[0.1001, 0.0000, 0.0000, 0.0000, 0.1017, 0.1953, 0.1993, 0.0000, 0.0000, 0.2010, 0.2026]
sage: EllipticCurve('121d1').galois_representation().image_classes(11)
[0.08869, 0.07974, 0.08706, 0.08137, 0.1001, 0.09439, 0.09764, 0.08218, 0.08625, 0.1017, 0.1009]
sage: EllipticCurve('441f1').galois_representation().image_classes(13)
[0.08232, 0.1663, 0.1663, 0.1663, 0.08232, 0.0000, 0.1549, 0.0000, 0.0000, 0.0000, 0.0000, 0.1817, 0.0000]
REMARKS:
Conjugacy classes of subgroups of $$PGL_2(\mathbb{F}_5)$$
For the case $$p=5$$, the order of an element almost determines the value of $$u$$:
| $$u$$ | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| order | 2 | 3 | 4 | 6 | 1 or 5 |
Here we give the full table of all conjugacy classes of subgroups, with the values that image_classes should give (as bound tends to $$\infty$$). Comparing with the output of the above examples, it is now easy to guess what the image is.
| subgroup | order | frequencies of values of $$u$$ |
|---|---|---|
| trivial | 1 | [0.0000, 0.0000, 0.0000, 0.0000, 1.000] |
| cyclic | 2 | [0.5000, 0.0000, 0.0000, 0.0000, 0.5000] |
| cyclic | 2 | [0.5000, 0.0000, 0.0000, 0.0000, 0.5000] |
| cyclic | 3 | [0.0000, 0.6667, 0.0000, 0.0000, 0.3333] |
| Klein | 4 | [0.7500, 0.0000, 0.0000, 0.0000, 0.2500] |
| cyclic | 4 | [0.2500, 0.0000, 0.5000, 0.0000, 0.2500] |
| Klein | 4 | [0.7500, 0.0000, 0.0000, 0.0000, 0.2500] |
| cyclic | 5 | [0.0000, 0.0000, 0.0000, 0.0000, 1.000] |
| cyclic | 6 | [0.1667, 0.3333, 0.0000, 0.3333, 0.1667] |
| $$S_3$$ | 6 | [0.5000, 0.3333, 0.0000, 0.0000, 0.1667] |
| $$S_3$$ | 6 | [0.5000, 0.3333, 0.0000, 0.0000, 0.1667] |
| $$D_4$$ | 8 | [0.6250, 0.0000, 0.2500, 0.0000, 0.1250] |
| $$D_5$$ | 10 | [0.5000, 0.0000, 0.0000, 0.0000, 0.5000] |
| $$A_4$$ | 12 | [0.2500, 0.6667, 0.0000, 0.0000, 0.08333] |
| $$D_6$$ | 12 | [0.5833, 0.1667, 0.0000, 0.1667, 0.08333] |
| Borel | 20 | [0.2500, 0.0000, 0.5000, 0.0000, 0.2500] |
| $$S_4$$ | 24 | [0.3750, 0.3333, 0.2500, 0.0000, 0.04167] |
| $$PSL_2$$ | 60 | [0.2500, 0.3333, 0.0000, 0.0000, 0.4167] |
| $$PGL_2$$ | 120 | [0.2083, 0.1667, 0.2500, 0.1667, 0.2083] |
image_type(p)
Returns a string describing the image of the mod-p representation. The result is provably correct, but only indicates what sort of an image we have. If one wishes to determine the exact group, one needs to work a bit harder. The probabilistic method of image_classes or Sutherland's galrep package can give a very good guess of what the image should be.
INPUT:
• p a prime number
OUTPUT:
• a string.
EXAMPLES
sage: E = EllipticCurve('14a1')
sage: rho = E.galois_representation()
sage: rho.image_type(5)
'The image is all of GL_2(F_5).'
sage: E = EllipticCurve('11a1')
sage: rho = E.galois_representation()
sage: rho.image_type(5)
'The image is meta-cyclic inside a Borel subgroup as there is a 5-torsion point on the curve.'
sage: EllipticCurve('27a1').galois_representation().image_type(5)
'The image is contained in the normalizer of a non-split Cartan group. (cm)'
sage: EllipticCurve('30a1').galois_representation().image_type(5)
'The image is all of GL_2(F_5).'
sage: EllipticCurve("324b1").galois_representation().image_type(5)
'The image in PGL_2(F_5) is the exceptional group S_4.'
sage: E = EllipticCurve([0,0,0,-56,4848])
sage: rho = E.galois_representation()
sage: rho.image_type(5)
'The image is contained in the normalizer of a split Cartan group.'
sage: EllipticCurve('49a1').galois_representation().image_type(7)
'The image is contained in a Borel subgroup as there is a 7-isogeny.'
sage: EllipticCurve('121c1').galois_representation().image_type(11)
'The image is contained in a Borel subgroup as there is a 11-isogeny.'
sage: EllipticCurve('121d1').galois_representation().image_type(11)
'The image is all of GL_2(F_11).'
sage: EllipticCurve('441f1').galois_representation().image_type(13)
'The image is contained in a Borel subgroup as there is a 13-isogeny.'
sage: EllipticCurve([1,-1,1,-5,2]).galois_representation().image_type(5)
'The image is contained in the normalizer of a non-split Cartan group.'
sage: EllipticCurve([0,0,1,-25650,1570826]).galois_representation().image_type(5)
'The image is contained in the normalizer of a split Cartan group.'
sage: EllipticCurve([1,-1,1,-2680,-50053]).galois_representation().image_type(7) # the dots (...) in the output fix #11937 (installed 'Kash' may give additional output); long time (2s on sage.math, 2014)
'The image is a... group of order 18.'
sage: EllipticCurve([1,-1,0,-107,-379]).galois_representation().image_type(7) # the dots (...) in the output fix #11937 (installed 'Kash' may give additional output); long time (1s on sage.math, 2014)
'The image is a... group of order 36.'
sage: EllipticCurve([0,0,1,2580,549326]).galois_representation().image_type(7)
'The image is contained in the normalizer of a split Cartan group.'
Test trac ticket #14577:
sage: EllipticCurve([0, 1, 0, -4788, 109188]).galois_representation().image_type(13)
'The image in PGL_2(F_13) is the exceptional group S_4.'
Test trac ticket #14752:
sage: EllipticCurve([0, 0, 0, -1129345880,-86028258620304]).galois_representation().image_type(11)
'The image is contained in the normalizer of a non-split Cartan group.'
For $$p=2$$:
sage: E = EllipticCurve('11a1')
sage: rho = E.galois_representation()
sage: rho.image_type(2)
'The image is all of GL_2(F_2), i.e. a symmetric group of order 6.'
sage: rho = EllipticCurve('14a1').galois_representation()
sage: rho.image_type(2)
'The image is cyclic of order 2 as there is exactly one rational 2-torsion point.'
sage: rho = EllipticCurve('15a1').galois_representation()
sage: rho.image_type(2)
'The image is trivial as all 2-torsion points are rational.'
sage: rho = EllipticCurve('196a1').galois_representation()
sage: rho.image_type(2)
'The image is cyclic of order 3.'
$$p=3$$:
sage: rho = EllipticCurve('33a1').galois_representation()
sage: rho.image_type(3)
'The image is all of GL_2(F_3).'
sage: rho = EllipticCurve('30a1').galois_representation()
sage: rho.image_type(3)
'The image is meta-cyclic inside a Borel subgroup as there is a 3-torsion point on the curve.'
sage: rho = EllipticCurve('50b1').galois_representation()
sage: rho.image_type(3)
'The image is contained in a Borel subgroup as there is a 3-isogeny.'
sage: rho = EllipticCurve('3840h1').galois_representation()
sage: rho.image_type(3)
'The image is contained in a dihedral group of order 8.'
sage: rho = EllipticCurve('32a1').galois_representation()
sage: rho.image_type(3)
'The image is a semi-dihedral group of order 16, gap.SmallGroup([16,8]).'
ALGORITHM: Mainly based on Serre’s paper.
is_crystalline(p)
Returns True if the $$p$$-adic Galois representation to $$GL_2(\ZZ_p)$$ is crystalline.
For an elliptic curve $$E$$, this is to ask whether $$E$$ has good reduction at $$p$$.
INPUT:
• p a prime
OUTPUT:
• a Boolean
EXAMPLES:
sage: rho = EllipticCurve('64a1').galois_representation()
sage: rho.is_crystalline(5)
True
sage: rho.is_crystalline(2)
False
is_irreducible(p)
Return True if the mod p representation is irreducible.
INPUT:
• p - a prime number
OUTPUT:
• a boolean
EXAMPLES:
sage: rho = EllipticCurve('37b').galois_representation()
sage: rho.is_irreducible(2)
True
sage: rho.is_irreducible(3)
False
sage: rho.is_reducible(2)
False
sage: rho.is_reducible(3)
True
is_ordinary(p)
Returns True if the $$p$$-adic Galois representation to $$GL_2(\ZZ_p)$$ is ordinary, i.e. if the image of the decomposition group in $$\text{Gal}(\bar\QQ/\QQ)$$ above the prime $$p$$ maps into a Borel subgroup.
For an elliptic curve $$E$$, this is to ask whether $$E$$ is ordinary at $$p$$, i.e. good ordinary or multiplicative.
INPUT:
• p a prime
OUTPUT:
• a Boolean
EXAMPLES:
sage: rho = EllipticCurve('11a3').galois_representation()
sage: rho.is_ordinary(11)
True
sage: rho.is_ordinary(5)
True
sage: rho.is_ordinary(19)
False
is_potentially_crystalline(p)
Returns True if the $$p$$-adic Galois representation to $$GL_2(\ZZ_p)$$ is potentially crystalline, i.e. if there is a finite extension $$K/\QQ_p$$ such that the $$p$$-adic representation becomes crystalline.
For an elliptic curve $$E$$, this is to ask whether $$E$$ has potentially good reduction at $$p$$.
INPUT:
• p a prime
OUTPUT:
• a Boolean
EXAMPLES:
sage: rho = EllipticCurve('37b1').galois_representation()
sage: rho.is_potentially_crystalline(37)
False
sage: rho.is_potentially_crystalline(7)
True
is_potentially_semistable(p)
Returns true if the $$p$$-adic Galois representation to $$GL_2(\ZZ_p)$$ is potentially semistable.
For an elliptic curve $$E$$, this always returns True.
INPUT:
• p a prime
OUTPUT:
• a Boolean
EXAMPLES:
sage: rho = EllipticCurve('27a2').galois_representation()
sage: rho.is_potentially_semistable(3)
True
is_quasi_unipotent(p, ell)
Returns True if the Galois representation to $$GL_2(\ZZ_p)$$ is quasi-unipotent at $$\ell\neq p$$, i.e. if there is a finite extension $$K/\QQ$$ such that the inertia group at a place above $$\ell$$ in $$\text{Gal}(\bar\QQ/K)$$ maps into a Borel subgroup.
For a Galois representation attached to an elliptic curve $$E$$, this always returns True.
INPUT:
• p a prime
• ell a different prime
OUTPUT:
• Boolean
EXAMPLES:
sage: rho = EllipticCurve('11a3').galois_representation()
sage: rho.is_quasi_unipotent(11,13)
True
is_reducible(p)
Return True if the mod-p representation is reducible. This is equivalent to the existence of an isogeny defined over $$\QQ$$ of degree $$p$$ from the elliptic curve.
INPUT:
• p - a prime number
OUTPUT:
• a boolean
EXAMPLES:
sage: rho = EllipticCurve('121a').galois_representation()
sage: rho.is_reducible(7)
False
sage: rho.is_reducible(11)
True
sage: EllipticCurve('11a').galois_representation().is_reducible(5)
True
sage: rho = EllipticCurve('11a2').galois_representation()
sage: rho.is_reducible(5)
True
sage: EllipticCurve('11a2').torsion_order()
1
is_semistable(p)
Returns true if the $$p$$-adic Galois representation to $$GL_2(\ZZ_p)$$ is semistable.
For an elliptic curve $$E$$, this is to ask whether $$E$$ has semistable reduction at $$p$$.
INPUT:
• p a prime
OUTPUT:
• a Boolean
EXAMPLES:
sage: rho = EllipticCurve('20a3').galois_representation()
sage: rho.is_semistable(2)
False
sage: rho.is_semistable(3)
True
sage: rho.is_semistable(5)
True
is_surjective(p, A=1000)
Return True if the mod-p representation is surjective onto $$Aut(E[p]) = GL_2(\mathbb{F}_p)$$.
False if it is not, or None if we were unable to determine whether it is or not.
INPUT:
• p - int (a prime number)
• A - int (a bound on the number of a_p to use)
OUTPUT:
• boolean. True if the mod-p representation is surjective and False if not.
EXAMPLES:
sage: rho = EllipticCurve('37b').galois_representation()
sage: rho.is_surjective(2)
True
sage: rho.is_surjective(3)
False
sage: rho = EllipticCurve('121a1').galois_representation()
sage: rho.non_surjective()
[11]
sage: rho.is_surjective(5)
True
sage: rho.is_surjective(11)
False
sage: rho = EllipticCurve('121d1').galois_representation()
sage: rho.is_surjective(5)
False
sage: rho.is_surjective(11)
True
Here is a case, in which the algorithm does not return an answer:
sage: rho = EllipticCurve([0,0,1,2580,549326]).galois_representation()
sage: rho.is_surjective(7)
sage: rho.image_type(7)
'The image is contained in the normalizer of a split Cartan group.'
REMARKS:
1. If $$p \geq 5$$ then the mod-p representation is surjective if and only if the p-adic representation is surjective. When $$p = 2, 3$$ there are counterexamples. See papers of Dokchitsers and Elkies for more details.
2. For the primes $$p=2$$ and 3, this will always answer either True or False. For larger primes it might give None.
is_unipotent(p, ell)
Returns true if the Galois representation to $$GL_2(\ZZ_p)$$ is unipotent at $$\ell\neq p$$, i.e. if the inertia group at a place above $$\ell$$ in $$\text{Gal}(\bar\QQ/\QQ)$$ maps into a Borel subgroup.
For a Galois representation attached to an elliptic curve $$E$$, this returns True if $$E$$ has semi-stable reduction at $$\ell$$.
INPUT:
• p a prime
• ell a different prime
OUTPUT:
• Boolean
EXAMPLES:
sage: rho = EllipticCurve('120a1').galois_representation()
sage: rho.is_unipotent(2,5)
True
sage: rho.is_unipotent(5,2)
False
sage: rho.is_unipotent(5,7)
True
sage: rho.is_unipotent(5,3)
True
sage: rho.is_unipotent(5,5)
Traceback (most recent call last):
...
ValueError: unipotent is not defined for l = p, use semistable instead.
is_unramified(p, ell)
Returns true if the Galois representation to $$GL_2(\ZZ_p)$$ is unramified at $$\ell$$, i.e. if the inertia group at a place above $$\ell$$ in $$\text{Gal}(\bar\QQ/\QQ)$$ has trivial image in $$GL_2(\ZZ_p)$$.
For a Galois representation attached to an elliptic curve $$E$$, this returns True if $$\ell\neq p$$ and $$E$$ has good reduction at $$\ell$$.
INPUT:
• p a prime
• ell another prime
OUTPUT:
• Boolean
EXAMPLES:
sage: rho = EllipticCurve('20a3').galois_representation()
sage: rho.is_unramified(5,7)
True
sage: rho.is_unramified(5,5)
False
sage: rho.is_unramified(7,5)
False
This says that the 5-adic representation is unramified at 7, but the 7-adic representation is ramified at 5.
non_surjective(A=1000)
Returns a list of primes p such that the mod-p representation might not be surjective. If $$p$$ is not in the returned list, then the mod-p representation is provably surjective.
By a theorem of Serre, there are only finitely many primes in this list, except when the curve has complex multiplication.
If the curve has CM, we simply return the sequence [0] and do no further computation.
INPUT:
• A - an integer (default 1000). By increasing this parameter the resulting set might get smaller.
OUTPUT:
• list - if the curve has CM, returns [0]. Otherwise, returns a list of primes where mod-p representation is very likely not surjective. At any prime not in this list, the representation is definitely surjective.
EXAMPLES:
sage: E = EllipticCurve([0, 0, 1, -38, 90]) # 361A
sage: E.galois_representation().non_surjective() # CM curve
[0]
sage: E = EllipticCurve([0, -1, 1, 0, 0]) # X_1(11)
sage: E.galois_representation().non_surjective()
[5]
sage: E = EllipticCurve([0, 0, 1, -1, 0]) # 37A
sage: E.galois_representation().non_surjective()
[]
sage: E = EllipticCurve([0,-1,1,-2,-1]) # 141C
sage: E.galois_representation().non_surjective()
[13]
sage: E = EllipticCurve([1,-1,1,-9965,385220]) # 9999a1
sage: rho = E.galois_representation()
sage: rho.non_surjective()
[2]
sage: E = EllipticCurve('324b1')
sage: rho = E.galois_representation()
sage: rho.non_surjective()
[3, 5]
ALGORITHM: We first find an upper bound $$B$$ on the possible primes. If $$E$$ is semi-stable, we can take $$B=11$$ by a result of Mazur. There is a bound by Serre in the case that the $$j$$-invariant is not integral, in terms of the smallest prime of good reduction. Finally there is an unconditional bound by Cojocaru, which depends on the conductor of $$E$$. For the primes below that bound we call is_surjective.
reducible_primes()
Returns a list of the primes $$p$$ such that the mod-$$p$$ representation is reducible. For all other primes the representation is irreducible.
EXAMPLES:
sage: rho = EllipticCurve('225a').galois_representation()
sage: rho.reducible_primes()
[3]
#### Previous topic
Torsion subgroups of elliptic curves over number fields (including $$\QQ$$)
#### Next topic
Galois representations for elliptic curves over number fields.
|
2014-12-22 10:38:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9009908437728882, "perplexity": 1167.7117300221603}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775085.124/warc/CC-MAIN-20141217075255-00157-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1898536/related-rates-hot-air-balloon
|
Related Rates Hot Air Balloon
I have a related rates problem about a hot air balloon that is rising, and I am asked to determine the rate of change of the angle. I'm having difficulty developing a relationship.
Here is the question:
So far my attempt at this question is the following and I am unsure if it's correct or not.
$$Tan\theta = \frac{y}{450}$$ $$\frac{d}{dt} (Tan\theta) = (\frac{d}{dt})(\frac{y}{450})\cdot\frac{dh}{dt}$$
$$\frac{d}{d\theta}\cdot sec^2\theta=\frac{1}{450}\cdot 2$$
I found $\sec^2$ through the Pythagorean theorem; $$\sec^2\theta = (\frac{h}{a})^2 = 1.444$$
placing it back into
$$\frac{d}{d\theta}\cdot 1.444=\frac{1}{450}\cdot 2$$
$$\frac{d}{d\theta} = \frac{2}{450\cdot(1.444)}$$
giving a final answer of $0.003077$ radians
In degrees $Tan^{-1}(0.003077) = 0.176^\circ$
• If you differentiate both sides you get $\frac{d\theta}{dt} \sec^2 (\theta)=(1/450)\frac{dy}{dt}=2(\frac{1}{450})$. If you use pythag and the fact that sec is 1 over cosine you get $\sec^2(\theta)=1.44$ So you are correct. If you go the path which I showed you will get the same thing. However, you converting to degrees is incorrect. To convert to degrees multiply by $\frac{180 \text{degrees}}{\pi \text{rad}}=1$. – Ahmed S. Attaalla Aug 21 '16 at 3:45
• Actually you got approx. the correct answer in degrees, you just showed an incorrect method for finding it. You got lucky. – Ahmed S. Attaalla Aug 21 '16 at 3:53
• For example $\tan^{-1}(\pi/4)=\sqrt{2}/2$ yet we know $\frac{\pi}{4}$ radians is 45 degrees. Why you got lucky is that $\arctan(x) \approx x$ in radians $x$, for $x$ close to $0$, and your calculator interpreted it as a degree. – Ahmed S. Attaalla Aug 21 '16 at 4:03
Call the height of the balloon $h$. By simple trigonometry we have:
$$\tan(\theta)=\frac{h}{450}$$
$$\theta=\arctan(\frac{h}{450})$$
Differentiate both sides with respect to time $t$. Remember you are trying to find $\frac{d\theta}{dt}$ when $h=300$.
Also, if you want to write $\frac{d\theta}{dt}$ completely in terms of $t$, can you write $h$ in terms of $t$? Hint: distance over time is your average speed; in our case we have a constant vertical speed.
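Putting the pieces together (assuming, as in the comments above, $\frac{dh}{dt}=2$ and a horizontal distance of $450$ in the problem's units):
$$\tan\theta=\frac{h}{450}\;\Rightarrow\;\sec^2\theta\,\frac{d\theta}{dt}=\frac{1}{450}\,\frac{dh}{dt}$$
At $h=300$: $\tan\theta=\frac{300}{450}=\frac{2}{3}$, so $\sec^2\theta=1+\tan^2\theta=\frac{13}{9}\approx 1.444$, and
$$\frac{d\theta}{dt}=\frac{2}{450\cdot 13/9}=\frac{1}{325}\approx 0.003077\ \text{rad per unit time}\approx 0.176^\circ\ \text{per unit time}$$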
|
2020-01-27 00:23:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8308632373809814, "perplexity": 264.1101621567369}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694071.63/warc/CC-MAIN-20200126230255-20200127020255-00266.warc.gz"}
|
http://motls.blogspot.com/2011/12/kronecker-levi-civita-stieltjes.html
|
## Wednesday, December 28, 2011 ... /////
### Kronecker, Levi-Civita, Stieltjes: jubilees
Three famous mathematicians were born or died on December 29th, in a year ending with a "1" or "6":
Leopold Kronecker was born in Legnica, Poland – which belonged to Prussia at that time – to a Jewish family. Once he learned some number theory and algebraic number fields from Peter Gustav Dirichlet, he knew everything he needed to become a manager of some real estate that belonged to his uncle so that he didn't bother to produce any mathematical results for 8 years. ;-)
Later, he returned to work and studied solutions to algebraic equations, extending the work by Galois. Kronecker became a forefather of intuitionism, by rejecting some continuous objects that he found contrived. While I would often count myself as an intuitionist in mathematics, I would disagree with extreme statements by folks like Kronecker. For example, he rejected Weierstrass' "pathological" function that is continuous everywhere but differentiable nowhere. Well, such Brownian-motion-like functions are actually the dominant contributions to Feynman's path integrals today so I wouldn't claim that they "don't exist" or that they are "not important".
Kronecker invented the "Kronecker delta", which we use very often these days. The term "Kronecker symbol" is actually something else, a product of some powers of primes. To continue in this theme, the "Kronecker product" is something very different from the product mentioned in the previous sentence: it is the tensor product of two matrices. The Kronecker-Weber theorem is linked to the Galois group business and shows that the fields based on roots of unity are enough to classify certain Abelian extensions of rational numbers. Kronecker's theorem is either a related result about polynomials or a theorem useful in the research of Diophantine equations. Kronecker's lemma allows one to find certain coefficients for every convergent series so that a new weighted sum vanishes. You may see that his interests were pretty diverse.
Thomas Joannes Stieltjes was born in Zwolle, Netherlands, to a father who was both a politician (a deputy in the Parliament) and a civil engineer responsible for the construction of harbors around Rotterdam. He was skipping classes in Delft. Instead, he was reading... no, it wasn't porn: it was Gauss and Jacobi. Consequently, he failed several key exams. However, as I mentioned, he had a pretty powerful father who could pull a few strings so that Thomas got a job as an assistant at the Leiden Observatory, anyway. It was a very constructive application of nepotism.
He started to exchange lots of letters with Charles Hermite, the guy who gave the name to Hermitian polynomials and operators. They would chat about celestial mechanics but the topic would soon switch to maths and continue to the end of their lives. He would also be able to become a professor without the required diplomas.
Stieltjes designed a generalization of the Riemann integral that is partly named after him; to be balanced, he also masterminded a generalization of both Riemann and Lebesgue integral, the Lebesgue-Stieltjes integral; he generalized the Laplace transform by adding a more general measure to the integral, to the so-called Laplace-Stieltjes transform; he was the ultimate father of continued fractions; he also studied divergent series, differential equations, interpolation, the gamma function, elliptic functions, and created the foundations for the theory of Hilbert spaces.
Other things are named after him, including the Stieltjes matrix (real symmetric positively definite non-diagonal), Stieltjes moment problem (a few numbers may be moments of $x$ if a collection of determinants constructed from those numbers is positive), Stieltjes-Wigert polynomials (some simple hypergeometric polynomials with a cleverly chosen weight), Čebyšev-Markov-Stieltjes inequalities (also related to moments). He co-founded Annales in Toulouse.
Tullio Levi-Civita was born in 1873 into a Jewish Italian family in Padua. His father was a lawyer and ex-senator. In 1900, he was very active in tensor calculus and published a text that was used as a textbook by Albert Einstein. His role for the Levi-Civita symbol doesn't have to be explained. Later, when Einstein was completing the general relativity, Levi-Civita and Einstein would write lots of letters to one another. Einstein found Levi-Civita's mathematical methods elegant.
In 1938, because of his background, Levi-Civita was found politically incorrect by the leader of the Italian Social Republic trained by the Italian Socialist Party – the duce – and was stripped of all jobs and memberships in societies. He died in his apartment in 1941. Einstein once said that two most famous things about Italy were spaghetti and Levi-Civita.
#### snail feedback (4) :
reader Brian G Valentine said...
More generally, Kronecker knew very well that the delta "function" was a distribution in the sense of a weak solution to a PDE.
These people lived in an era when "science" was actually science, and atrocious ideas of Arrhenius and others about CO2 in the atmosphere would never be seriously entertained by anyone sane.
Now I'll get up on my soapbox and deliver an End Year message to all the greenies and global warmers out there wrecking what is left of rational thought in this world:
"You people *STINK* to high heaven"
What a fun post - thanks!
reader Luboš Motl said...
Thanks, Binkley!
Brian, are you sure you didn't confuse him with Dirac? My guess, and it's just a guess, is that Kronecker knew absolute zero about the delta-function. Kronecker's delta is a discrete object! It's equal to 1 or 0 if the two indices are equal or not. No integrals here.
reader Brian G Valentine said...
Ah, you're right Luboš, δ was a distribution to Dirac but not to Kronecker.
Evidently there is a monograph by Levi-Civita on the n-body problem in general relativity, published posthumously. I never saw it, I would like to.
I wonder how Levi-Civita handled it. I wonder if he "got anywhere" with it - meaning describing something observable from the relations - and not just formal manipulation.
The 2-body problem in GR is like the 3-body problem in Newtonian mechanics, insofar as the equations of motion expand out like that and it is extremely difficult to decide how to get anywhere with it.
Levi-Civita was booted out of the Italian academy for being Jewish and anti-fascist.
It won't be all that long before people are booted out of the Royal Society and National Academy of Science in the US for being anti-Greenie fascist. People will justify that in their own minds equally as well as Mussolini's perverted little fascist stooges were able to justify it.
This characteristic has been part of the "human" nature of mobs since the dawn of civilization.
It must serve some evolutionary function, what that is I don't know
|
2017-04-23 21:31:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5510048270225525, "perplexity": 1398.3206661218542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00065-ip-10-145-167-34.ec2.internal.warc.gz"}
|
http://spindynamics.org/wiki/index.php?title=Levelpop.m
|
# Levelpop.m
Equilibrium populations of the energy levels of a user-specified spin at the user-specified temperature. Energies are reported as fractions of kT at the temperature supplied. Note that the function is sensitive to the sign of the magnetogyric ratio (negative for electrons, positive for protons, etc.). Syntax:
[E,P,dP]=levelpop(isotope,field,temperature)
Parameters:
isotope - character string specifying the isotope.
e.g. '1H', '13C', 'E', etc.
field - primary magnet field in Tesla
temperature - spin temperature, Kelvin
Outputs:
E - vector of level energies
P - vector of level populations
dP - vector of level population differences
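For orientation, here is a minimal numerical sketch of what these outputs contain. This is an illustration, not the Spinach source: it assumes simple Zeeman energies E_m = -gamma*hbar*B*m and Boltzmann statistics for a single spin-1/2 proton, and the field and temperature are example inputs.
#include <cmath>
#include <cstdio>
int main() {
    // Physical constants and the 1H magnetogyric ratio (positive for protons)
    const double hbar  = 1.054571817e-34;   // J s
    const double kB    = 1.380649e-23;      // J/K
    const double gamma = 2.6752218744e8;    // rad s^-1 T^-1
    const double B = 14.1, T = 298.0;       // example: 14.1 T magnet, room temperature
    // Zeeman energies E_m = -gamma*hbar*B*m for m = +1/2, -1/2,
    // reported as fractions of kT (matching the convention above)
    const double m[2] = { +0.5, -0.5 };
    double E[2], P[2], Z = 0.0;
    for (int i = 0; i < 2; ++i) {
        E[i] = -gamma * hbar * B * m[i] / (kB * T);
        Z += std::exp(-E[i]);
    }
    for (int i = 0; i < 2; ++i) P[i] = std::exp(-E[i]) / Z;   // Boltzmann populations
    std::printf("E/kT: %+.3e %+.3e\n", E[0], E[1]);
    std::printf("P:    %.9f %.9f\n", P[0], P[1]);
    std::printf("dP:   %.3e\n", P[0] - P[1]);                 // level population difference
    return 0;
}
At 14.1 T and 298 K this gives a fractional population difference of roughly 5e-5, the familiar smallness of thermal nuclear polarization.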
|
2017-12-11 22:57:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6287423372268677, "perplexity": 7129.831866335445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948514113.3/warc/CC-MAIN-20171211222541-20171212002541-00033.warc.gz"}
|
https://scicomp.stackexchange.com/questions/26395/how-to-start-using-lapack-in-c/26396
|
# How to start using LAPACK in c++?
I'm new to computational science and I already have learned basic methods for integration, interpolation, methods like RK4, Numerov etc on c++ but recently my professor asked me to learn how to use LAPACK for solving problems related to matrices. Like for example finding eigenvalues of a complex matrix. I have never used third-party libraries and I almost always write my own functions. I have been searching around for several days but can't find any amateur-friendly guide to lapack. All of them are written in words I don't understand and I don't know why using already written functions should be this complicated. They are full of words like zgeev, dtrsv, etc. and I'm frustrated. I just want to code something like this pseudo-code:
#include <lapack:matrix>
int main(){
LapackComplexMatrix A(n,n);
for...
for...
cin>>A(i,j);
cout<<LapackEigenValues(A);
return 0;
}
I don't know if I'm being silly or amateurish. But again, this shouldn't be that hard, should it? I don't even know whether I should use LAPACK or LAPACK++, or how to install them. (I write code in C++ and have no knowledge of Python or FORTRAN.)
• Perhaps this example would be useful: matrixprogramming.com/files/code/LAPACK Mar 14, 2017 at 13:41
• If you are just starting, maybe it'll be easier to use a library that is simpler like ArrayFire github.com/arrayfire/arrayfire. You can directly call it from C++ and the API's are simpler and I think it can do all operations that LAPACK does. Mar 14, 2017 at 13:54
• In this other post a user proposes his own wrapper FLENS, which has a very nice syntax that could ease your introduction to LAPACK. Nov 1, 2018 at 22:40
• Calling LAPACK functions directly is very tedious and error prone. There are several user-friendly C++ wrappers for LAPACK which provide much easier usage, such as Armadillo. For the specific use case of complex eigen decomposition, see the user-friendly eig_gen() function, which underneath wraps this LAPACK monstrosity, zheev(JOBZ, UPLO, N, A, LDA, W, WORK, LWORK, RWORK, INFO), and reformats the obtained eigenvalues and eigenvectors into standard representations. May 14, 2019 at 15:10
• I second the recommendation of Armadillo. Saved me countles hours of pain. Aug 8 at 6:48
I'm going to disagree with some of the other answers and say that I believe that figuring out how to use LAPACK is important in the field of scientific computing.
However, there is a large learning curve to using LAPACK. This is because it is written at a very low level. The disadvantage of that is that it seems very cryptic, and not pleasant to the senses. The advantage of it is that the interface is unambiguous and basically never changes. Additionally, implementations of LAPACK, such as the Intel Math Kernel Library are really fast.
For my own purposes, I have my own higher level C++ classes which wrap around LAPACK subroutines. Many scientific libraries also use LAPACK underneath. Sometimes it's easier to just use them, but in my opinion there's a lot of value in understanding the tool underneath. To that end, I've provided a small working example written in C++ using LAPACK to get you started. This works in Ubuntu, with the liblapack3 package installed, and other necessary packages for building. It can probably be used in most Linux distributions, but installation of LAPACK and linking against it can vary.
Here's the file test_lapack.cpp
#include <iostream>
#include <fstream>
using namespace std;
// dgeev_ is a symbol in the LAPACK library files
extern "C" {
extern int dgeev_(char*,char*,int*,double*,int*,double*, double*, double*, int*, double*, int*, double*, int*, int*);
}
int main(int argc, char** argv){
// check for an argument
if (argc<2){
cout << "Usage: " << argv[0] << " " << " filename" << endl;
return -1;
}
int n,m;
double *data;
// read in a text file that lists the matrix row by row
// and store it in column-major order, as LAPACK expects
ifstream fin(argv[1]);
if (!fin.is_open()){
cout << "Failed to open " << argv[1] << endl;
return -1;
}
fin >> n >> m; // n is the number of rows, m the number of columns
data = new double[n*m];
for (int i=0;i<n;i++){
for (int j=0;j<m;j++){
fin >> data[j*n+i];
}
}
if (fin.fail() || fin.eof()){
cout << "Error while reading " << argv[1] << endl;
return -1;
}
fin.close();
// check that matrix is square
if (n != m){
cout << "Matrix is not square" <<endl;
return -1;
}
// allocate data
char Nchar='N';
double *eigReal=new double[n];
double *eigImag=new double[n];
double *vl=nullptr, *vr=nullptr;   // not referenced since JOBVL = JOBVR = 'N'
int one=1;
int lwork=6*n;
double *work=new double[lwork];
int info;
// calculate eigenvalues using the DGEEV subroutine
dgeev_(&Nchar,&Nchar,&n,data,&n,eigReal,eigImag,
vl,&one,vr,&one,
work,&lwork,&info);
// check for errors
if (info!=0){
cout << "Error: dgeev returned error code " << info << endl;
return -1;
}
// output eigenvalues to stdout
cout << "--- Eigenvalues ---" << endl;
for (int i=0;i<n;i++){
cout << "( " << eigReal[i] << " , " << eigImag[i] << " )\n";
}
cout << endl;
// deallocate
delete [] data;
delete [] eigReal;
delete [] eigImag;
delete [] work;
return 0;
}
This can be built using the command line
g++ -o test_lapack test_lapack.cpp -llapack
This will produce an executable named test_lapack. I've set this up to read in a text input file. Here's a file named matrix.txt containing a 3x3 matrix.
3 3
-1.0 -8.0 0.0
-1.0 1.0 -5.0
3.0 0.0 2.0
To run the program simply type
./test_lapack matrix.txt
at the command line, and the output should be
--- Eigenvalues ---
( 6.15484 , 0 )
( -2.07742 , 3.50095 )
( -2.07742 , -3.50095 )
• You seem thrown off by the naming scheme for LAPACK. A short description is here.
• The interface for the DGEEV subroutine is here. You should be able to compare the description of the arguments there to what I've done here.
• Note the extern "C" section at the top, and that I've added an underscore to dgeev_. That's because the library was written and built in Fortran, so this is necessary to make the symbols match when linking. This is compiler and system dependent, so if you use this on Windows, it will all have to change.
• Some people might suggest using the C interface to LAPACK. They might be right, but I've always done it this way. (A minimal LAPACKE sketch follows below.)
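For completeness, here is a minimal sketch of the same computation through the LAPACKE C interface. This assumes the LAPACKE headers and library are installed (e.g. liblapacke-dev on Ubuntu) and that you link with -llapacke -llapack. No trailing underscore, no manual WORK array, and the row-major layout is handled for you:
#include <cstdio>
#include <lapacke.h>
int main() {
    // Same 3x3 matrix as in matrix.txt, in row-major order
    double a[9] = { -1.0, -8.0,  0.0,
                    -1.0,  1.0, -5.0,
                     3.0,  0.0,  2.0 };
    double wr[3], wi[3];   // real and imaginary parts of the eigenvalues
    // vl/vr are not referenced when jobvl/jobvr == 'N'
    lapack_int info = LAPACKE_dgeev(LAPACK_ROW_MAJOR, 'N', 'N', 3,
                                    a, 3, wr, wi,
                                    nullptr, 1, nullptr, 1);
    if (info != 0) { std::printf("dgeev failed: %d\n", (int)info); return -1; }
    for (int i = 0; i < 3; ++i)
        std::printf("( %g , %g )\n", wr[i], wi[i]);
    return 0;
}
The output should match the eigenvalues listed above.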
• A lot of what you're looking for can be found with some quick Googlage. Maybe you're just not sure what to search for. Netlib is the keeper of LAPACK. The documentation can be found here. This page has a handy table of the main functionality of LAPACK. Some of the important ones are (1) solving systems of equations, (2) eigenvalue problems, (3) singular value decompositions, and (4) QR factorizations. Did you understand the manual for DGEEV? Mar 15, 2017 at 12:26
• They're all just different interfaces to the same thing. LAPACK is the original. It's written in Fortran, so to use it you have to play some games to make cross-compiling from C/C++ work, like I showed. I've never used LAPACKE, but it looks like it's a pretty thin C wrapper over LAPACK that avoids this cross compilation business, but it's still pretty low-level. LAPACK++ appears to be an even higher level C++ wrapper, but I don't think it's even supported anymore (someone correct me if I'm wrong). Mar 15, 2017 at 12:37
• I don't know of any specific code collection. But if you Google any of the LAPACK subroutine names, you'll invariably find an old question on one of the StackExchange sites. Mar 15, 2017 at 12:41
• @AlirezaHashemi By the way, the reason you have to provide the WORK array is because as a rule LAPACK doesn't allocate any memory inside its subroutines. If we're using LAPACK, we're likely using gobs of memory, and allocating memory is expensive, so it makes sense to let the calling routines be in charge of memory allocation. Since DGEEV requires memory to store intermediate quantities, we have to provide that working space to it (see the workspace-query sketch after this thread). Mar 16, 2017 at 15:04
• Got it. And I successfully wrote my first code to calculate eigenvalues of a complex matrix using zgeev. And already doing more! Thanks! Mar 16, 2017 at 15:26
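Following up on the workspace comment in this thread: a standard LAPACK idiom (sketched here against the dgeev_ declaration used in this answer) is to ask the routine itself for the optimal WORK size by first calling it with lwork = -1:
// Workspace query: with lwork = -1, DGEEV performs no computation and
// returns the optimal workspace size in work[0] (here, wkopt)
int lwork = -1;
double wkopt;
dgeev_(&Nchar, &Nchar, &n, data, &n, eigReal, eigImag,
       vl, &one, vr, &one, &wkopt, &lwork, &info);
lwork = (int)wkopt;               // optimal size reported by LAPACK
double *work = new double[lwork];
// ... then make the real call, passing work and &lwork as before ...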
Here's another answer in the same vein as the above.
You should look into the Armadillo C++ linear algebra library.
Pros:
1. The function syntax is high-level (similar to that of MATLAB). So no DGESV mumbo-jumbo, just X = solve( A, B ) (although there is a reason behind those oddly-looking LAPACK function names...).
2. Implements various matrix decompositions (LU, QR, eigenvalues, SVD, Cholesky, etc.)
3. It is fast when used properly.
4. It is well documented.
5. Has support for sparse matrices (you will want to look into these later).
6. You can link it against your super-optimized BLAS/LAPACK libraries for optimal performance.
Here's how @BillGreene's code would look with Armadillo:
#include <iostream>
#include <armadillo>
using namespace std;
using namespace arma;
int main()
{
const int k = 4;
mat A = zeros<mat>(k,k); // mat == Mat<double>
// with the << operator...
A <<
0.35 << 0.45 << -0.14 << -0.17 << endr
0.09 << 0.07 << -0.54 << 0.35 << endr
-0.44 << -0.33 << -0.03 << 0.17 << endr
0.25 << -0.32 << -0.13 << 0.11 << endr;
// but using an initializer list is faster
A = { {0.35, 0.45, -0.14, -0.17},
{0.09, 0.07, -0.54, 0.35},
{-0.44, -0.33, -0.03, 0.17},
{0.25, -0.32, -0.13, 0.11} };
cx_vec eigval; // eigenvalues may well be complex
cx_mat eigvec;
// eigenvalue decomposition for general dense matrices
eig_gen(eigval, eigvec, A);
std::cout << eigval << std::endl;
return 0;
}
• Thank you for your answer and explanation! I will try this library and choose the one that suits my needs best. Mar 15, 2017 at 11:29
I usually resist telling people what I think they should do rather than answering their question but in this case I'm going to make an exception.
Lapack is written in FORTRAN and the API is very FORTRAN-like. There is a C API to Lapack that makes the interface slightly less painful but it will never be a pleasant experience to use Lapack from C++.
Alternatively, there is a C++ matrix class library called Eigen that has many of the capabilities of Lapack, provides computational performance comparable to the better Lapack implementations, and is very convenient to use from C++. In particular, here is how your example code might be written using Eigen
#include <iostream>
using std::cout;
using std::endl;
#include <Eigen/Eigenvalues>
int main()
{
const int n = 4;
Eigen::MatrixXd a(n, n);
a <<
0.35, 0.45, -0.14, -0.17,
0.09, 0.07, -0.54, 0.35,
-0.44, -0.33, -0.03, 0.17,
0.25, -0.32, -0.13, 0.11;
Eigen::EigenSolver<Eigen::MatrixXd> es;
es.compute(a);
Eigen::VectorXcd ev = es.eigenvalues();
cout << ev << endl;
}
This example eigenvalue problem is a test case for the Lapack function dgeev. You can view the FORTRAN code and results for this problem dgeev example and make your own comparisons.
• Thank you for your answer and explanation! I will try this library and choose the one that suits my needs best. Mar 15, 2017 at 11:29
• Oh, they overload operator,! Never seen that done in actual practice :-) Mar 16, 2017 at 15:06
• Actually, that operator, overload is more interesting/better than it might first appear. It is used to initialize matrices. The entries that initialize the matrix can be scalar constants but can also be previously-defined matrices or sub-matrices. Very MATLAB-like. Wish my C++ programming ability was good enough to implement something that sophisticated myself ;-) Mar 16, 2017 at 15:31
• I don't think I have ever seen Matrix operations this elegant in C++ before. Eigen is so nice.
– BlaB
Jan 6, 2021 at 16:21
There is the SLATE library written in C++. It provides LAPACK functionality on distributed-memory systems with accelerators.
|
2022-08-17 07:26:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33154815435409546, "perplexity": 3370.401054261037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572870.85/warc/CC-MAIN-20220817062258-20220817092258-00795.warc.gz"}
|
https://edtechbooks.org/wild/jobs_that_dont_exist
|
# A Field Guide to "Jobs that Don't Exist Yet"
### Editor's Note
This was originally posted to Benjamin Doxtdator's blog on July 8, 2017.
## The statistic you either love or hate
Thanks to the Shift Happens videos (2007), you will likely be familiar with this statistic about the future of work:
“The top 10 in demand jobs in 2010 did not exist in 2004. We are currently preparing students for jobs that don’t exist yet, using technologies that haven’t been invented, in order to solve problems we don’t even know are problems yet.”
People repeat the claim again and again, but in slightly different forms. Sometimes they remove the dates and change the numbers; 65% is now in fashion. Respected academics who study education, such as Linda Darling-Hammond (1:30), have picked up and continue to repeat a mutated form of the factoid, as have the World Economic Forum and the OECD. It takes some work to find out that the claim is not true. When I tried to find an original source for the claim, I was surprised to find out that versions of it date from at least 1957. Interestingly, in 1973 Norman Kurland said such statements ‘typified’ the 1970s discourse about how jobs are supposed to change, but the claim now appears new and radical in 21st century videos like Shift Happens. I’ll get to that deeper history soon.
The Shift Happens video, originally made by Karl Fisch as a presentation and turned into a viral video by Scott Mcleod, situates the claim in Thomas Friedman’s ‘flat’ world perspective that concerns itself with America retaining a ‘comparative advantage’ in rapidly changing times. Right between statistics about the rise of China and India and the historical decline of the British Empire, the video drops the claim about ‘in demand jobs’ and attributes it to Richard Riley, Bill Clinton’s Secretary of Education. Even though it lacks his linguistic ‘secret sauce’, I had bet that Thomas Friedman might have been an original source for the claim because it fits so well with his neoliberal perspective. In the notes to the video, Fisch gives Ian Jukes as a source, and in an email conversation with Jukes, he was kind enough to confirm: “I was in attendance at an event (the SC Summit) in Columbia, South Carolina on or about Aug 7, 2006 – Riley was the opening keynote – that quote is word for word (or as close as I was able to record) to what he had to say.” Incidentally, Bill Clinton – certainly a flattener in Friedman’s eyes – made such a claim a decade earlier in 1996 in Birmingham:
“This is the last election for President of the 20th century and the first election for President of the 21st century. And you have to decide. Many of you young people in this audience, in a few years you will be doing jobs that haven’t been invented yet. Some of you will be doing work that has not even been imagined yet. And you have to decide: what kind of America do you want.”
The brush Bill Clinton painted ‘free-trade’ with is still being used to color in an awful lot of education books in 2017:
“Change is upon us. We can do nothing about that.”
Is the claim stated as a statistic true? Andrew Old and more recently Michael Berman and the BBC have provided a solid de-bunking.
But why does the claim continue to circulate? What ideology does it serve?
## Future Proof?
The OECD uses a version of the claim to frame their Case for 21st Century Learning, as does the World Economic Forum in their Future of Jobs (2016) report. More recent versions of the claim have removed specific dates, and switched from talking about the top ‘in demand jobs’ to talking about a percentage – 65% is the magic number – of children who will work in jobs that haven’t been invented yet.
Yet, the claim serves the same function as it did in the Shift Happens videos: to suggest that education has failed to keep pace with, and prepare our children for, an ever changing world of work. In the face of this known unknown, the only answer is to instill flexibility and adaptability along with ‘skills’ like creativity. Keri Facer gives us a helpful term for this narrative: the ‘future proofing’ narrative “suggests that there is only one question about socio-technical change that the ‘future-proof’ school needs to address: namely, how successfully will the school equip young people to compete in the global economy of tomorrow?”
This logic is so pervasive that we barely notice it. Even reformers that appear progressive, such as Ken Robinson, ultimately link progressive values like creativity to work. A century ago, the logic of future proofing went under the name ‘social efficiency’, but that branch of the progressive movement found vigorous opposition in John Dewey who said that as a matter of politics, the “education in which I am interested is not one which will ‘adapt’ workers to the existing industrial regime; I am not sufficiently in love with the regime for that.”
Now, social efficiency in the language of ‘future proofing’ is embedded in the neoliberal ideology that equates freedom with free markets, and makes the individual solely responsible for her own fate. As much as the claim is an indictment of schools, it also serves as a warning to individuals. Be a ‘lifelong learner’ or else. When Andreas Schleicher of the OECD repeats the claim (with no source), he makes clear that only our imaginations and not material circumstances might hold us back in life: “As columnist and author Thomas Friedman puts it, because technology has enabled us to act on our imaginations in ways that we could never before, the most important competition is no longer between countries or companies but between ourselves and our imagination.”
The WEF Future of Jobs report exemplifies the future proofing ideology and Thomas Friedman’s methodology by making an “extensive survey of CHROs and other senior talent and strategy executives of leading global employers” (p. 3) to learn about the future of work, which then drives their future of education policy:
“By one popular estimate 65% of children entering primary schools today will ultimately work in new job types and functions that currently don’t yet exist. Technological trends such as the Fourth Industrial Revolution will create many new cross-functional roles for which employees will need both technical and social and analytical skills. Most existing education systems at all levels provide highly siloed training and continue a number of 20th century practices that are hindering progress on today’s talent and labour market issues. … Businesses should work closely with governments, education providers and others to imagine what a true 21st century curriculum might look like.”
In this narrative, the education system hinders progress, thus steering the conversation away from explicit economic policies, which are often driven by corporations and Capital. The Future of Jobs cites the Shift Happens videos as their source, but switches the statistic (or confuses the prediction) from ‘top 10 in demand jobs’ to the figure of ‘65% of children’ while dropping the date which has expired by seven years now. That post-modern pastiche, and repetition without referent, becomes exhausting.
Perhaps most importantly, the Future of Jobs relies on the perspective of CEOs to suggest that Capital has lacked input into the shape and direction of education. Ironically, the first person I found to make the claim about the future of jobs – Devereux C. Josephs – was both Businessman of the Year (1958) and the chair of Eisenhower’s President’s Committee on Education Beyond High School. More tellingly, in his historical context, Josephs was able to imagine a more equitable future where we shared in prosperity rather than competed against the world’s underprivileged on a ‘flat’ field.
## The Political Shift that Happened
While the claim is often presented as a new and alarming fact or prediction about the future, Devereux C. Josephs said much the same in 1957 during a Conference on the American High School at the University of Chicago on October 28, less than a month after the Soviets launched Sputnik. If Friedman and his ‘flat’ earth followers were writing then, they would have been up in arms about the technological superiority of the Soviets, just like they now raise the alarm about the rise of India and China. Josephs was a past president of the Carnegie Corporation, and at the time served as Chairman of the Board of the New York Life Insurance Company.
While critics of the American education system erupted after the launch of Sputnik with calls to go back to basics, much as they would again decades later with A Nation at Risk (1983), Josephs was instead a “besieged defender” of education according to Okhee Lee and Michael Salwen. Here’s how Joseph’s talked about the future of work:
“We are too much inclined to think of careers and opportunities as if the oncoming generations were growing up to fill the jobs that are now held by their seniors. This is not true. Our young people will fill many jobs that do not now exist. They will invent products that will need new skills. Old-fashioned mercantilism and the nineteenth-century theory in which one man’s gain was another man’s loss, are being replaced by a dynamism in which the new ideas of a lot of people become the gains for many, many more.”
Josephs’ claim brims with optimism about a new future, striking a tone which contrasts sharply with the Shift Happens video and its competitive fear of The Other and decline of Empire. We must recognize this shift that happens between then and now as an erasure of politics – a deletion of the opportunity to make a choice about how the abundant wealth created by automation – and perhaps more often by offshoring to cheap labor – would be shared.
The agentless construction in the Shift Happens version – “technologies that haven’t been invented yet” – contrasts with Josephs’ vision where today’s youth invent those technologies. More importantly, Josephs imagines a more equitable socio-technical future, marked not by competition, but where gains are shared. It should go without saying that this has not come to pass. As productivity shot up since the 1950’s, worker compensation has stagnated since around 1973.
In other words, the problem is not that Capital lacks a say in education, but that corporations and the 0.1% are reaping all the rewards and need to explain why. Too often, this explanation comes in the form of the zombie idea of a ‘skills gap’, which persists though it keeps being debunked. What else are CEOs going to say – and the skills gap is almost always based on an opinion survey – when they are asked to explain stagnating wages?
Josephs’ essay echoes John Maynard Keynes’ 1930 essay in its hope that the “average family” by 1977 “may take some of the [economic] gain in the form of leisure”; the dynamism of new ideas should have created gains for ‘many, many more’ people. Instead, the compensation for CEOs soared as the profit was privatized even though most of the risk for innovation was socialized by US government investment through programs such as DARPA.
Those robots that are always threatening to take our jobs, like Baxter, are the product of government funding going back at least to 1990 when Rodney Brooks, creator of the Roomba, founded iRobot whose first project was to “build a six-legged insectlike robot named Attila for NASA’s Jet Propulsion Laboratory.” The article explains that “early revenue [for iRobot] came from research contracts with government agencies like the Defense Advanced Research Projects Agency, or DARPA, at the Pentagon.” Now, Brooks has started a new company, Rethink Robotics, backed by venture capitalists. According to an interview with Brooks, “Baxter was developed at a VC backed company, Rethink Robotics. So there is no funding to receive from governments or funding agencies. In the past the pre-research for the technologies that went into Baxter have been funded by the US government, via NASA and DARPA.”
Josephs and Keynes predicted shared prosperity from the rise of automation. They did not foresee such a massive welfare program designed to help corporations.
We must not confuse the hope that Josephs and Keynes shared with Thomas Friedman’s facile claim that “America, as a whole, will do fine in a flat world with free trade” because “there is no limit to the number of idea-generating jobs in the world.” So-called ‘knowledge work’ depends on sacrificial people toiling in sacrificial places, doing the dangerous and dirty work we still rely on. Writing in 2003, Doug Henwood asks: “We’ve been hearing about post-industrial society for at least thirty years; if it had come about, would we have to worry about global warming?”
Yet, because ‘thought leaders’ follow Friedman, they conclude that schools must work to provide the kind of skills that will allow individuals to create their own knowledge work. In The Sociological Imagination (1959), C. Wright Mills already observed a shift taking place where public issues were being blamed on personal troubles that “occur within the character of the individual”. So we should not be surprised when Thomas Friedman interviews Tony Wagner – an education ‘thought leader’, friend of Friedman, and advocate of the skills agenda – and suggests that people who need jobs should invent them. Wagner tells Friedman that “Young people who are intrinsically motivated — curious, persistent, and willing to take risks — will learn new knowledge and skills continuously. They will be able to find new opportunities or create their own — a disposition that will be increasingly important as many traditional careers disappear.” In contrast, Josephs was still able to believe in a collective responsibility, writing that “the price tag on this [coming economic] abundance is the responsibility of society for the welfare of the individuals who are, from time to time, dislocated” (Emerging Scene, p. 25).
Instead of factoids without substance, we actually have good statistical projections about the future of jobs, and the picture is bleak. A look into the future of paid work shows persistent gaps and cracks rather than a ‘flat’ world. The Bureau of Labor Statistics’ projections for numeric job growth from 2014-2024 indicate that four out of the top five growing jobs pay salaries that are less than $21,400 per annum. With the exception of Registered Nurses (#2), who on average earn $66,640 and require a Bachelor Degree, the other top five growing jobs require no formal credentials. (I borrow this paragraph from my essay here.)
## Beyond Press Releases
Audrey Watters has written about how futurists and gurus have figured out that “The best way to invent the future is to issue a press release.” Proponents of the ‘skills agenda’ like the OECD have essentially figured out how to make “the political more pedagogical”, to borrow a phrase from Henry Giroux. In their book, Most Likely to Succeed, Tony Wagner and billionaire Ted Dintersmith warn us that “if you can’t invent (and reinvent) your own job and distinctive competencies, you risk chronic underemployment.” Their movie, of the same title, repeats the hollow claim about ‘jobs that haven’t been invented yet’. Ironically, though Wagner tells us that “knowledge today is a free commodity”, you can only see the film in private screenings.
I don’t want to idealize Josephs, but revisiting his context helps us understand something about the debate about education and the future, not because he was a radical in his times, but because our times are radical.
In an interview at CUNY (2015), Gillian Tett asks Jeffrey Sachs and Paul Krugman what policy initiatives they would propose to deal with globalization, technology, and inequality. (This part of their conversation starts at about 32:00.) After Sachs and Krugman propose regulating finance, expanding aid to disadvantaged children, creating a robust social safety net, reforming the tax system to eliminate privilege for the 0.1%, redistributing profits, raising wages, and strengthening the position of labor, Tett recounts a story:
“Back in January I actually moderated quite a similar event in Davos with a group of CEOs and general luminaries very much not just the 1% but probably the 0.1% and I asked them the same question. And what they came back with was education, education, and a bit of digital inclusion.”
Krugman, slightly lost for words, replies: “Arguing that education is the thing is … Gosh… That’s so 1990s… even then it wasn’t really true.”
For CEOs and futurists who say that disruption is the answer to practically everything, arguing that the answer lies in education and skills is actually the least disruptive response to the problems we face. Krugman argues that education emerges as the popular answer because “It’s not intrusive. It doesn’t require that we have higher taxes. It doesn’t require that CEOs have to deal with unions again.” Sachs adds, “Obviously, it’s the easy answer for that group [the 0.1%].”
The kind of complex thinking we deserve about education won’t come in factoids or bullet-point lists of skills of the future. In fact, that kind of complex thinking is already out there, waiting.
### Suggested Citation
Doxtdator, B. (2019). A Field Guide to "Jobs that Don't Exist Yet". In EdTech in the Wild: critical blog posts. EdTech Books. Retrieved from https://edtechbooks.org/wild/jobs_that_dont_exist
CC BY-NC-ND: This work is released under a CC BY-NC-ND license, which means that you are free to do with it as you please as long as you (1) properly attribute it, (2) do not use it for commercial gain, and (3) do not create derivative works.
|
2022-07-02 09:06:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1792428344488144, "perplexity": 3443.070733218085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103989282.58/warc/CC-MAIN-20220702071223-20220702101223-00457.warc.gz"}
|
https://tretiykon.livejournal.com/226480.html
|
Here is why laser technologies are a priority for physics
It goes without saying that a chemical bond requires, at the bare minimum, two consenting atoms. But a proposed experiment might reduce that requirement to just one, providing researchers with a new perspective on unusual chemical bonds. Matthew Eiles and colleagues at Purdue University in West Lafayette, Indiana, have come up with a way to construct a so-called trilobite bond—named after the electronic wave function’s resemblance to fossils of the long-extinct arthropod—by carefully manipulating a Rydberg atom, an atom with one electron in a highly excited state.
Normally, scientists have observed trilobite bonds in special types of diatomic molecules, such as $\text{Rb}_2$ and $\text{Cs}_2$. In these cases, one of the atoms is in a Rydberg state, while the other is in its ground state. Because the Rydberg's pumped-up outer electron occupies a very distant orbital, these "trilobite molecules" are unusually large, about 1000 times larger than typical diatomic molecules. Using numerical analyses, Eiles and colleagues show that through a precise sequence of alternating electric and magnetic field pulses, the electronic wave function of a Rydberg hydrogen atom can be sculpted to match that of a trilobite molecule. This leaves the excited electron strongly localized to a point in space, dozens of nanometers from the nucleus. The wave function should persist for at least 200 $\mu\text{s}$, in effect temporarily bonding the Rydberg atom to a nonexistent "ghost" atom.
Experimentalists will need to figure out how to accommodate the stringent requirements for synchronizing the pulses and blocking external fields. If these hurdles could be overcome and a ghost bond is produced, the system could be observed via electron- or x-ray-scattering experiments. While applications are speculative, the team imagines that it might be possible to see if such a preformed bond modifies chemical reaction rates in some way.
With the help of electromagnetic fields one can create materials with incredible properties. And those materials with incredible properties can be used to create electromagnetic fields, and so on, amplifying the effect all the way to "portals to other galaxies".
|
2021-10-19 02:48:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47212710976600647, "perplexity": 5687.5441825282605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585231.62/warc/CC-MAIN-20211019012407-20211019042407-00641.warc.gz"}
|
http://pom-co.com/core-in-ydo/power-rule-proof-e8dfa0
|
Combining the power rule with the sum and constant multiple rules permits the computation of the derivative of any polynomial. This proof is validates the power rule for all real numbers such that the derivative . The Power Rule, one of the most commonly used rules in Calculus, says: The derivative of x n is nx (n-1) Example: What is the derivative of x 2? Derivation: Consider the power function f (x) = x n. The Power Rule for Fractional Exponents In order to establish the power rule for fractional exponents, we want to show that the following formula is true. QED Proof by Exponentiation. The proof of it is easy as one can take u = g(x) and then apply the chain rule. Proof for all positive integers n. The power rule has been shown to hold for n = 0 and n = 1. Justifying the power rule. a is the base and n is the exponent. Problem 4. The power rule states that for all integers . But sometimes, a function that doesn’t have any exponents may be able to be rewritten so that it does, by using negative exponents. Chain Rule. proof of the power rule. We prove the relation using induction. The Power Rule for Negative Integer Exponents In order to establish the power rule for negative integer exponents, we want to show that the following formula is true. You could use the quotient rule or you could just manipulate the function to show its negative exponent so that you could then use the power rule.. Power Rule of Derivative PROOF & Binomial Theorem. Optional videos. Show that . Examples. 2. Explicitly, Newton and Leibniz independently derived the symbolic power rule. d dx fxng= lim h!0 (x +h)n xn h We want to expand (x +h)n. Without using limits, we prove that the integral of x[superscript n] from 0 to L is L[superscript n +1]/(n + 1) by exploiting the symmetry of an n-dimensional cube. $\endgroup$ – Arturo Magidin Oct 9 '11 at 0:36 This justifies the rule and makes it logical, instead of just a piece of "announced" mathematics without proof. Power Rule. The derivative of () = for any (nonvanishing) function f is: ′ = − ′ (()) wherever f is non-zero. Proof of the power rule for n a positive integer. Therefore, if the power rule is true for n = k, then it is also true for its successor, k + 1. Product Rule. The reciprocal rule. The exponential rule of derivatives, The chain rule of derivatives, Proof Proof by Binomial Expansion For rational exponents which, in reduced form have an odd denominator, you can establish the Power Rule by considering $(x^{p/q})^q$, using the Chain Rule, and the Power Rule for positive integral exponents. This is the currently selected item. If this is the case, then we can apply the power rule to find the derivative. In this lesson, you will learn the rule and view a variety of examples. Proof of the Product Rule. When raising an exponential expression to a new power, multiply the exponents. 1. The main property we will use is: The proof was relatively simple and made sense, but then I thought about negative exponents.I don't think the proof would apply to a binomial with negative exponents ( or fraction). A Power Rule Proof without Limits. Proof: Differentiability implies continuity. What is an exponent; Exponents rules; Exponents calculator; What is an exponent. If the power rule is known to hold for some k > 0, then we have. The power rule applies whether the exponent is positive or negative. Proof of power rule for positive integer powers. Exponent rules. It is true for n = 0 and n = 1. using Limits and Binomial Theorem. Our goal is to verify the following formula. 
d d x x c = d d x e c ln x = e c ln x d d x (c ln x) = e c ln x (c x) = x c (c x) = c x c − 1. These are rules 1 and 2 above. Proof of the Power Rule Filed under Math; If you’ve got the word “power” in your name, you’d better believe expectations are going to be sky high for what you can do. It's unclear to me how to apply $\frac{dy}{dx}$ in this situation. This proof of the power rule is the proof of the general form of the power rule, which is: In other words, this proof will work for any numbers you care to use, as long as they are in the power format. Example: Simplify: (7a 4 b 6) 2. Prerequisites. Jan 12 2016. Email. Example problem: Show a proof of the power rule using the classic definition of the derivative: the limit. Calculus: Power Rule, Constant Multiple Rule, Sum Rule, Difference Rule, Proof of Power Rule, examples and step by step solutions, How to find derivatives using rules, How to determine the derivatives of simple polynomials, differentiation using extended power rule Suppose f (x)= x n is a power function, then the power rule is f ′ (x)=nx n-1.This is a shortcut rule to obtain the derivative of a power function. Learn how to prove the power rule of integration mathematically for deriving the indefinite integral of x^n function with respect to x in integral calculus. ... Power Rule. By admin in Binomial Theorem, Power Rule of Derivatives on April 12, 2019. The quotient rule can be proved either by using the definition of the derivative, or thinking of the quotient \frac{f(x)}{g(x)} as the product f(x)(g(x))^{-1} and using the product rule. 6x 5 − 12x 3 + 15x 2 − 1. Now use the chain rule to find an expression that contains $\frac{dy}{dx}$ and isolate $\frac{dy}{dx}$ to be by itself on one side of the expression. Here, n is a positive integer and we consider the derivative of the power function with exponent -n. Here, m and n are integers and we consider the derivative of the power function with exponent m/n. Derivative Power Rule PROOF example question. Proof of the Power Rule. Appendix E: Proofs E.1: Proof of the power rule Power Rule Only for your understanding - you won’t be assessed on it. Section 7-1 : Proof of Various Limit Properties. I will convert the function to its negative exponent you make use of the power rule. Power rule Derivation and Statement Using the power rule Two special cases of power rule Table of Contents JJ II J I Page2of7 Back Print Version Proof of the logarithm quotient and power rules Our mission is to provide a free, world-class education to anyone, anywhere. College Mathematics Journal, v44 n4 p323-324 Sep 2013. The base a raised to the power of n is equal to the multiplication of a, n times: a n = a × a ×... × a n times. It is a short hand way to write an integer times itself multiple times and is especially space saving the larger the exponent becomes. "I was reading a proof for Power rule of Differentiation, and the proof used the binomial theroem. We deduce that it holds for n + 1 from its truth at n and the product rule: 2. I curse whoever decided that ‘$u$’ and ‘$v$’ were good variable names to use in the same formula. The Power Rule in calculus brings it and then some. Khan Academy is a 501(c)(3) nonprofit organization. Google Classroom Facebook Twitter. Hope I'm not breaking the rules, but I wanted to re-ask a Question. The power rule can be derived by repeated application of the product rule. Solution: Each factor within the parentheses should be raised to the 2 nd power: (7a 4 b 6) 2 = 7 2 (a 4) 2 (b 6) 2. 
The power rule states that for any real exponent $n$, the derivative of $x^n$ is $nx^{n-1}$. For $x^2$, with $n=2$, the rule gives $2x^{2-1} = 2x^1 = 2x$. It also handles fractional and negative exponents: for $y = 1/\sqrt{x} = x^{-1/2}$, bring down the exponent as a factor and decrement the power to get $y' = -\tfrac{1}{2}x^{-3/2}$.
The extended power rule combines this with the chain rule: if $a$ is any real number (rational or irrational), then $\frac{d}{dx}\,g(x)^a = a\,g(x)^{a-1}\,g'(x)$, i.e. the simple power rule applied to the outer power, times the derivative of the function inside.
For a positive integer exponent the rule can be proved directly from the definition of the derivative, expanding $(x+h)^n$ with the Binomial Theorem, or by the Principle of Mathematical Induction together with the product rule: the rule holds for $n = 1$, and if it holds for some $k > 0$ then it follows for $k+1$, hence for every natural number (a sketch of the induction step is given below). For an arbitrary real exponent $c$, one instead writes $x^c = e^{c \ln x}$ and differentiates both sides using the derivative of the exponential function and the chain rule. Such a proof validates the rule and makes it logical, instead of leaving it a piece of "announced" mathematics without proof; a proof that avoids limits altogether is given by Day (College Mathematics Journal, v44 n4, p. 323-324, Sep 2013).
A related fact about exponents (not derivatives) is the power-of-a-power rule: when raising an exponential expression to a new power, multiply the exponents, $(a^m)^n = a^{mn}$. For example, $(7a^4 b^6)^2 = 49 a^8 b^{12}$. Exponentiation itself is just a compact way to write an integer multiplied by itself, e.g. $3^2 = 3 \times 3 = 9$, and is especially space-saving the larger the exponent becomes.
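A minimal sketch of the induction step, using the product rule and the assumption that the rule holds for exponent $k$:
$$\frac{d}{dx}x^{k+1} \;=\; \frac{d}{dx}\left(x \cdot x^{k}\right) \;=\; 1\cdot x^{k} + x \cdot k x^{k-1} \;=\; (k+1)\,x^{k}.$$
Together with the base case $\frac{d}{dx}x^1 = 1 = 1\cdot x^0$, this establishes the rule for every positive integer exponent.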
|
2021-04-11 06:49:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8700444102287292, "perplexity": 531.7324054681828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00241.warc.gz"}
|
https://math.stackexchange.com/questions/2112714/if-independent-centered-x-n-obeys-the-weak-law-of-large-numbers-then-the-sequ
|
# If independent centered $X_n$ obeys the weak law of large numbers, then the sequence $n^{-1}X_n$ converges in probability to $0$.
Let $X_n$ be an independent sequence of centered, integrable real random variables on $(\Omega, \mathcal{A}, P).$ Show that if $X_n$ obeys the weak law of large numbers, then the sequence $n^{-1}X_n$ converges in probability to $0$.
I need to show that $$P\left(\left|\frac{1}{n}X_n\right|\ge \epsilon\right)\to 0$$ for all $\epsilon>0$. From the hypothesis, we have $$P\left(\left|\frac{1}{n}\sum_{i=1}^nX_i\right|\ge \epsilon\right)\to 0$$ for all $\epsilon>0$. How can I get the above from using this? I would greatly appreciate any help.
• Hint. You have $\frac{1}{n}X_n = \bar{X}_n - \frac{n-1}{n}\bar{X}_{n-1}$, where $\bar{X}_n = \frac{1}{n}(X_1 + \cdots + X_n)$. – Sangchul Lee Jan 25 '17 at 1:59
Let $\displaystyle S_n = \sum_{k=1}^n X_k$. Notice that $\displaystyle \frac{X_n}{n} = \frac{S_n}{n} - \frac{n-1}{n}\,\frac{S_{n-1}}{n-1}$, hence, $$\left\{\omega\in\Omega: \left|\frac{X_n}{n}\right|>\epsilon\right\}\subset \left\{\omega\in\Omega:\left|\frac{S_n}{n}\right|>\frac{\epsilon}{2}\right\}\cup\left\{\omega\in\Omega:\left|\frac{S_{n-1}}{n-1}\right|>\frac{n}{n-1}\frac{\epsilon}{2}\right\},$$ and since $\frac{n}{n-1}\geq 1$, $$\left\{\omega\in\Omega: \left|\frac{X_n}{n}\right|>\epsilon\right\}\subset \left\{\omega\in\Omega:\left|\frac{S_n}{n}\right|>\frac{\epsilon}{2}\right\}\cup\left\{\omega\in\Omega:\left|\frac{S_{n-1}}{n-1}\right|>\frac{\epsilon}{2}\right\}.$$ Using the union bound and letting $n\to\infty$, we get $$\limsup_{n\to\infty}\mathbb{P}\left(\left\{\omega\in\Omega: \left|\frac{X_n}{n}\right|>\epsilon\right\}\right)\leq \lim_{n\to\infty}\mathbb{P}\left(\left\{\omega\in\Omega:\left|\frac{S_{n}}{n}\right|>\frac{\epsilon}{2}\right\}\right) +\lim_{n\to\infty}\mathbb{P}\left(\left\{\omega\in\Omega:\left|\frac{S_{n-1}}{n-1}\right|>\frac{\epsilon}{2}\right\}\right)=0,$$ hence $\displaystyle\frac{X_n}{n}\to 0$ in probability.
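As a quick numerical illustration (my addition; assuming i.i.d. standard normal $X_n$, which do satisfy the weak law), a Monte Carlo estimate of $\mathbb{P}(|X_n/n|>\epsilon)$ visibly drops to zero as $n$ grows:
import numpy as np

rng = np.random.default_rng(0)
eps, trials = 0.1, 100_000
for n in (10, 100, 1_000, 10_000):
    x = rng.standard_normal(trials)          # independent draws of X_n ~ N(0, 1)
    print(n, (np.abs(x / n) > eps).mean())   # estimate of P(|X_n / n| > eps)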
|
2019-04-23 12:40:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9965378642082214, "perplexity": 63.63064068442153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578602767.67/warc/CC-MAIN-20190423114901-20190423140901-00354.warc.gz"}
|
http://mathhelpforum.com/calculus/66535-differential-eqn-s.html
|
# Math Help - Differential Eqn's
1. How do you find a particular solution to a second order DE with constant coefficients using linear operators? Do the inputs have to be either e, sin, cos or always some combination of them to use linear operators?
Also, is there a general method for solving second order homogeneous DEs with non-constant coefficients that are NOT in the Cauchy-Euler form and that does not use power series? Thanks!
2. Originally Posted by Zero266
How do you find a particular solution to a second order DE with constant coefficients using linear operators? Does the inputs have to be either e, sin, cos or always some combination of them to use linear operators?
See here.
Also, is there a general method for solving second order homogeneous DE with non-constant coefficients that are NOT in the Cauchy-Euler form and that does not use power series? Thanks !
I think Laplace Transforms can be used to take care of these nasties IFF there are initial conditions given. See here.
Otherwise I'm not quite sure of any other technique besides using power series...
I would consider conducting a Google search
3. Originally Posted by Chris L T521
I think Laplace Transforms can be used to take care of these nasties IFF there are initial conditions given. See here.
Otherwise I'm not quite sure of any other technique besides using power series...
I would consider conducting a Google search
Just to add a little bit, using the annihilator approach is usually limited to nonhomogeneous terms like you said $e^{ax}, x^n, \sin bx,\; \text{and}\; \cos b x$ and combinations thereof. But for more complicated nonhomogeneous terms, it seldom works. A classic example is the following
$y'' + y = \sec x$
How does one annihilate the $\sec x$ term?
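For the record, variation of parameters does handle this one: a particular solution is $y_p = x\sin x + \cos x\,\ln|\cos x|$, as direct differentiation confirms.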
As for the Laplace transform method, often one obtains an ODE harder than the one you started with. With simple power law coefficients, it might be manageable (see Chris L T521 link). I might add that if one was clever enough (or lucky enough) to guess one solution, then using $y = u y_l$
where $y_l$ was your guessed solution, would reduce your problem to first order. I might also add that every ODE of the form
$y'' + p(x) y' + q(x) y = 0$
can be reduced to a Riccati equation under the substitution
$y' = u y$.
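Spelling that out: with $y' = uy$ one gets $y'' = (u' + u^2)\,y$, so dividing the linear equation by $y$ leaves
$$u' + u^2 + p(x)\,u + q(x) = 0,$$
which is precisely a Riccati equation in $u$.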
|
2014-10-31 05:39:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7222599387168884, "perplexity": 484.1818243747627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898978.59/warc/CC-MAIN-20141030025818-00059-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/438318-too-much-time-spent-on-initialize-glut/
|
# Too much time spent initializing GLUT
## Recommended Posts
As I recorded, the following code:
glutWindowHandle = glutCreateWindow("SAXPY");
takes about 9 seconds to run, which is unusually long. What could be the reason behind this? My graphics card is an Nvidia GeForce 6200.
##### Share on other sites
Yes, that sounds way too much; I don't know what the problem is. Try running someone else's GLUT code,
e.g. the GLUT version of the NeHe code (lesson one). Does it run much quicker?
If not, perhaps reinstall your graphics card drivers.
##### Share on other sites
Quote:
Original post by zedz: Yes, that sounds way too much; I don't know what the problem is. Try running someone else's GLUT code, e.g. the GLUT version of the NeHe code (lesson one). Does it run much quicker? If not, perhaps reinstall your graphics card drivers.
I have tried reinstalling my graphics card driver, but the problem still exists.
Is it a problem with my graphics card settings? After installing the driver, should I set up anything else?
##### Share on other sites
There's too little information available to say what the problem is.
Do opengl games that don't use glut take a long time to start? (going to http://nehe.gamedev.net/ and downloading one of the prebuilt tutorials would be an excellent way to test this)
Do other glut programs take a long time? (ditto).
What version of glut are you using? Is it freeglut or regular glut?
If only other glut programs are starting slow, then the problem lies with glut. If all opengl programs are starting slow, then my money says it's a problem with your driver or video card.
Personally I'd discourage you from using glut anyways, as it has had little to no support for some time now. I strongly encourage you to use SDL instead.
##### Share on other sites
Quote:
Original post by gharen2: There's too little information available to say what the problem is. Do opengl games that don't use glut take a long time to start? (Going to http://nehe.gamedev.net/ and downloading one of the prebuilt tutorials would be an excellent way to test this.) Do other glut programs take a long time? (Ditto.) What version of glut are you using? Is it freeglut or regular glut? If only other glut programs are starting slow, then the problem lies with glut. If all opengl programs are starting slow, then my money says it's a problem with your driver or video card. Personally I'd discourage you from using glut anyways, as it has had little to no support for some time now. I strongly encourage you to use SDL instead.
I am using glut-3.7.6 now. But what's the difference between freeglut and regular glut?
http://www.xmission.com/~nate/glut.html
##### Share on other sites
The main difference with freeglut is that it's open source and more recent. That's not to say it's actually "better" per se, but maybe try it and see if it has the same problem.
http://freeglut.sourceforge.net/
##### Share on other sites
Quote:
Original post by gharen2: The main difference with freeglut is that it's open source and more recent. That's not to say it's actually "better" per se, but maybe try it and see if it has the same problem. http://freeglut.sourceforge.net/
I have tried an Nvidia GeForce 6800 graphics card and the problem still exists.
Also, freeglut is very difficult to install: there are no clear instructions
and no .lib files.
##### Share on other sites
Quote:
Original post by rosicky2005: I have tried an Nvidia GeForce 6800 graphics card and the problem still exists. Also, freeglut is very difficult to install: there are no clear instructions and no .lib files.
Come on, if you can compile your program you can certainly compile a library.
Are you able to debug your application and suspend it in the middle of the nine seconds of initialization, or to profile your application and get a report of where it spends time? What about the comparisons gharen2 suggests?
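For instance, a minimal timing harness pins down which call actually eats the time. This sketch uses Python with PyOpenGL purely for brevity (the glutInit/glutCreateWindow names map one-to-one onto the C API, so the same measurement works there):
import sys, time
from OpenGL.GLUT import (GLUT_DOUBLE, GLUT_RGBA, glutCreateWindow,
                         glutInit, glutInitDisplayMode)

t0 = time.perf_counter()
glutInit(sys.argv)                            # GLUT library initialization
t1 = time.perf_counter()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA)  # request a double-buffered RGBA context
glutCreateWindow(b"SAXPY")                    # window + OpenGL context creation
t2 = time.perf_counter()

print("glutInit:         %.3f s" % (t1 - t0))
print("glutCreateWindow: %.3f s" % (t2 - t1))
If the nine seconds land squarely in the window-creation call, the driver's OpenGL context creation is the prime suspect rather than GLUT itself.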
##### Share on other sites
Quote:
Original post by rosicky2005: glutWindowHandle = glutCreateWindow("SAXPY"); takes about 9 seconds to run, which is unusually long.
Is the sample you are running your own, or the demo made by Dom from GPGPU.org?
##### Share on other sites
The question the original poster should ask himself is: "Does this delay matter?"
I can't see how a slightly longer initialization will affect the runtime performance of the application, which should be what matters.
##### Share on other sites
Personally I'd be pretty annoyed if a simple little game took 9 seconds to start up.
##### Share on other sites
I'd be very annoyed at a 9-second startup every time I wanted to test my application. Then again, I code a few lines and test; finding bugs is easier that way, even with 20,000 lines of code or more.
But what the OP has to ask himself is why he is using glut when it's not really maintained anymore. Look into other libraries like FreeGLUT (still not my thing) or SDL.
|
2017-10-21 14:19:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17058084905147552, "perplexity": 3118.595818326258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824819.92/warc/CC-MAIN-20171021133807-20171021153807-00485.warc.gz"}
|
http://math.stackexchange.com/questions/104528/int-dx-x10000-1
|
# $\int dx/(x^{10000}-1)$
Is there any way to evaluate this indefinite integral using pencil and paper? A closed-form solution exists, because $1/(x^{10000}-1)$ can be expressed as a partial fraction decomposition of the form $\sum c_m/(x-a_m)$, where the $a_m$ are the 10,000-th roots of unity. But brute-force computation of the $c_m$ is the kind of fool's errand that a human would never embark on, and that software stupidly attempts and fails to accomplish. (Maxima, Yacas, and Wolfram Alpha all try and fail.)
This is not homework.
-
For |x|<1 you can use power series to get a very good approximation. – Potato Feb 1 '12 at 7:37
Do you know of others that software would fail at, like something a first-year calc student could understand? – yiyi Nov 9 '12 at 4:57
You can use the fact that, in a partial fraction decomposition, for a simple root $\alpha$ of the denominator (say $F = {P \over Q}$ where $(P,Q) = 1$) the coefficient of ${1 \over X-\alpha}$ is ${P(\alpha) \over Q'(\alpha)}$. Since $X^{10000} - 1$ only has simple roots (the 10000th roots of unity), you can express them easily as $\omega^k, k \in \{0,\dots,9999\}$ where $\omega = e^{2i\pi \over 10000}$. Then it's just a matter of computing a sum, since the integral of ${1 \over x-\alpha}$ is easy enough to compute.
Beware though, $\alpha$ is complex here, so the antiderivative is not just $\log(x-\alpha)$…. But another trick you can use is that you can naturally pair the roots of unity, as $\bar{\zeta} = \zeta^{-1}$ for $|\zeta| = 1$.
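Working the coefficient formula out here: with $P = 1$ and $Q = X^{10000} - 1$, $c_m = \frac{1}{Q'(a_m)} = \frac{1}{10000\,a_m^{9999}} = \frac{a_m}{10000}$ (using $a_m^{10000} = 1$), so $$\frac{1}{x^{10000}-1} \;=\; \frac{1}{10000}\sum_{m=0}^{9999} \frac{a_m}{x-a_m}.$$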
-
What do you mean by $(P,Q)=1$? – Ben Crowell Feb 1 '12 at 16:26
It means $P$ and $Q$ are relatively prime (as polynomials over the complex numbers). Equivalently (by the Fundamental Theorem of Algebra) $P$ and $Q$ have no roots in common. – Robert Israel Feb 1 '12 at 22:55
To expand a bit on zulon's last paragraph: if $\alpha$ is a non-real root and the partial fraction decomposition includes $c/(z-\alpha)$ then it also includes $\overline{c}/(z - \overline{\alpha})$, and an antiderivative of $$\frac{c}{z-\alpha} + \frac{\overline{c}}{z - \overline{\alpha}} = \frac{2 \text{Re}(c)(z-\text{Re}(\alpha)) - 2 \text{Im}(c)\text{Im}(\alpha)}{(z-\alpha)(z-\overline{\alpha})}$$ is $\text{Re}(c) \ln((z - \text{Re}(\alpha))^2 + \text{Im}(\alpha)^2) - 2 \text{Im}(c) \arctan\left(\frac{z-\text{Re}(\alpha)}{\text{Im}(\alpha)}\right)$ – Robert Israel Feb 1 '12 at 23:14
An advantage of this over the form $c \ln(z-\alpha) + \overline{c} \ln(z - \overline{\alpha})$ is that (once you have the real and imaginary parts of $c$ and $\alpha$) it doesn't explicitly involve complex quantities when $z$ is real. From the point of view of complex analysis, both forms are equally valid, but (if you use the principal branches) their branch cuts are in different places. – Robert Israel Feb 1 '12 at 23:24
Two great answers! I'm marking this one as accepted because it taught me something new about partial fractions. – Ben Crowell Feb 2 '12 at 0:14
For $|x|<1$ we have $1/(x^n - 1) = - \sum_{k=0}^\infty x^{nk}$, so an antiderivative of this is $- \sum_{k=0}^\infty \frac{x^{nk+1}}{nk+1}$. This can be written as a hypergeometric function: $- x \ {}_2F_1\left(\frac{1}{n},1; 1+\frac{1}{n}; x^n\right)$.
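As a quick sanity check of this closed form (my addition; using mpmath and a small $n$ for speed), the numerical derivative of $-x\,{}_2F_1\left(\frac{1}{n},1;1+\frac{1}{n};x^n\right)$ should reproduce the integrand $1/(x^n-1)$ for $|x|<1$:
import mpmath as mp

n = 5
F = lambda x: -x * mp.hyp2f1(mp.mpf(1)/n, 1, 1 + mp.mpf(1)/n, x**n)
x0 = mp.mpf("0.5")
print(mp.diff(F, x0))   # derivative of the proposed antiderivative at x0
print(1/(x0**n - 1))    # the integrand at x0; the two outputs should agree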
-
Cool! I'm curious about your thought process. Did you write out the series and then look it up in a table to identify it as a hypergeometric? – Ben Crowell Feb 2 '12 at 0:16
|
2015-11-30 06:17:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9017022848129272, "perplexity": 278.26104431237616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398461113.77/warc/CC-MAIN-20151124205421-00115-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://carma.newcastle.edu.au/event.php?n=341
|
CARMA SEMINAR
Speaker: Kevin Hare, University of Waterloo
Title: An explicit counter-example to the Lagarias-Wang finiteness conjecture
Location: Room V129, Mathematics Building (Callaghan Campus), The University of Newcastle
Time and Date: 4:00 pm, Thu, 28th Feb 2013
Abstract: The joint spectral radius of a finite set of real $d \times d$ matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the finiteness property if there exists a periodic product which achieves this maximal rate of growth. J. C. Lagarias and Y. Wang conjectured in 1995 that every finite set of real $d \times d$ matrices satisfies the finiteness property. However, T. Bousch and J. Mairesse proved in 2002 that counterexamples to the finiteness conjecture exist, showing in particular that there exists a family of pairs of $2 \times 2$ matrices which contains a counterexample. Similar results were subsequently given by V. D. Blondel, J. Theys and A. A. Vladimirov and by V. S. Kozyakin, but no explicit counterexample to the finiteness conjecture was given. This talk will discuss an explicit counter-example to this conjecture.
|
2021-03-07 22:08:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8362177610397339, "perplexity": 650.9388128789443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378872.82/warc/CC-MAIN-20210307200746-20210307230746-00335.warc.gz"}
|
https://en.wikipedia.org/wiki/%ce%a3-finite
|
# σ-finite measure
(Redirected from Σ-finite)
In mathematics, a positive (or signed) measure μ defined on a σ-algebra Σ of subsets of a set X is called finite if μ(X) is a finite real number (rather than ∞). The measure μ is called σ-finite if X is the countable union of measurable sets with finite measure. A set in a measure space is said to have σ-finite measure if it is a countable union of sets with finite measure.
A different but related notion is s-finiteness. A measure $m$ is s-finite if and only if there exists a sequence of finite measures $(m_n)_{n\in\mathbb{N}}$ with $m = \sum_{n\in\mathbb{N}} m_n$; that is, m is the countable sum of finite measures.
## Examples
### Lebesgue measure
For example, Lebesgue measure on the real numbers is not finite, but it is σ-finite. Indeed, consider the intervals [k, k + 1) for all integers k; there are countably many such intervals, each has measure 1, and their union is the entire real line.
### Counting measure
Alternatively, consider the real numbers with the counting measure; the measure of any finite set is the number of elements in the set, and the measure of any infinite set is infinity. This measure is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. But the set of natural numbers $\mathbb{N}$ with the counting measure is σ-finite.
### Locally compact groups
Locally compact groups which are σ-compact are σ-finite under Haar measure. For example, all connected, locally compact groups G are σ-compact. To see this, let V be a relatively compact, symmetric (that is V = V−1) open neighborhood of the identity. Then
$H=\bigcup_{n\in\mathbb{N}} V^{n}$
is an open subgroup of G. Therefore H is also closed, since its complement is a union of open cosets of H; by connectivity of G, H must then be G itself. Since each $V^n$ is relatively compact and $G = \bigcup_n V^n$, the group G is σ-compact, hence σ-finite under Haar measure. Thus all connected Lie groups are σ-finite under Haar measure.
### Negative examples
Any non-trivial measure taking only the two values 0 and $\infty$ is clearly non-σ-finite. One example on $\mathbb{R}$ is: for all $A \subset \mathbb{R}$, $\mu(A) = \infty$ if and only if A is not empty; another one is: for all $A \subset \mathbb{R}$, $\mu(A) = \infty$ if and only if A is uncountable, 0 otherwise. Incidentally, both are translation-invariant.
## Properties
The class of σ-finite measures has some very convenient properties; σ-finiteness can be compared in this respect to separability of topological spaces. Some theorems in analysis require σ-finiteness as a hypothesis. Usually, both the Radon–Nikodym theorem and Fubini's theorem are stated under an assumption of σ-finiteness on the measures involved. However, as shown in Segal's paper Equivalences of measure spaces (Am. J. Math. 73, 275 (1953)) they require only a weaker condition, namely localisability.
Though measures which are not σ-finite are sometimes regarded as pathological, they do in fact occur quite naturally. For instance, if X is a metric space of Hausdorff dimension r, then all lower-dimensional Hausdorff measures are non-σ-finite if considered as measures on X.
### Equivalence to a probability measure
Any σ-finite measure μ on a space X is equivalent to a probability measure on X: let Vn, n ∈ N, be a covering of X by pairwise disjoint measurable sets of finite μ-measure, and let wn, n ∈ N, be a sequence of positive numbers (weights) such that
$\sum_{n=1}^{\infty} w_{n} = 1.$
The measure ν defined by
$\nu(A) = \sum_{n=1}^{\infty} w_{n}\,\frac{\mu(A \cap V_{n})}{\mu(V_{n})}$
is then a probability measure on X with precisely the same null sets as μ.
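For a concrete instance, take Lebesgue measure on $\mathbb{R}$ with $V_n = [-n,\,-n+1) \cup [n-1,\,n)$ for $n \geq 1$ (pairwise disjoint, each of measure $2$, with union $\mathbb{R}$) and weights $w_n = 2^{-n}$; the resulting $\nu$ is then a probability measure equivalent to Lebesgue measure.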
### Relation to s-finiteness
If m is a σ-finite measure, then it is s-finite. However, the converse is not true.
|
2016-12-08 14:32:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9489178657531738, "perplexity": 368.9252811453025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541529.0/warc/CC-MAIN-20161202170901-00452-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://cran.microsoft.com/snapshot/2018-04-29/web/packages/rmsfuns/vignettes/rmsfuns.html
|
# Introduction
This package contains several helper functions for use in data manipulation, folder creation and viewing purposes. See examples of such functions below.
## build_path
This function builds the entire folder path provided by the user. If the path does not exist, it builds it without error. It is effectively a user-friendly wrapper to the base function dir.create.
library(rmsfuns)
build_path("C:/Temp/data")
Can also be used to build a vector of paths:
library(rmsfuns)
Path <- build_path(paste0("C:/Temp/data/", c("SubFolder1", "SubFolder2", "SubFolder3")))
print(Path)
## ViewXL
This function makes it easy to quickly view any R object or dataframe in excel. A random file is created in R’s temporary folder location (see tempdir() to find your location). The excel file location can also be overridden using the FilePath command. IMPORTANT: if using a mac, set mac = TRUE in the command (equal to FALSE by default).
library(rmsfuns)
df <- data.frame(date =
seq(as.Date("2012-01-01"),
as.Date("2015-08-18"),"day"),
x = rnorm(1326, 10,2))
ViewXL(df)
# ViewXL(df, mac = TRUE) if using a mac
To clean the R temporary file folder (done periodically if using ViewXL often - especially with large excel files), use CleanTempFolder:
library(rmsfuns)
CleanTempFolder()
## dateconverter
The dateconverter function makes it easy to create a date vector in R. It offers a simple wrapper using xts functionality to create a vector of dates between a given Start and End date, and then corrects for the chosen frequency transformation.
It can do the following transformations between given Start and End Dates:
alldays ; calendarEOM ; weekdays ; weekdayEOW ; weekdayEOM ; weekdayEOQ ; weekdayEOY
library(rmsfuns)
dates <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"), "alldays")
dates <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"), "weekdays")
dates <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"), "calendarEOM")
dates <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"), "weekdayEOW")
dates <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"), "weekdayEOM")
dates <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"), "weekdayEOQ")
dates <- dateconverter(as.Date("2000-01-01"), as.Date("2017-01-01"), "weekdayEOY")
## PromptAsTime
To change R’s prompt to reflect the time, use the PromptAsTime function. This can be used as a simple means of timing long calculations without using Sys.time() commands. This can be very useful if running, e.g., many functions overnight, and later viewing the time taken on multiple calculations.
To set the timer on, type:
PromptAsTime(TRUE)
The time for each command will now be shown in Rstudio’s prompt.
This is particularly useful for when you want to see, after running a code script in Rstudio, what the duration of each line was. E.g., run the following in your Rstudio console:
PromptAsTime(TRUE)
x <- 100
Sys.sleep(3)
x*x
print(x)
PromptAsTime(FALSE)
You can then see in the prompt that the Sys.sleep(3) call lasted 3 seconds.
## load_pkg
The load_pkg function offers a convenient way to load a vector of packages in a single call:
Packages <- c("xts", "dplyr")
load_pkg(Packages)
|
2023-01-30 19:26:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22591587901115417, "perplexity": 7678.825815919894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00577.warc.gz"}
|
https://cdsweb.cern.ch/collection/Published%20Articles?ln=ru
|
Published Articles
Latest additions:
2015-06-30
08:09
Getting Humans to do Quantum Optimization - User Acquisition, Engagement and Early Results from the Citizen Cyberscience Game Quantum Moves / Lieberoth, Andreas ; Pedersen, Mads Kock ; Marin, Andreea Catalina ; Planke, Tilo ; Sherson, Jacob Friis The game Quantum Moves was designed to pit human players against computer algorithms, combining their solutions into hybrid optimization to control a scalable quantum computer. In this midstream report, we open our design process and describe the series of constitutive building stages going into a quantum physics citizen science game. [...] arXiv:1506.08761.- 2015 - 26 p. - Published in : Human Computation 1(2) 219-244 (2014) External link: Preprint
2015-06-30
08:09
Disorder-induced light trapping enhanced by pulse collisions in one-dimensional nonlinear photonic crystals / Novitsky, Denis We use numerical simulations to study interaction of co- and counter-propagating pulses in disordered multilayers with noninstantaneous Kerr nonlinearity. We propose a statistical argument for existence of the disorder-induced trapping which implies the dramatic rise of the probability of realization with low output energy in the structure with a certain level of disorder. [...] arXiv:1506.08607.- 2015 - 7 p. - Published in : Opt. Commun. 353 (2015) 56-62 External link: Preprint
2015-06-30
08:09
The cuttlefish Sepia officinalis (Sepiidae, Cephalopoda) constructs cuttlebone from a liquid-crystal precursor / Checa, Antonio G ; Cartwright, Julyan H E ; Sánchez-Almazo, Isabel ; Andrade, José P ; Ruiz-Raya, Francisco Cuttlebone, the sophisticated buoyancy device of cuttlefish, is made of extensive superposed chambers that have a complex internal arrangement of calcified pillars and organic membranes. It has not been clear how this structure is assembled. [...] arXiv:1506.08290.- 2015 - Published in : Scientific Reports 5 (2015) 11513 External link: Preprint
2015-06-30
08:09
DGDFT: A Massively Parallel Method for Large Scale Density Functional Theory Calculations / Hu, Wei ; Lin, Lin ; Yang, Chao We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) [J. Comput. [...] arXiv:1506.08147.- 2015 - 34 p. External link: Preprint
2015-06-30
08:09
A Study on the Effect of Exit Widths and Crowd Sizes in the Formation of Arch in Clogged Crowds / Castro, Francisco Enrique Vicente G ; Pabico, Jaderick P The arching phenomenon is an emergent pattern formed by a $c$-sized crowd of intelligent, goal-oriented, autonomous, heterogeneous individuals moving towards a $w$-wide exit along a long $W$-wide corridor, where $W>w$. We collected empirical data from microsimulations to identify the combination effects of~$c$ and~$w$ to the time~$T$ of the onset of and the size~$S$ of the formation of the arch. [...] arXiv:1506.08133.- 2015 - 9 p. External link: Preprint
2015-06-30
08:09
Dynamic of astrophysical jets in the complex octonion space / Weng, Zi-Hua The paper aims to consider the strength gradient force as the dynamic of astrophysical jets, explaining the movement phenomena of astrophysical jets. J. [...] arXiv:1506.08058.- 2015 - 19 p. - Published in : Int. J. Mod. Phys. D 24 (2015) 1550072 External link: Preprint
2015-06-30
08:09
Mechanical properties of branched actin filaments / Razbin, Mohammadhosein ; Falcke, Martin ; Benetatos, Panayotis ; Zippelius, Annette Cells moving on a two dimensional substrate generate motion by polymerizing actin filament networks inside a flat membrane protrusion. New filaments are generated by branching off existing ones, giving rise to branched network structures. [...] arXiv:1506.08051.- 2015 - Published in : 2015 Phys. Biol. 12 046007 External link: Preprint
2015-06-30
08:09
Large-sensitive-area superconducting nanowire single-photon detector at 850 nm with high detection efficiency / Li, Hao ; Zhang, Lu ; You, Lixing ; Yang, Xiaoyan ; Zhang, Weijun ; Liu, Xiaoyu ; Chen, Sijing ; Wang, Zhen ; Xie, Xiaoming Satellite-ground quantum communication requires single-photon detectors of 850-nm wavelength with both high detection efficiency and large sensitive area. We developed superconducting nanowire single-photon detectors (SNSPDs) on one-dimensional photonic crystals, which acted as optical cavities to enhance the optical absorption, with a sensitive-area diameter of 50 um. [...] arXiv:1506.07922.- 2015 - 8 p. External link: Preprint
2015-06-30
08:09
Superconducting nanowire single-photon detectors at a wavelength of 940 nm / Zhang, W J ; Li, H ; You, L X ; He, Y H ; Zhang, L ; Liu, X Y ; Yang, X Y ; Wu, J J ; Guo, Q ; Chen, S J et al. We develop single-photon detectors comprising single-mode fiber-coupled superconducting nanowires, with high system detection efficiencies at a wavelength of 940 nm. The detector comprises a 6.5-nm-thick, 110-nm-wide NbN nanowire meander fabricated onto a Si substrate with a distributed Bragg reflector for enhancing the optical absorptance. [...] arXiv:1506.07921.- 2015 - 12 p. External link: Preprint
2015-06-30
08:09
Few-photon imaging at 1550 nm using a low-timing-jitter superconducting nanowire single-photon detector / Zhou, H ; He, Y ; You, L ; Chen, S ; Zhang, W ; Wu, J ; Wang, Z ; Xie, X We demonstrated a laser depth imaging system based on the time-correlated single-photon counting technique, which was incorporated with a low-jitter superconducting nanowire single-photon detector (SNSPD), operated at the wavelength of 1550 nm. A sub-picosecond time-bin width was chosen for photon counting, resulting in a discrete noise of less than one/two counts for each time bin under indoor/outdoor daylight conditions, with a collection time of 50 ms. [...] arXiv:1506.07920.- 2015 - 7 p. External link: Preprint
|
2015-07-02 09:49:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5901956558227539, "perplexity": 10361.932248630468}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095494.6/warc/CC-MAIN-20150627031815-00056-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/378/2/d/a/
|
# Properties
Label: 378.2.d.a
Level: 378
Weight: 2
Character orbit: 378.d
Analytic conductor: 3.018
Analytic rank: 0
Dimension: 4
CM: no
Inner twists: 4
# Related objects
## Newspace parameters
Level: $$N$$ = $$378 = 2 \cdot 3^{3} \cdot 7$$
Weight: $$k$$ = $$2$$
Character orbit: $$[\chi]$$ = 378.d (of order $$2$$, degree $$1$$, minimal)
## Newform invariants
Self dual: no
Analytic conductor: $$3.01834519640$$
Analytic rank: $$0$$
Dimension: $$4$$
Coefficient field: $$\Q(\zeta_{12})$$
Coefficient ring: $$\Z[a_1, \ldots, a_{7}]$$
Coefficient ring index: $$1$$
Twist minimal: yes
Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{12}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q -\zeta_{12}^{3} q^{2} - q^{4} + ( -2 \zeta_{12} + \zeta_{12}^{3} ) q^{5} + ( -3 + \zeta_{12}^{2} ) q^{7} + \zeta_{12}^{3} q^{8} +O(q^{10})$$ $$q -\zeta_{12}^{3} q^{2} - q^{4} + ( -2 \zeta_{12} + \zeta_{12}^{3} ) q^{5} + ( -3 + \zeta_{12}^{2} ) q^{7} + \zeta_{12}^{3} q^{8} + ( -1 + 2 \zeta_{12}^{2} ) q^{10} -3 \zeta_{12}^{3} q^{11} + ( -4 + 8 \zeta_{12}^{2} ) q^{13} + ( \zeta_{12} + 2 \zeta_{12}^{3} ) q^{14} + q^{16} + ( -8 \zeta_{12} + 4 \zeta_{12}^{3} ) q^{17} + ( -2 + 4 \zeta_{12}^{2} ) q^{19} + ( 2 \zeta_{12} - \zeta_{12}^{3} ) q^{20} -3 q^{22} + 6 \zeta_{12}^{3} q^{23} -2 q^{25} + ( 8 \zeta_{12} - 4 \zeta_{12}^{3} ) q^{26} + ( 3 - \zeta_{12}^{2} ) q^{28} -6 \zeta_{12}^{3} q^{29} + ( 3 - 6 \zeta_{12}^{2} ) q^{31} -\zeta_{12}^{3} q^{32} + ( -4 + 8 \zeta_{12}^{2} ) q^{34} + ( 5 \zeta_{12} - 4 \zeta_{12}^{3} ) q^{35} -2 q^{37} + ( 4 \zeta_{12} - 2 \zeta_{12}^{3} ) q^{38} + ( 1 - 2 \zeta_{12}^{2} ) q^{40} + ( -4 \zeta_{12} + 2 \zeta_{12}^{3} ) q^{41} -2 q^{43} + 3 \zeta_{12}^{3} q^{44} + 6 q^{46} + ( 4 \zeta_{12} - 2 \zeta_{12}^{3} ) q^{47} + ( 8 - 5 \zeta_{12}^{2} ) q^{49} + 2 \zeta_{12}^{3} q^{50} + ( 4 - 8 \zeta_{12}^{2} ) q^{52} + 3 \zeta_{12}^{3} q^{53} + ( -3 + 6 \zeta_{12}^{2} ) q^{55} + ( -\zeta_{12} - 2 \zeta_{12}^{3} ) q^{56} -6 q^{58} + ( -4 \zeta_{12} + 2 \zeta_{12}^{3} ) q^{59} + ( 4 - 8 \zeta_{12}^{2} ) q^{61} + ( -6 \zeta_{12} + 3 \zeta_{12}^{3} ) q^{62} - q^{64} -12 \zeta_{12}^{3} q^{65} + 2 q^{67} + ( 8 \zeta_{12} - 4 \zeta_{12}^{3} ) q^{68} + ( 1 - 5 \zeta_{12}^{2} ) q^{70} + 12 \zeta_{12}^{3} q^{71} + ( 7 - 14 \zeta_{12}^{2} ) q^{73} + 2 \zeta_{12}^{3} q^{74} + ( 2 - 4 \zeta_{12}^{2} ) q^{76} + ( 3 \zeta_{12} + 6 \zeta_{12}^{3} ) q^{77} + 8 q^{79} + ( -2 \zeta_{12} + \zeta_{12}^{3} ) q^{80} + ( -2 + 4 \zeta_{12}^{2} ) q^{82} + ( 2 \zeta_{12} - \zeta_{12}^{3} ) q^{83} + 12 q^{85} + 2 \zeta_{12}^{3} q^{86} + 3 q^{88} + ( -12 \zeta_{12} + 6 \zeta_{12}^{3} ) q^{89} + ( 4 - 20 \zeta_{12}^{2} ) q^{91} -6 \zeta_{12}^{3} q^{92} + ( 2 - 4 \zeta_{12}^{2} ) q^{94} -6 \zeta_{12}^{3} q^{95} + ( -7 + 14 \zeta_{12}^{2} ) q^{97} + ( -5 \zeta_{12} - 3 \zeta_{12}^{3} ) q^{98} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q - 4q^{4} - 10q^{7} + O(q^{10})$$ $$4q - 4q^{4} - 10q^{7} + 4q^{16} - 12q^{22} - 8q^{25} + 10q^{28} - 8q^{37} - 8q^{43} + 24q^{46} + 22q^{49} - 24q^{58} - 4q^{64} + 8q^{67} - 6q^{70} + 32q^{79} + 48q^{85} + 12q^{88} - 24q^{91} + O(q^{100})$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/378\mathbb{Z}\right)^\times$$.
$$n$$: 29, 325
$$\chi(n)$$: −1, −1
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
377.1
0.866025 + 0.500000i −0.866025 + 0.500000i 0.866025 − 0.500000i −0.866025 − 0.500000i
1.00000i 0 −1.00000 −1.73205 0 −2.50000 + 0.866025i 1.00000i 0 1.73205i
377.2 1.00000i 0 −1.00000 1.73205 0 −2.50000 0.866025i 1.00000i 0 1.73205i
377.3 1.00000i 0 −1.00000 −1.73205 0 −2.50000 0.866025i 1.00000i 0 1.73205i
377.4 1.00000i 0 −1.00000 1.73205 0 −2.50000 + 0.866025i 1.00000i 0 1.73205i
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
3.b odd 2 1 inner
7.b odd 2 1 inner
21.c even 2 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 378.2.d.a 4
3.b odd 2 1 inner 378.2.d.a 4
4.b odd 2 1 3024.2.k.j 4
7.b odd 2 1 inner 378.2.d.a 4
9.c even 3 1 1134.2.m.e 4
9.c even 3 1 1134.2.m.f 4
9.d odd 6 1 1134.2.m.e 4
9.d odd 6 1 1134.2.m.f 4
12.b even 2 1 3024.2.k.j 4
21.c even 2 1 inner 378.2.d.a 4
28.d even 2 1 3024.2.k.j 4
63.l odd 6 1 1134.2.m.e 4
63.l odd 6 1 1134.2.m.f 4
63.o even 6 1 1134.2.m.e 4
63.o even 6 1 1134.2.m.f 4
84.h odd 2 1 3024.2.k.j 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
378.2.d.a 4 1.a even 1 1 trivial
378.2.d.a 4 3.b odd 2 1 inner
378.2.d.a 4 7.b odd 2 1 inner
378.2.d.a 4 21.c even 2 1 inner
1134.2.m.e 4 9.c even 3 1
1134.2.m.e 4 9.d odd 6 1
1134.2.m.e 4 63.l odd 6 1
1134.2.m.e 4 63.o even 6 1
1134.2.m.f 4 9.c even 3 1
1134.2.m.f 4 9.d odd 6 1
1134.2.m.f 4 63.l odd 6 1
1134.2.m.f 4 63.o even 6 1
3024.2.k.j 4 4.b odd 2 1
3024.2.k.j 4 12.b even 2 1
3024.2.k.j 4 28.d even 2 1
3024.2.k.j 4 84.h odd 2 1
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(378, [\chi])$$:
$$T_{5}^{2} - 3$$ $$T_{13}^{2} + 48$$
## Hecke Characteristic Polynomials
$p$ $F_p(T)$
$2$ $$( 1 + T^{2} )^{2}$$
$3$
$5$ $$( 1 + 7 T^{2} + 25 T^{4} )^{2}$$
$7$ $$( 1 + 5 T + 7 T^{2} )^{2}$$
$11$ $$( 1 - 13 T^{2} + 121 T^{4} )^{2}$$
$13$ $$( 1 - 2 T + 13 T^{2} )^{2}( 1 + 2 T + 13 T^{2} )^{2}$$
$17$ $$( 1 - 14 T^{2} + 289 T^{4} )^{2}$$
$19$ $$( 1 - 8 T + 19 T^{2} )^{2}( 1 + 8 T + 19 T^{2} )^{2}$$
$23$ $$( 1 - 10 T^{2} + 529 T^{4} )^{2}$$
$29$ $$( 1 - 22 T^{2} + 841 T^{4} )^{2}$$
$31$ $$( 1 - 35 T^{2} + 961 T^{4} )^{2}$$
$37$ $$( 1 + 2 T + 37 T^{2} )^{4}$$
$41$ $$( 1 + 70 T^{2} + 1681 T^{4} )^{2}$$
$43$ $$( 1 + 2 T + 43 T^{2} )^{4}$$
$47$ $$( 1 + 82 T^{2} + 2209 T^{4} )^{2}$$
$53$ $$( 1 - 97 T^{2} + 2809 T^{4} )^{2}$$
$59$ $$( 1 + 106 T^{2} + 3481 T^{4} )^{2}$$
$61$ $$( 1 - 14 T + 61 T^{2} )^{2}( 1 + 14 T + 61 T^{2} )^{2}$$
$67$ $$( 1 - 2 T + 67 T^{2} )^{4}$$
$71$ $$( 1 + 2 T^{2} + 5041 T^{4} )^{2}$$
$73$ $$( 1 + T^{2} + 5329 T^{4} )^{2}$$
$79$ $$( 1 - 8 T + 79 T^{2} )^{4}$$
$83$ $$( 1 + 163 T^{2} + 6889 T^{4} )^{2}$$
$89$ $$( 1 + 70 T^{2} + 7921 T^{4} )^{2}$$
$97$ $$( 1 - 47 T^{2} + 9409 T^{4} )^{2}$$
|
2020-05-26 07:00:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9914937615394592, "perplexity": 12964.39571561996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390448.11/warc/CC-MAIN-20200526050333-20200526080333-00238.warc.gz"}
|
https://mathematica.stackexchange.com/questions/134662/how-do-i-defer-evaluate-in-conditional-expressions-when-the-arguments-are-list-e
|
# How do I defer evaluation in conditional expressions when the arguments are list elements?
I'm trying to take a conditional expression like "3 > 1" as an argument for a function but without its evaluation before being passed. HoldForm works fine in situations like this:
*In[1]:=* 3 > 1
*Out[1]=* True
*In[2]:=* 3 > 1 // HoldForm
*Out[2]=* 3 > 1
However, it doesn't work if I use lists....
*In[3]:=* a = {3, 2, 1};
b = {1, 2, 3};
*In[4]:=* a[[1]] > b[[1]]
*Out[4]=* True
*In[5]:=* a[[1]] > b[[1]] // HoldForm
*Out[5]=* a[[1]] > b[[1]]
I would like to have '3 > 1' as my output for Out[5]. Using Evaluate[a[[1]]], etc., does not work. Thanks for your help.
"I'm trying to take a conditional expression like "3 > 1" as an argument for a function but without its evaluation before being passed" -- that sounds like you need a hold Attribute on your function, e.g. HoldFirst.
For your last example you appear to want a way to evaluate Part but keep Greater unevaluated. If you are using Mathematica 10 or later Inactivate may be a good choice:
a = {3, 2, 1};
b = {1, 2, 3};
SetAttributes[f1, HoldFirst]
f1[bool_] := Inactivate[bool, Greater | Less | GreaterEqual | LessEqual | Inequality]
f1[a[[1]] > b[[1]]]
FullForm[%]
3 > 1
Inactive[Greater][3,1]
Another approach is to use HoldForm and specifically evaluate Part:
SetAttributes[f2, HoldFirst]
f2[bool_] := HoldForm[bool] /. p_Part :> RuleCondition[p]
f2[a[[1]] > b[[1]]]
3 > 1 (* HoldForm *)
For an explanation of RuleCondition see:
If you wish to use this output as input that will fully evaluate you can substitute Defer for HoldForm.
• @Kendall You're welcome; I am glad I could help. If this fully satisfies your question please consider Accepting it. – Mr.Wizard Jan 4 '17 at 13:28
|
2019-08-24 07:55:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23566409945487976, "perplexity": 4256.507593932646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319915.98/warc/CC-MAIN-20190824063359-20190824085359-00374.warc.gz"}
|
http://physics.stackexchange.com/questions/63101/infinite-reflection-of-light-and-the-conservation-of-energy-momentum
|
# Infinite reflection of light and the conservation of energy / momentum
First off, I confess I'm no physicist, but I have been asking people with a more extensive knowledge this one question, without a definitive answer so far.
Basically, I'm playing around with the idea of photons having mass; as this part of the wiki page shows, it's not an entirely new concept... Especially the latest (mangled) sentence is of interest to me.
I have been reading up about light, the duality of it, photons and it being the only known gauge boson (massless thingamajig) and what have you. As a result, I'm even more confused, so I thought I'd ask my question here.
Setup:
Suppose we took two perfect mirrors, stretching out into infinity, perfectly parallel, facing each other. If I were to shoot a photon onto either one of these mirrors at a 45° angle, what would happen?
Hypotheses:
As far as I can know (or guess, or can imagine) either one of three things can happen:
• The photon just bounces back and forth into infinity at that leisurely pace of $c$.
• Calling on the particle part of light's duality: Every action causes an equal and opposite reaction. Upon colliding with the surface of the mirrors, energy is needed for the photon to change direction. As everything strides towards entropy, I'd assume there is some heat being released (photon's energy onto the mirror)?
If that's the case, at some point the photon's electromagnetic "charge", i.e. its energy reserves, should run out. What do I end up with? Slightly warmer mirrors and a massless, empty shell of a photon at the end? What is a photon that no longer has any energy anyway? Is that the famous dark matter... or am I going all too sci-fi-crazy now? Because somewhere I did read that light, being massless, obviously has no rest mass either, nor does it have an electric charge of its own. That causes me to think of a photon as some sort of carrier, an empty satchel, and because it's not exactly huge, it can but contain a finite amount of energy (I think).
• Last thing I can think of: because of my photon's bouncing, and my being at a terrible loss trying to grasp the formulas and theories about light's physical properties, I've gotten the (perhaps silly) idea that the constant changing of the direction of propagation could affect the wavelength, essentially generating something more like gamma rays. Again, I don't know what this entails for my mirror setup, but when news breaks of an impending nuclear disaster, I don't think a mirror completely deflects gamma rays. In other words, I wouldn't even think it unlikely if somebody told me that photon would just bugger off.
I hope someone can make sense of the bizarre meanders of a non-physicist's mind, but I would like to know the answer to a question I came up with about 10 years ago.
So far I've gotten the answers:
• Oh, I'd have to check on that one.
• Of course, they talk about the duality of light, but light is, essentially pure energy. they've developed this dual-character as a working model. Much like everything "'t is but a theory" (I particularly disliked this answer for some reason)
• Do you know how they spot a black hole? (I replied: No) Because there is light, but none around it. All light is drawn to the black hole. (this was followed by an awkward silence, and a smug nod. Which met with a confused and monkey like gaze from my part)
Any more confusing ideas are always welcome.
Edit/recap:
Thanks to all of you for the info. In response to the comments, the kernel of the question is this: If I were able to follow the aforementioned photon in this setup, what changes, if any, will I see along the line? Heat being generated? The photon "disintegrating" or dissipating? Nothing (just endless bouncing back and forth)?
Reading the wiki on Total Internal Reflection, I noticed that this occurs with soundwaves, too. I immediately thought of that horrid screeching feedback noise you can get if you hold a mic to a speaker. I guess I sort of translated that phenomenon into the photon changing wavelengths.
Funny, but true: I remember as a child asking my father if you were able to create an infinite broadcast of sorts using two transmitters and two receivers playing a sound back and forth to them. In some way or another, I've always wondered about stuff like this as it turns out...
Mirror mass:
I suppose the mirrors would have to have infinite mass for them to stretch out into infinity. Though after some more checking, that complicates things considering $E = pc$. I've added that to my many light-related bookmarks, and I'll get back to you on that.
-
+1 because I think the infinite reflection question is a neat concept. Your question is all over the place though and I think you should edit it down to just the kernel of what you want to know. – Brandon Enright May 2 '13 at 23:52
Your infinite parallel mirrors setup is very similar to fiber optics and total internal reflection probably has a lot to say about the answer: en.wikipedia.org/wiki/Total_internal_reflection – Brandon Enright May 2 '13 at 23:53
– dmckee May 3 '13 at 0:03
Do the mirrors have infinite mass? – joshphysics May 3 '13 at 0:05
@dmckee: Thanks for the link, I now know I have about a thousand more wiki pages to read/decipher ;) – Elias Van Ootegem May 3 '13 at 10:21
First, of course there's no perfect mirror. But let's assume there was one.
Next, the question is: Is the bouncing off the mirrors elastic or inelastic. If the photon is absorbed and re-emitted with the same frequency, then the bouncing is elastic and no energy is lost by the photon. It would then go on forever and ever.
But what if it does lose energy with each bounce? Well, your two mirrors form a cavity and if we appeal to the wave-aspect of light, only waves with wavelengths that "fit" into the cavity are allowed, so there'd be a maximum allowed wavelength, $\lambda_0 = 2L$, where $L$ is the distance between your mirrors. Since the energy of a photon is inversely related to its wavelength, this means that the photon in your cavity has a minimum energy below which it cannot fall.
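To put a number on that energy floor, here is a minimal sketch (mine, not part of the original answer) that computes the minimum photon energy for an ideal cavity of length $L$, assuming perfectly reflecting mirrors:

```python
# Minimal sketch: the longest standing wave that fits between the mirrors has
# wavelength 2*L, so E = h*c / (2*L) is the lowest photon energy the cavity allows.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light in vacuum, m/s

def min_photon_energy(L):
    """Minimum photon energy (in joules) for an ideal cavity of length L (in metres)."""
    lambda_max = 2 * L           # longest allowed standing-wave wavelength
    return h * c / lambda_max

print(min_photon_energy(0.01))   # ~1e-23 J for a 1 cm cavity
```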
If you add the concept of heat / temperature / entropy to the mix, what you will get is that the walls (mirrors) are in thermal equilibrium with the photons in your cavity: Some of the energy is then stored in the walls and some in the photons. In fact, considering the situation of taking a cavity at some temperature and looking at the nature of the light that comes out of it (if you poke a tiny hole in it) is one of the phenomena that led to the discovery of quantum physics.
Some misconceptions: A photon has no "electromagnetic charge", it is a massless, chargeless particle. Now what if its energy "runs out"? Then it just ceases to exist. There is no photon without energy.
-
qualify "ceases to exist" by maybe :it is so low in the ifra red spctrum that it is absorbed in a vibrational qm transition of a molecule. from momentum conservation it will be losing energy at each bounce, lowering wavelength, ulsess as Josh is asking the mirrors have infinite mass? – anna v May 3 '13 at 4:15
This might sound stupid, but I know a photon is massless and has no charge, but at the same time it's said to be "an elementary particle, the quantum of light and all other forms of electromagnetic radiation, and the force carrier for the electromagnetic force". So it has no charge, but carries energy... is energy even... what? who? how? *_- – Elias Van Ootegem May 3 '13 at 10:25
Also: if a photon ceases to exist if the energy "runs out", shouldn't you be left with an empty gauge boson particle? – Elias Van Ootegem May 3 '13 at 10:27
No, a photon cannot "run out" of energy independent of its other properties. Technically, if a photon "loses" energy, what really happens is that a photon of some initial energy $E_1$ is absorbed/destroyed and a new photon of some new energy $E_2$ is emitted. "Running out of energy" then just means that a new photon is never emitted. That can happen if a photon is absorbed by a crystal and the energy then re-emitted as lattice vibrations instead of a new photon. – Lagerbaer May 3 '13 at 15:10
@Lagerbaer: I'm sorry to be this thick, I got hung up on that gauge boson being matter, and matter should be conserved at all times... I have found out, now, that matter is not perfectly conserved, though I've also learned that, even though they're massless, photons still add mass. Anyway, I've got enough material out of this to study this matter (no pun intended) for a couple of days/weeks... meanwhile: thanks for the info! – Elias Van Ootegem May 3 '13 at 16:30
|
2015-11-30 06:18:51
|
https://web2.0calc.com/questions/the-solids-are-similar-find-the-volume-v-of-the-red-solid
|
+0
# The solids are similar. Find the volume V of the red solid.
0
122
1
The solids are similar. Find the volume V of the red solid.
Mar 10, 2021
#1
+498
0
The height of the blue pyramid is: $$\frac{5292 \cdot 3}{21^2} = \frac{15876}{441} = 36$$
$$\frac{21}{7} = 3$$ which means the height of the red pyramid is just $$36\div 3 = 12$$
This means the volume of the red pyramid is:
$$\frac{1}{3} \cdot 7^2 \cdot 12 = 4 \cdot 49 = 196$$
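As a quick cross-check (my own sketch, using the numbers assumed in the answer above): similar solids scale volumes by the cube of the linear scale factor, so the red volume should also come out as $5292/3^3$:

```python
# Sanity check: for similar pyramids with base sides 21 and 7, the linear
# scale factor is 3, so volumes differ by a factor of 3**3 = 27.
V_blue, s_blue, s_red = 5292, 21, 7

h_blue = 3 * V_blue / s_blue**2   # height from V = (1/3) * s^2 * h
k = s_blue / s_red                # linear scale factor
V_red = V_blue / k**3             # volume scales with k**3

print(h_blue, V_red)              # 36.0 196.0
```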
|
2021-04-20 17:37:40
|
http://mathhelpforum.com/advanced-statistics/97251-mgf.html
|
1. mgf
$\displaystyle P(X=k)=\left\{\begin{array}{cc}p(1-p)^{k-1}&\mbox{k=1,2,3...} \\ 0 & \mbox{ elsewhere } \end{array}\right. \\ \mbox{ where } 0<p<1 \\$
how do I show that
i)$\displaystyle M_x(t)=\frac{pe^t}{1-(1-p)e^t}$
ii)use the mgf to find the mean and variance of X and
iii) find P(X + Y = 2)?
I'm lost with MGFs; I've searched the web, but only definitions are given, so I'm hoping to find examples.
2. Hello,
The MGF of a random variable X is :
$\displaystyle M_X(t)=\mathbb{E}(e^{tX})$
For discrete distributions, we have $\displaystyle \mathbb{E}(g(X))=\sum_{k=1}^\infty g(k) \mathbb{P}(X=k)$
So here, we have :
$\displaystyle \begin{aligned}M_X(t) &=\sum_{k=1}^\infty e^{tk} \mathbb{P}(X=k) \\ &=\sum_{k=1}^\infty e^{tk}p(1-p)^{k-1} \\ &=e^t \sum_{k=1}^\infty e^{t(k-1)} p(1-p)^{k-1} \end{aligned}$
Now change the index so that it starts at k=0:
$\displaystyle \begin{aligned} M_X(t) &=e^t \sum_{k=0}^\infty e^{tk} p(1-p)^k \\ &=pe^t \sum_{k=0}^\infty (e^t(1-p))^k \end{aligned}$
And this is just a geometric series
3. Thanks but how would I attempt parts (ii) and (iii)
4. Originally Posted by bigdoggy
$\displaystyle P(X=k)=\left\{\begin{array}{cc}p(1-p)^{k-1}&\mbox{k=1,2,3...} \\ 0 & \mbox{ elsewhere } \end{array}\right. \\ \mbox{ where } 0<p<1 \\$
how do I show that
i)$\displaystyle M_x(t)=\frac{pe^t}{1-(1-p)e^t}$
ii)use the mgf to find the mean and variance of X and
iii) find P(X + Y = 2)?
[snip]
Originally Posted by bigdoggy
Thanks but how would I attempt parts (ii) [snip]
It's called moment generating function for a reason ..... Your notes must say how to use it to calculate E(X) and E(X^2) ....?
Originally Posted by bigdoggy
[snip]
iii) find P(X + Y = 2)?
I'm lost with mgf's, I've searched the web but just definitions are given so I'm hoping to find examples
What's Y? And are X and Y independent?
5. As for the steps for the mgf :
$\displaystyle e^{tk}=(e^t)^k$ (rule of exponents)
And for the final step, remember that $\displaystyle \sum_{n=0}^\infty x^n=\frac{1}{1-x}$ if $\displaystyle |x|<1$
Thanks for your help, Moo and Mr F... sorry for double posting. I am quite new to posting questions, and I thought a question about the MGF and then the solving of the GP warranted another thread... lesson learnt!
Mr f...sorry, X & Y are random and independent...
7. Originally Posted by bigdoggy
Thanks for you're help moo and mr f...sorry for double posting Iam quite new to posting the questions and I thought a question about mgf and then the solving of the gp validated another thread...lesson learnt!
Mr f...sorry, X & Y are random and independent...
I also asked what's Y? Re-phrasing: what distribution does Y follow? Things go much easier if you post the whole question.
8. X&Y are independent random variables with common distribution given by:
$\displaystyle P(X=k)=\left\{\begin{array}{cc}p(1-p)^{k-1}&\mbox{k=1,2,3...} \\ 0 & \mbox{ elsewhere } \end{array}\right. \\ \mbox{ where } 0<p<1 \\$
That's the info given...
9. Originally Posted by bigdoggy
X&Y are independent random variables with common distribution given by:
$\displaystyle P(X=k)=\left\{\begin{array}{cc}p(1-p)^{k-1}&\mbox{k=1,2,3...} \\ 0 & \mbox{ elsewhere } \end{array}\right. \\ \mbox{ where } 0<p<1 \\$
That's the info given...
The choices are
X = 0, Y = 2.
X = 1, Y = 1.
X = 2, Y = 0.
It should be simple to calculate the probability of these events. Remember to multiply for 'and' and add for 'or' ....
10. Originally Posted by mr fantastic
The choices are
X = 0, Y = 2.
X = 1, Y = 1.
X = 2, Y = 0.
It should be simple to calculate the probability of these events. Remember to multiply for 'and' and add for 'or' ....
P(X=1)*P(Y=1) = p*p = p^2, since k=1,2,... means we can't have P(X=0) or P(Y=0)?
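For anyone checking the algebra, here is a short SymPy sketch (mine, not from the thread) that differentiates the closed-form MGF to recover the mean $1/p$ and variance $(1-p)/p^2$, and spells out part (iii):

```python
# Verify the MGF-derived moments of the geometric distribution with support k >= 1.
import sympy as sp

p, t = sp.symbols('p t', positive=True)
M = p * sp.exp(t) / (1 - (1 - p) * sp.exp(t))    # closed-form MGF from part (i)

mean = sp.simplify(sp.diff(M, t).subs(t, 0))     # E(X) = M'(0) = 1/p
var = sp.simplify(sp.diff(M, t, 2).subs(t, 0) - mean**2)   # Var(X) = (1-p)/p**2

print(mean, var)
# Part (iii): since k >= 1, X + Y = 2 forces X = Y = 1, so
# P(X + Y = 2) = P(X = 1) * P(Y = 1) = p * p = p**2.
```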
|
2018-04-25 16:46:54
|
http://wiki-125336.winmicro.org/taylor-polynomial-remainder-error.html
|
# Taylor Polynomial Remainder Error
## Contents

- Taylor Series Remainder Theorem
- Lagrange Remainder
- Taylor Series Error Estimation Calculator
- Remainder Estimation Theorem Maximum Absolute Error

## Taylor Series Remainder Theorem

Suppose that f is (k + 1)-times differentiable on an interval, with f^(k) continuous on the closed interval between a and x; the statement must hold for every positive integer k. This version covers the Lagrange and Cauchy forms of the remainder, and there are several versions of the theorem applicable in different situations.

## Lagrange Remainder

The Lagrange form of the remainder gives a bound for the error of a Taylor polynomial approximation. Since the (N + 1)th derivative of an Nth-degree polynomial is zero, the error of a degree-N approximation is controlled entirely by a bound on f^(N+1) between a and x, which lets us estimate how well the polynomial approximates a function such as cosine.
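A hedged illustration of the bound discussed above (my own example; the function and numbers are chosen for demonstration): for $\cos x$, every derivative is bounded by 1, so the Lagrange error bound for the degree-$N$ Maclaurin polynomial is $|x|^{N+1}/(N+1)!$:

```python
# Compare the actual error of a degree-4 Maclaurin polynomial of cos(x)
# with the Lagrange error bound |x|**(n+1) / (n+1)!.
import math

def cos_taylor(x, n):
    """Degree-n Maclaurin polynomial of cos evaluated at x."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n // 2 + 1))

x, n = 0.5, 4
approx = cos_taylor(x, n)
bound = abs(x)**(n + 1) / math.factorial(n + 1)

print(abs(math.cos(x) - approx) <= bound)    # True
print(approx, math.cos(x), bound)            # ~0.87760, ~0.87758, ~0.00026
```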
|
2019-03-23 01:23:16
|
https://puzzling.stackexchange.com/questions/71669/its-a-serious-game-when-six-packs-are-at-stake
|
# It's a serious game when six-packs are at stake
I'm playing a small-stakes game of a casino poker variant called Okay You Can Stop Holding 'Em. The rules are simple: the player and the house are both dealt two cards face down, there is a round of betting, then one community card is dealt face up on the table, followed by another round of betting. Three of a kind is the best hand, then a pair, then high card. There are no straights or flushes in this game. A single 52-card standard deck is used.
Now, the house/dealer has an enormous tell that everyone in the room is aware of. He always makes a big raise on the first betting round if he is holding exactly a pair of aces, kings or queens, and only with these hands.
So, cards are dealt. I bet, the house makes a big raise. Alarm bells go ding ding ding. I call.
The community card is a king. I have one king in my hand for a pair of kings, meaning that I lose if the house holds three kings or a pair of aces, and win if he holds a pair of queens.
I show the king to my friend sitting next to me and give him a wink. He says: "I bet you a six-pack that you have the best hand".
I love getting six-packs and hate giving them away. Should I take this bet?
• What's the second card in your hand? – ffao Sep 7 '18 at 21:32
• @ffao I think that’s something you have to include in the probability calculations, which frankly was a pain. I’d be interested for the community to find my mistakes though, since I think I’ve made a couple... – El-Guest Sep 7 '18 at 22:01
I believe it depends on what your second card is. If it's an ace, the chances of the dealer having two queens are higher than the chance of him having two kings or two aces. Otherwise, the chance of him having two queens is the same as him having two aces, and adding the chance that he has two kings means it's more likely the dealer wins.
• This is the intended answer, but I don't think it's immediately clear from the post why the second statement is true. – jafe Sep 8 '18 at 13:45
Okay so you know that
The dealer has either AA, KK, or QQ. You also have a K, and there’s a K on the table. This means that there are 48 cards which your second card might be.
We can break things up into cases:
Case 1: There’s a $\frac{40}{52} = \frac{10}{13}$ chance that you have a 2-J. In this case, there are $C_2^4 = 6$ ways that the dealer could have QQ, times 6 ways that you and the community card could be KK; 6 ways that the dealer could have KK times two ways that you and the community card could be KK; and 6 ways that the dealer could have AA times 6 ways that you and the community card could be KK. You lose $\frac{36 + 12}{36+12+36} = \frac{48}{84} = \frac{4}{7}$ of the time.
Case 2: There is a $\frac{4}{52}=\frac{1}{13}$ chance that you have a Q. In this case, there are 6 ways the dealer could have QQ times 2 ways that you could have the remaining Q times 4 ways for the community K. There are 6 ways the dealer could have KK times 2 ways for the community K times 4 ways for your Q. There are 6 ways the dealer could have AA times 4 ways for the community K times 4 ways for your Q. You lose $\frac{3}{5}$ of the time.
Case 3: There is a $\frac{4}{52}=\frac{1}{13}$ chance that you have a K. In this case, there are 6 ways the dealer could have QQ times 4 ways that you could have a K times 3 ways for the community K. There are 6 ways the dealer could have KK times 2 ways for the community K times 1 way for your K. There are 6 ways the dealer could have AA times 4 ways for the community K times 3 ways for your K. You lose $\frac{14}{26}=\frac{7}{13}$ of the time.
Case 4: There is a $\frac{4}{52}=\frac{1}{13}$ chance that you have a A. In this case, there are 6 ways the dealer could have QQ times 4 ways that you could have the remaining A times 4 ways for the community K. There are 6 ways the dealer could have KK times 4 ways for the community K times 4 ways for your A. There are 6 ways the dealer could have AA times 4 ways for the community K times 2 ways for your A. You lose $\frac{2}{5}$ of the time.
When you put this all together, you will lose
$\frac{10}{13}\frac{4}{7} + \frac{1}{13}\frac{3}{5} + \frac{1}{13}\frac{7}{13} + \frac{1}{13}\frac{2}{5} = \frac{1}{13} + \frac{40}{91} + \frac{7}{169} \approx 55.8\%$ of the time.
We conclude that
You should take the bet, since your preference is that you’d rather win the beer than win the hand.
• The chance you have a K can't be the same as the chance you have a Q given that you already know where two of the Ks are. There are lots of similar mistakes all around. The exact value is kind of hard to compute, but for the yes/no answer I think Chris's argument suffices. (I misread your answer to have the opposite conclusion earlier, sorry). – ffao Sep 7 '18 at 23:17
• @ffao fair enough, thanks for the feedback. I need to stay away from the stats questions, they get me nothing but downvotes! I figured I had covered it the other way round — since the community card is pulled after the second card K I thought I had taken care of it by imagining a new deck of 52 being dealt out. – El-Guest Sep 7 '18 at 23:42
Yes. As stated, there are 3 situations; in 2 of them you'll lose, in the other you win. 1/3 vs 2/3 is not good odds. So your friend is likely to be wrong, and you win the six-pack, but lose the hand.
• This assumes the outcomes are equally likely, which may or may not be true. (Haven't confirmed yet) – Quintec Sep 7 '18 at 20:57
okay, but as the card on the table isn't an A or a Q, the chances of them having an A pair or a Q pair are equal. Then there is the additional chance they have a K pair, so the overall value may not be 2/3, but it will still be skewed in their favour – AHKieran Sep 7 '18 at 22:35
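A Monte Carlo sketch of the puzzle (my own; the card encodings and trial count are arbitrary choices). It conditions on exactly what the player observes (the dealer's tell, a king on the board, and exactly one king in hand) and estimates how often the pair of kings wins:

```python
# Rejection-sampling estimate of P(player wins) given the dealer's tell.
import random

A, K, Q = 12, 11, 10   # arbitrary rank encodings

def trial():
    rank = random.choice((A, K, Q))              # the tell: dealer holds a pair of A/K/Q
    deck = [r for r in range(13) for _ in range(4)]
    deck.remove(rank); deck.remove(rank)         # remove the dealer's pair
    random.shuffle(deck)
    mine, community = deck[:2], deck[2]
    if community != K or mine.count(K) != 1:
        return None                              # doesn't match what we observed
    # We hold a pair of kings: it beats QQ but loses to AA and to dealer trips (KK).
    return 1 if rank == Q else 0

results = [r for r in (trial() for _ in range(1_000_000)) if r is not None]
print(sum(results) / len(results))               # ~0.46, so you lose ~54% of the time
```

Under these assumptions the exact conditional probability works out to 552/1200 = 0.46, which supports the yes answers: you probably lose the hand, so the six-pack bet is the better side.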
|
2019-11-21 00:39:49
|
https://socratic.org/questions/how-do-you-find-the-derivative-of-sin-2-lnx
|
How do you find the derivative of sin^2(lnx)?
Nov 1, 2015
Recall that ${\sin}^{2} \left(\ln x\right) = {\left[\sin \left(\ln x\right)\right]}^{2}$ and use the chain rule twice.
Explanation:
$\frac{d}{\mathrm{dx}} \left({\sin}^{2} \left(\ln x\right)\right) = \frac{d}{\mathrm{dx}} \left({\left[\sin \left(\ln x\right)\right]}^{2}\right)$
$= 2 \sin \left(\ln x\right) \left[\frac{d}{\mathrm{dx}} \left(\sin \left(\ln x\right)\right)\right]$
$= 2 \sin \left(\ln x\right) \left[\cos \left(\ln x\right) \frac{d}{\mathrm{dx}} \left(\ln x\right)\right]$
$= 2 \sin \left(\ln x\right) \cos \left(\ln x\right) \left[\frac{1}{x}\right]$
$= \frac{2 \sin \left(\ln x\right) \cos \left(\ln x\right)}{x}$
Which we may prefer to write as:
$= \frac{\sin \left(2 \ln x\right)}{x}$
Or, perhaps as
$= \frac{\sin \left(\ln \left({x}^{2}\right)\right)}{x}$
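As a quick check (not part of the original answer), SymPy confirms the simplified form:

```python
# Differentiate sin(ln(x))**2 and simplify.
import sympy as sp

x = sp.symbols('x', positive=True)
d = sp.diff(sp.sin(sp.log(x))**2, x)   # 2*sin(log(x))*cos(log(x))/x
print(d, sp.simplify(d))               # simplify typically returns sin(2*log(x))/x
```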
|
2022-07-06 01:42:19
|
http://cms.math.ca/cmb/kw/group%20of%20self%20homotopy%20equivalences
|
Canadian Mathematical Society www.cms.math.ca
Search results
Search: All articles in the CMB digital archive with keyword group of self homotopy equivalences
Results 1 - 1 of 1
1. CMB 2007 (vol 50 pp. 206)
Golasiński, Marek; Gonçalves, Daciberg Lima
Spherical Space Forms: Homotopy Types and Self-Equivalences for the Group $({\mathbb Z}/a\rtimes{\mathbb Z}/b) \times SL_2\,(\mathbb{F}_p)$

Let $G=({\mathbb Z}/a\rtimes{\mathbb Z}/b) \times \SL_2(\mathbb{F}_p)$, and let $X(n)$ be an $n$-dimensional $CW$-complex of the homotopy type of an $n$-sphere. We study the automorphism group $\Aut (G)$ in order to compute the number of distinct homotopy types of spherical space forms with respect to free and cellular $G$-actions on all $CW$-complexes $X(2dn-1)$, where $2d$ is the period of $G$. The groups ${\mathcal E}(X(2dn-1)/\mu)$ of self homotopy equivalences of space forms $X(2dn-1)/\mu$ associated with free and cellular $G$-actions $\mu$ on $X(2dn-1)$ are determined as well.

Keywords: automorphism group, $CW$-complex, free and cellular $G$-action, group of self homotopy equivalences, Lyndon–Hochschild–Serre spectral sequence, special (linear) group, spherical space form

Categories: 55M35, 55P15, 20E22, 20F28, 57S17
© Canadian Mathematical Society, 2014 : https://cms.math.ca/
|
2014-11-28 12:17:38
|
https://learn.careers360.com/ncert/question-estimate-the-distance-for-which-ray-optics-is-good-approximation-for-an-aperture-of-4-mm-and-wavelength-400-nm/
|
# Q10.10 Estimate the distance for which ray optics is a good approximation for an aperture of 4 mm and wavelength 400 nm.
Given
Aperture $a=4mm=4*10^{-3}m$
Wavelength of light $\lambda =400nm=400*10^{-9}m$
Now,
The distance for which ray optics is a good approximation, also called the Fresnel distance, is:
$Z_f=\frac{a^2}{\lambda }=\frac{(4*10^{-3})^2}{400*10^{-9}}=40m$
Hence, the distance for which ray optics is a good approximation is 40 m.
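The same computation in a couple of lines of Python (values from the problem):

```python
# Fresnel distance z_f = a**2 / lambda for aperture a and wavelength lambda.
a = 4e-3           # aperture, m
lam = 400e-9       # wavelength, m
print(a**2 / lam)  # 40.0 m
```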
|
2020-09-19 09:42:33
|
http://mathematica.stackexchange.com/questions/57782/how-to-define-a-nestable-baseform-function-baseform-output-becomes-baseform-inp
|
# How to define a nestable Baseform function, BaseForm output becomes BaseForm input
Mathematica won't change a number's BaseForm recursively:
63969 // BaseForm[#, 16] & // BaseForm[#, 8] & // BaseForm[#, 2] &
Well, I inspected former answers; most of them treated formatting problems, but none of the answers given treated the nestable aspect. Maybe I'm wrong, and I risk a duplicate.
(1) I would like to have a nestable function which allows constructs of the form
63969 // base@16 // base@8 // base@2
This feature might be adopted advantageously within iterative functions : NestList, FoldList, ...
## Edit1
To explain my interest for seamless changes of baseforms
2 ArcCot[GoldenRatio^2^^1111] == ArcCot[2^^1010101010]
2^^1010101010 // BaseForm[#, 4] &
## Edit2
I tried another approach myself, but it has a disadvantage: HoldForm isn't respected
ClearAll@base
base[b_] := Function[# // ReplaceAll[#, BaseForm[x_, _] :> BaseForm[x, 10]] & //
ToString // ToExpression // BaseForm[#, b] &]
testsuite = {5555, BaseForm[5555, 8], HoldForm@Plus[5000, 555],5*BaseForm[1111, 2]}
base@10 /@ testsuite
-
You can define your own function that works with input the BaseForm of a number, sure:
myBase[a_?NumericQ, base_Integer] := BaseForm[a, base];
myBase[a_BaseForm, base_Integer] :=
  BaseForm[
    FromDigits[
      IntegerString @@ a, (* digit string of the wrapped value in its stored base *)
      Last @ a            (* the base stored inside the BaseForm expression *)
    ],
    base];
so that
63969 // myBase[#, 16] & // myBase[#, 8] &
gives you BaseForm[63969,8] but using as input the number's representation in hexadecimal.
-
I often change the base of a number. Let's say I stored the number already in variable x. Now I don't want to remember if it's stored as a BaseForm or not. It shouldn't be important at all. Just using x//base@2, and that's it – hieron Aug 20 '14 at 15:42
thanks - so then it is just an issue of defining a function that works on BaseForm. What I didn't understand why you'd want to go through the process of jumping from base to base but if it is on potentially stored numbers it makes sense. – gpap Aug 20 '14 at 15:46
ClearAll[base];
base = If[Head[#] === BaseForm, BaseForm[First @ #, #2], BaseForm[##]] &;
FoldList[base, 63696, {16, 8, 2}]
-
|
2015-09-02 17:04:41
|
https://solvedlib.com/n/sulemienl-problem-at-usc-divisibility-rules-to-detcrmine,17300020
|
# Problem A) Use divisibility rules to determine if 42500368505750172 is divisible by …; check or explain briefly.

##### Find the area of the shaded region. The graph to the right depicts scores of adults, and those scores are normally distributed with a mean of 100 and a standard deviation of 15. The area of the shaded region is … (round to four decimal places as needed). Find f′(x); enter your answer in the answer box.

##### In the mammalian digestive system, what is the primary site of nutrient absorption? A. pharynx B. small intestine C. large intestine D. pancreas E. stomach

##### The top of a mountain has a vertical height of 850 feet, and the distance from the base of the mountain is 700 feet. What is the angle of elevation from the base to the top?

##### A brick is dropped (zero initial speed) from the roof of a building. The brick strikes the ground in 2.10 s. You may ignore air resistance, so the brick is in free fall. (a) How tall, in meters, is the building? (b) What is the magnitude of the brick's velocity just before it reaches the ground?

##### 26D.4. Discuss with your partner how ΔG is really a test for whether or not the 2nd law of thermodynamics will be obeyed by a particular process. Discuss how the two parts (ΔH and ΔS) which contribute to ΔG really represent the changes of entropy for the surroundings and for the system. Given how ΔG relates to ΔH and ΔS, why is it that ΔS for the universe must be positive for a process, but ΔG must be negative for the process to occur?

##### 3.6 Triathlon times, Part II: The distribution of triathlon times varies depending on the population you are describing. The distribution for men ages 30–34 is N(μ = 4363, σ = 582); the distribution for women ages 25–29 is N(μ = 5267, σ = 812). Note: these distributions list the triathlon times in seconds. Use this information to compute each of the following, reporting your answer to two decimal places. a) The cutoff time for the fastest 5% of athletes in the men's group, i.e., those who took the shortest 5% of times.

##### (a) If the car is moving at a speed of 65 km/h (18.1 m/s), what is the magnitude of the total impulse imparted to the 79.1-kg crash test dummy sitting in the automobile during the collision? (in kg·m/s; tolerance ±3%) (b) Find the minimum stopping time such that the average net force on the dummy does not exceed 25 times its weight. (tolerance ±2%)

##### Problem 3: Determine the complex power delivered by the source in Circuit 3. (5 Ω, −j2 Ω, 6 Ω)

##### A scientific meme

##### The cost of materials transferred into the Filling Department of Lilac Skin Care Company is $19,250, including $4,000 from the Blending Department and $15,250 from the materials storeroom. The conversion cost for the period in the Filling Department is $6,320 ($1,520 factory overhead applied and $4,…

##### Sulfur trioxide decomposes to form sulfur dioxide and oxygen, like this: 2SO3(g) ⇌ 2SO2(g) + O2(g). A chemist finds that at a certain temperature the equilibrium mixture of sulfur trioxide, sulfur dioxide, and oxygen has the following composition (partial pressures at equilibrium: 52.6 …, 34 …). Calculate the value of the equilibrium constant for this reaction. Round your answer to 2 significant digits.

##### Below is information regarding the capital structure of Micro Advantage Inc. On the basis of this information you are asked to respond to the following three questions: Required: 1. Micro Advantage issued a $5,500,000 par value, 16-year bond a year ago at 95 (i.e., 95% of par value) with a stated ra…

##### Part B: 4-ethylcyclohexene; spell out the full name of the compound. Part C: 2-butene; spell out the full name of the compound.

##### This question has several parts that must be completed sequentially; if you skip part of the question, you will not receive any points for the skipped part. Tutorial Exercise: Solve the system by elimination or by any convenient method. Step 1: Multiply the first equation by …; multiply the second equation by …; add the two equations.

##### Consider a queueing system where customers arrive according to a Poisson process with mean 4 minutes. There is only one server whose service time distribution is exponential with rate 20 customers per hour. a) (5 points) What is the expected number of people in the queue? b) (5 points) What is the expected waiting time in the system? c) (5 points) What is the probability that the waiting time in the queue is zero? d) (5 points) What is the probability that the waiting time in the system is at least 20 minutes? (See the sketch after this list.)

##### Solve the following for v1, v2, v3; neglect channel-length modulation. (Given: Kn = 0.5 + 0.07 = 0.57, Vt = 0.6 + 0.02 = 0.62 V, R = 10 × 1.1 = 11 kΩ, V = 4.7 V; the circuit figure did not survive extraction.)

##### Why is sexual reproduction more beneficial to a species living in an unpredictable environment than to one living in a constant environment?

##### Consider the data given below. The one-year rates can be viewed as spot interest rates, and the two-year rates are yields to maturity in annualized percent. The spot exchange rate is ¥130.15/£. What should be the two-year forward rate to prevent arbitrage? (two-year, one-year; U.K. 1.870, 1…)

##### Very confused on an orbital speed question: Three uniform spheres are fixed at positions shown in Fig. 12.35 (m1 = 2.0 kg, m2 = 3.0 kg, and d = 0.10 m). (a) What is the magnitude (in N) and direction (in ° counterclockwise from the +x-axis) of the force on a 0.0750-kg particle placed at P? (b) If the spheres are in deep oute…

##### FINANCIAL MANAGEMENT METRIC: A manager is considering two mutually exclusive projects for investment. Project A is expected to earn RM5 million while Project B is expected to earn RM… Which of the following statements is the MOST correct? The manager should select Project A. The man…

##### Question 12 (5 points) Given a 5-year Project S's NPV is $335 and its IRR is 13%; and another 5-year Project L's NPV is $310 and its IRR is 15%. Assume Project S and Project L are mutually exclusive and both projects have the same WACC; which project(s) would you recommend? (No calculation i…

##### h) Radiocarbon testing: 14C occurs naturally from neutron bombardment of 14N due to cosmic radiation. 14C is unstable and decays by beta emission, with a half-life of 5730 yr. Bombardment or transmutation reactions are listed in shorthand as "original nucleus (particle in, particle out) new nucleus."

##### Problem 1: A wire that rotates a solenoid. Old lawnmowers were started by pulling a wire wrapped around a solenoid (which can be modeled as a cylinder); the solenoid needs to rotate at a certain angular velocity to start the lawnmower. Suppose that a wire is wrapped around a 2-cm cylinder initially at rest. The wire is pulled with a constant acceleration of 2 m/s² until 62.83 cm of the wire have been unwound. If the wire unwound without slipping: A. What is the angular acceleration of the cylinder? B. What is the angular displacement of the cylinder?

##### Please help folks, I really appreciate it! Complete each of the following C statements by adding an asterisk, ampersand, or subscript wherever needed to make the statement do the job described by the comment. Use these declarations: short s, t; short age[] = { 30, 65, 41, 23 }; short * agep, * maxp; (a…

##### I am making my own home-brew spirits with dextrose and yeast, and then it got me wondering if I could get rid of the water by putting it through the RO plant before I boil off the fusels, ethanol and methanol. So when using an RO water plant, if you were to use a 10% alcohol solution would you get any…

##### 2. Unpolarized light falls on two polarizing sheets placed one above the other. What is the angle between the characteristic directions of the sheets if the intensity of the transmitted light is (a) one-tenth the maximum intensity of the transmitted beam or (b) one-tenth the intensity of the incident beam?

##### a) (5 points) As part of his tenure review in 2010, Dr. Philistine's students were asked, "What is your biggest complaint with his teaching?" Forty percent of the students said "his tests are too easy"; twenty percent said "he has bad taste in clothes"; thirty percent sai…

##### a) What magnitude point charge creates a 40,000 N/C electric field at a distance of 0.282 m? (in C) (b) How large is the field at 20.7 m? (in N/C)

##### Use the summary steps for graphing a function of the form y = A tan[ω(x − φ)] + B, e.g., y = 2 tan(x − …) − 2. Step 1: Rewrite in the form y = A tan[ω(x − φ)] + B by factoring. Step 2: Determine the period T = π/ω and the phase shift. Step 3: Determine the location of the x-intercept (the point to which the origin moved). Step 4: Since tan x completes one period between −π/2 and π/2, find these points by solving the equations …; work carefully with fractions. Now find your vertical asymptotes…
2023-01-30 19:00:44
|
https://www.ias.ac.in/listing/bibliography/jess/SUMER_CHOPRA
|
• SUMER CHOPRA
Articles written in Journal of Earth System Science
• A local magnitude scale $M_{L}$ for the Saurashtra horst: An active intraplate region, Gujarat, India
The calibration of a local magnitude scale to local tectonics is essential for seismic hazard assessment and for quantifying the seismicity of active regions. In the present study, we have developed a local magnitude scale $M_{L}$ for the Saurashtra region, which is a horst located in the western continental margin of India. The local magnitude scale is developed using 1968 amplitude measurements from horizontal-component recordings of 319 earthquakes, obtained from sites in the Saurashtra region, with hypocentral distances ranging from 3 to 298 km. All 1968 amplitude measurements were inverted simultaneously to determine the attenuation curve, magnitudes, and station corrections for the studied region. The resultant distance correction term for Saurashtra is $-\log(A_{0}) = 1.31 \log(r/100) + 0.0002 (r - 100) + 3$ for 100 km normalization, where $A_{0}$ is the distance correction and $r$ is the hypocentral distance. The distance correction term ($-\log A_{0}$) suggests that the attenuation in the Saurashtra region is lower than in the neighbouring Kachchh region. The station corrections obtained in the present study vary from $-0.31$ to $+0.24$. Overall, the standard deviation of the magnitude residuals without station corrections is 0.28, while with station corrections it is 0.23, which indicates that applying station corrections reduces the variance by 31% and brings the average residual closer to zero.
• Multi-criteria approach using GIS for macro-level seismic hazard assessment of Kachchh Rift Basin, Gujarat, western India – First step towards earthquake disaster mitigation
Earthquakes have the most dominating societal and economic impact on the built environment. Earthquakes in an intraplate region are infrequent but often damaging. The uncontrolled urban growth in cities due to population explosion and migration makes it necessary to assess seismic hazards in an active region; such assessment provides parameters for seismic safety and helps in disaster mitigation. The Kachchh Rift Basin (KRB) of western India is a seismically active intraplate region where many damaging earthquakes have occurred in the past (Mw 7.8 in 1819, Mw 7.6 in 2001). The KRB hosts many economic corridors and ports. Though the region has been put in the category with the highest seismic hazard, the entire region is not prone to high hazard. The primary objective of the study is to integrate major attributes that influence seismic hazard on a GIS platform and prepare a multi-criteria-based hazard map using the multi-criteria decision process known as the analytical hierarchy process (AHP), developed by Saaty. In this study, the information about some of the attributes, such as peak ground acceleration (PGA), geology and geomorphology, and tsunami hazard, is taken from published literature, whereas the shear-wave velocity to 30 m depth ($V_{s30}$) and the amplification factor were obtained through empirical relationships. The integration of these different attributes was performed, and weights were assigned depending on their contribution to the seismic hazard. The multi-criteria approach reveals that the southwestern part, comprising the Kachchh mainland, has a low hazard compared to the central and northern parts, and almost 1 million people and around 0.18 million houses are exposed to moderate to high hazard. Large swaths of land are prone to liquefaction hazard. The corridor comprising Bhuj, Bhachau, and Rapar needs seismic microzonation. This macro-level hazard map will be beneficial for urban planners and government authorities to decide the areas where seismic microzonation or site-specific studies are required, which would help in mitigating earthquake disasters in the future.
• Integrated analysis of the gravity and the magnetic data to infer structural features and their role in prospective mineralisation in and around the Ambaji–Deri– Danta–Chitrasani region, NW India
The Ambaji–Deri region is located in the northeastern part of the Gujarat state of India and is well-known for hosting lead–zinc–copper minerals deposits. Recently, gravity and magnetic data are collected in the region with the objective of geological and structural mapping of the area. This data is further processed using upward continuation, derivative analysis, and 2.5-dimensional gravity modelling to understand the subsurface geometry for mineral exploration. The upward continued regional gravity anomaly reveals high value in SW part of the region. The residual gravity and magnetic anomaly show the NE–SW trend, which is sympathetic with the general trend of Delhi supergroup. The high values of the residual Bouguer and the magnetic anomaly at the junction of the Jaisalmer–Barwani and the Chambal–Jamnagar lineaments are inferred as possible potential sites for sulphide mineralisation. The horizontal gradient of the tilt derivative (HGTD) of both the gravity and the magnetic anomalies reveals NE–SW trending lineaments. Based on the results of HGTD, several new structural features have been identified and a refined lineament map of the study area is proposed. The gravity modelling using residual Bouguer anomaly could delineate a high-density intrusive body in the upper crustal level. The result of the gravity model also confirms that the middle crust is uplifted by 1–3 km in the eastern part of the study area. In this study, three prospective zones for base metal mineralisation have been identified.
• Investigation of hydrological characteristics of the Kachchh Mainland Fault (KMF) Zone, Gujarat, Western India using time domain electromagnetic study
The East–West oriented Kachchh Mainland Fault (KMF) is a major primary fault in the active Kachchh rift basin in the western part of India. In the present work, we made an attempt to understand the tectonic features of the KMF system and the hydrological characteristics of the fault zone using a time-domain electromagnetic (TDEM) survey. The TDEM investigations were carried out at 52 sites, distributed along six profiles (three across the fault and three along the fault zone), to study the hydrological phenomenon. The resistivity sections of these six profiles, correlated with well-log and lithology data, provided information up to a depth of around 250 m and suggest a multilayer aquifer system. The resistivity sections also show the KMF as a sharp contact between the Mesozoic rocks and the Quaternary alluvium overlying Tertiary rocks. Most of the groundwater-potential aquifer zones are towards the south, and very few are found in the north. In this region, streams flow towards the Rann of Kachchh. We infer that the KMF zone is impervious and possibly bisects the aquifer zones, acting as a barrier to groundwater flow in the North–South direction and as a conduit for parallel flow in the East–West direction. The study highlights the significant tectonic controls on groundwater flow in the Kachchh rift.
**Highlights**
• The study provides the hydrological characteristics of the Kachchh Mainland Fault zone.
• Time-domain electromagnetic investigations reveal the fault-zone hydrology and the tectonic controls on groundwater flow in the Kachchh rift.
• Groundwater potential zones around the fault zone are delineated.
|
2023-01-29 02:47:16
|
https://mookiedesign.com/relocating-croplands-could-drastically-reduce-the-environmental-impacts-of-global-food-production.html
# Relocating croplands could drastically reduce the environmental impacts of global food production
We use the notation in Table 1.
### Current crop production and areas, $P_i(x)$, $H_i(x)$
We used 5-arc-minute maps of the fresh-weight production $P_i(x)$ (Mg year⁻¹) and cropping area $H_i(x)$ (ha) of 25 major crops (Table 2) in the year 2010 (ref. 37). These represent the most recent spatially explicit and crop-specific global data75. Separate maps were available for irrigated and rainfed croplands, allowing us to estimate the worldwide proportion of irrigated areas as 21% of all croplands.
### Agro-ecologically attainable yields, $\widehat{Y}_i(x)$
We used 5-arc-minute maps of the agro-ecologically attainable dry-weight yield (Mg ha⁻¹ year⁻¹) of the same 25 crops on worldwide potential growing areas (Supplementary Movie 3) from the GAEZ v4 model, which incorporates thermal, moisture, agro-climatic, soil, and terrain conditions42. These yield estimates were derived based on the assumption of rainfed water supply (i.e., without additional irrigation) and are available for current climatic conditions and, assuming a CO2 fertilisation effect, for four future (2071–2100 period) climate scenarios corresponding to representative concentration pathways (RCPs) 2.6, 4.5, 6.0, and 8.576 simulated by the HadGEM2-ES model77. Potential rainfed yield estimates for current climatic conditions were available for a low- and a high-input crop management level, representing, respectively, subsistence-based organic farming systems and advanced, fully mechanised production using high-yielding crop varieties and optimum fertiliser and pesticide application42. We additionally considered potential yields representing a medium-input management scenario, given by the mean of the relevant low- and high-input yields. Future potential yields were available only for the high-input management level. Thus, we considered a total of 175 (= 25 × 3 present + 25 × 4 future) potential yield maps. Potential dry-weight yields were converted to fresh-weight yields, $\widehat{Y}_i(x)$, using crop-specific conversion factors42,78.
Both current and future potential rainfed yields from GAEZ v4 were simulated based on daily weather data, and therefore account for short-term events such as frost days, heat waves, and wet and dry spells42. However, the estimates represent averages of annual yields across 30-year periods; thus, whilst the need for irrigation on cropping areas identified in our approach during particularly dry years may in principle be obviated by suitable storage of crop production79, in practice, ad hoc irrigation may be an economically desirable measure to maintain productivity during times of drought, which are projected to increase in different geographic regions due to climate change80,81.
### Carbon impact Ci(x)
Following an earlier approach8, the carbon impact of crop production, Ci(x), in a 5-arc-minute grid cell was estimated as the difference between the potential natural carbon stocks and the cropland-specific carbon stocks, each given by the sum of the relevant vegetation- and soil-specific carbon. The change in vegetation carbon stocks resulting from land conversion is given by the difference between carbon stored in the potential natural vegetation, available as a 5-arc-minute global map8 (Supplementary Fig. 1a), and carbon stored in the crops, for which we used available estimates8,78. Regarding soil, spatially explicit global estimates of soil organic carbon (SOC) changes from land cover change are not available. We therefore chose a simple approach, consistent with estimates across large spatial scales, rather than a complex spatially explicit model for which, given the limited empirical data, robust predictions across and beyond currently cultivated areas would be difficult to achieve. Following an earlier approach8, and supported by empirical meta-analyses82,83,84,85,86, we assumed that the conversion of natural habitat to cropland results in a 25% reduction of the potential natural SOC. For the latter, we used a 5-arc-minute global map of pre-agricultural SOC stocks7 (Supplementary Fig. 1b). Thus, the total local carbon impact (Mg C ha−1) of the production of crop i in the grid cell x was estimated as
$$C_i(x) = C_{\mathrm{potential,vegetation}}(x) + 0.25 \cdot C_{\mathrm{potential,SOC}}(x) - C_{\mathrm{crop}}(i)$$
(1)
where $C_{\mathrm{potential,vegetation}}(x)$ and $C_{\mathrm{potential,SOC}}(x)$ denote the potential natural carbon stocks in the vegetation and the soil in $x$, respectively, and $C_{\mathrm{crop}}(i)$ denotes the carbon stocks of crop $i$ (all in Mg C ha⁻¹). By design, the approach allows us to estimate the carbon impact of the conversion of natural habitat to cropland regardless of whether an area is currently cultivated or not.
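As a concrete illustration, the computation in Eq. (1) is a simple per-cell raster operation. The following sketch is not the authors' code; the array shapes, values, and the scalar crop stock are hypothetical stand-ins for the gridded inputs described above.

import numpy as np

# Hypothetical 2-D rasters (Mg C per ha) aligned on the same 5-arc-minute grid.
c_potential_vegetation = np.full((10, 10), 120.0)  # placeholder values
c_potential_soc = np.full((10, 10), 80.0)          # placeholder values
c_crop = 5.0  # assumed carbon stock of crop i (Mg C per ha); crop-specific in the paper

# Eq. (1): potential natural stocks (vegetation plus the 25% of SOC lost to
# conversion) minus the crop's own stocks, evaluated everywhere on the grid at once.
carbon_impact = c_potential_vegetation + 0.25 * c_potential_soc - c_crop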
In our analysis, we did not consider greenhouse gas emissions from sources other than from land use change, including nitrous emissions from fertilised soils and methane emissions from rice paddies87. In contrast to the one-off land use change emissions considered here, those are ongoing emissions that incur continually in the production process. We would assume that the magnitude of these emissions in a scenario of redistribution of agricultural areas, in which the total production of each crop remains constant, is roughly similar to that associated with the current distribution of areas. We also did not consider emissions associated with transport; however, these have been shown to be small compared to other food chain emissions88 and poorly correlated with the distance travelled by agricultural products89.
### Biodiversity impact Bi(x)
Analogous to our approach for carbon, we estimated the biodiversity impact of crop production, Bi(x), in a 5-arc-minute grid cell as the difference between the local biodiversity associated with the natural habitat and that associated with cropland. For our main analysis, we quantified local biodiversity in terms of range rarity (given by the sum of inverse species range sizes; see below) of mammals, birds, and amphibians. Range rarity has been advocated as a biodiversity measure particularly relevant to conservation planning in general39,90,91,92,93 and the protection of endemic species in particular39. In a supplementary analysis, we additionally considered biodiversity in terms of species richness.
We used 5-arc-minute global maps of the range rarity and species richness of mammals, birds, and amphibians under potential natural vegetation (Supplementary Fig. 1c, d) and under cropland land cover94. The methodology used to generate these data38 combines species-specific extents of occurrence (spatial envelopes of species’ outermost geographic limits40) and habitat preferences (lists of land cover categories in which species can live95), both available for all mammals, birds, and amphibians96,97, with a global map of potential natural biomes44 in order to estimate which species would be present in a grid cell for natural habitat conditions. Incorporating information on species’ ability to live in croplands, included in the habitat preferences, allows for determining the species that would, and those that would not, tolerate a local conversion of natural habitat to cropland. The species richness impact of crop production in a grid cell is then obtained as the number of species estimated to be locally lost when natural habitat is converted to cropland. Instead of weighing all species equally, the range rarity impact in a grid cell is calculated as the sum of the inverse potential natural range sizes of the species locally lost when natural habitat is converted; thus, increased weight is attributed to range-restricted species, which tend to be at higher extinction risk40,41.
As in the case of carbon, the approach allows us to estimate the biodiversity impact of crop production in both currently cultivated and uncultivated areas.
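To make the per-cell biodiversity bookkeeping concrete, here is a minimal sketch; the two lists are hypothetical stand-ins for the species envelopes and habitat preferences described above, not the published pipeline.

# Species present in the cell under natural habitat: potential range size
# (number of grid cells) and whether the species tolerates cropland.
species_range_cells = [1200, 35, 400]
tolerates_cropland = [True, False, False]

# Species-richness impact: how many species the conversion removes locally.
richness_impact = sum(1 for ok in tolerates_cropland if not ok)
# Range-rarity impact: sum of inverse range sizes of the species lost,
# which up-weights range-restricted (higher-risk) species.
rarity_impact = sum(1.0 / r for r, ok in zip(species_range_cells, tolerates_cropland) if not ok)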
### Land potentially available for agriculture, V(x)
We defined the area V(x) (ha) potentially available for crop production in a given grid cell x, as the area not currently covered by water bodies42, land unsuitable due to soil and terrain constraints42, built-up land (urban areas, infrastructure, roads)1, pasture lands1, crops not considered in our analysis37, or protected areas42 (Supplementary Fig. 1e). In the scenario of a partial relocation of crop production, in which a proportion of existing croplands is not moved, the relevant retained areas are additionally subtracted from the potentially available area, as described further below.
### Optimal transnational relocation
We first consider the scenario in which all current croplands are relocated across national borders based on current climate (Fig. 3a, dark blue line). For each crop $i$ and each grid cell $x$, we determined the local (i.e., grid-cell-specific) area $\widehat{H}_i(x)$ (ha) on which crop $i$ is grown in cell $x$ so that the total production of each crop $i$ equals the current production and the environmental impact is minimal. Denoting by
$$\bar{P}_i = \sum_x P_i(x)$$
(2)
the current global production of crop $i$, any solution $\widehat{H}_i(x)$ must satisfy the equality constraints
$$\sum_x \widehat{H}_i(x) \cdot \widehat{Y}_i(x) = \bar{P}_i \quad \text{for each crop } i$$
(3)
requiring the total production of each individual crop after relocation to be equal to the current one. A solution must also satisfy the inequality constraints
$$\sum_i \widehat{H}_i(x) \le V(x) \quad \text{for each grid cell } x,$$
(4)
ensuring that the local sum of cropping areas is not larger than the locally available area V(x) (see above). Given these constraints, we can identify the global configuration of croplands that minimises the associated total carbon or biodiversity impact by minimising the objective function
$$\sum_i \sum_x \widehat{H}_i(x) \cdot C_i(x) \to \min \quad \text{or} \quad \sum_i \sum_x \widehat{H}_i(x) \cdot B_i(x) \to \min$$
(5)
respectively. More generally, we can minimise a combined carbon and biodiversity impact measure, and examine potential trade-offs between minimising each of the two impacts, by considering the weighted objective function
$$\sum_i \sum_x \widehat{H}_i(x) \cdot \left(\alpha \cdot C_i(x) + (1-\alpha) \cdot B_i(x)\right) \to \min$$
(6)
where the weighting parameter α ranges between 0 and 1.
Considering all crops across all grid cells, we denote by
$$\bar{C} = \sum_i \sum_x H_i(x) \cdot C_i(x)$$
(7)
the global carbon impact associated with the current distribution of croplands, and by
$$\hat{C}(\alpha) = \sum_i \sum_x \widehat{H}_i(x) \cdot C_i(x)$$
(8)
the global carbon impact associated with the optimal distribution $\{\widehat{H}_i(x)\}_{i,x}$ $(=\{\widehat{H}_i^\alpha(x)\}_{i,x})$ of croplands for some carbon-biodiversity weighting $\alpha \in [0,1]$. The relative change between the current and the optimal carbon impact is then given by
$$\hat{c}(\alpha) = 100\% \cdot \frac{\hat{C}(\alpha) - \bar{C}}{\bar{C}}$$
(9)
Using analogous notation, the relative change between the current and the optimal global biodiversity impact across all crops and grid cells is given by
$$\hat{b}(\alpha) = 100\% \cdot \frac{\hat{B}(\alpha) - \bar{B}}{\bar{B}}$$
(10)
The dark blue line in Fig. 3a visualises $\hat{c}(\alpha)$ and $\hat{b}(\alpha)$ for the full range of carbon-biodiversity weightings $\alpha \in [0,1]$, each of which corresponds to a specific optimal distribution $\{\widehat{H}_i(x)\}_{i,x}$ of croplands. We defined an optimal weighting $\alpha_{\mathrm{opt}}$, meant to represent a scenario in which the trade-off between minimising the total carbon impact and minimising the total biodiversity impact is as small as possible. Such a weighting is necessarily subjective; here, we defined it as
$$\alpha_{\mathrm{opt}} = \arg\min_{\alpha \in [0,1]} \left| \frac{\partial \hat{c}(\alpha)/\partial \hat{b}(\alpha)}{\hat{c}(\alpha)} \cdot \frac{\partial \hat{b}(\alpha)/\partial \hat{c}(\alpha)}{\hat{b}(\alpha)} \right|$$
(11)
Each of the two factors on the right-hand side represents the relative rate of change in the reduction of one impact type with respect to the change in the reduction of the other one as α varies. Thus, αopt represents the weighting at which neither impact type can be further reduced by varying α without increasing the relative impact of the other by at least the same amount. Scenarios based on this optimal weighting are shown in Figs. 1, 2, and Supplementary Figs. 3–6, and are represented by the black markers in Fig. 3.
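Numerically, Eq. (11) can be evaluated on a sampled trade-off curve. The sketch below uses synthetic stand-in curves for $\hat{c}(\alpha)$ and $\hat{b}(\alpha)$ (both negative, as they are reductions); it illustrates the criterion only and is not the published computation.

import numpy as np

alphas = np.linspace(0.0, 1.0, 101)
c_hat = -60.0 - 11.0 * alphas   # synthetic: carbon reduction deepens with alpha
b_hat = -80.0 + 20.0 * alphas   # synthetic: biodiversity reduction shrinks with alpha

dc = np.gradient(c_hat, alphas)
db = np.gradient(b_hat, alphas)
# Direct translation of Eq. (11); note that for a one-parameter curve the two
# derivative factors cancel, so the criterion reduces to 1 / |c_hat * b_hat|.
criterion = np.abs(((dc / db) / c_hat) * ((db / dc) / b_hat))
alpha_opt = alphas[np.argmin(criterion)]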
Our approach does not account for multiple cropping; i.e., part of a grid cell is not allocated to more than one crop, and the assumed annual yield is based on a single harvest. Allowing for multiple crops to be successively planted in the same location during a growing period would increase the dimensionality of the optimisation problem substantially. However, given that only 5% of current global rainfed areas are under multiple cropping98, this is likely not a strong limitation of our rainfed-based analysis. As a result of this approach, our results may even slightly underestimate local crop production potential and therefore global impact reduction potentials.
### Optimal national relocation
In the case of areas being relocated within national borders, the mathematical framework is identical with the exception that the sum over relevant grid cells x in Eqs. (2) and (4) is taken over the cells that define the given country of interest, instead of the whole world. In this way, the total production of each crop within each country for optimally distributed croplands is the same as for current areas. The optimisation problem is then solved independently for each country.
### Optimal partial relocation
When (either for national or transnational relocation) only a certain proportion $\lambda \in [0,1]$ of the production of each crop (of a country or the world) is being relocated rather than the total production, Eq. (3) changes to
$$\sum_x \widehat{H}_i(x) \cdot \widehat{Y}_i(x) = \lambda \cdot \bar{P}_i \quad \text{for each crop } i.$$
(12)
In addition, the area potentially available for new croplands, $V(x)$ (see above), is reduced by the area that remains occupied by current croplands accounting for the proportion $(1-\lambda)$ of production that is not being relocated. We denote by $H_i^\lambda(x)$ the area that continues to be used for the production of crop $i$ in grid cell $x$ in the scenario where the proportion $\lambda$ of the production is being optimally redistributed. In particular, $H_i^0(x)=H_i(x)$ and $H_i^1(x)=0$ for all $i$ and $x$. For a given carbon-biodiversity weighting $\alpha \in [0,1]$ in Eq. (6), $H_i^\lambda(x)$ is calculated as follows. First, all grid cells in which crop $i$ is currently grown are ordered according to their agro-environmental efficiency, i.e., the grid-cell-specific ratio between the environmental impact attributed to the production of the crop and the local production,
$$E_i^\alpha(x) = \frac{H_i(x) \cdot \left(\alpha \cdot C_i(x) + (1-\alpha) \cdot B_i(x)\right)}{P_i(x)}.$$
(13)
Let $x_1(=x_1(i,\alpha))$ denote the index of the grid cell in which crop $i$ is currently grown for which $E_i^\alpha$ is smallest among all grid cells in which the crop is grown. Then let $x_2$ be the index for which $E_i^\alpha$ is second smallest (or equal to the smallest), and so on. Thus, the vector $(x_1, x_2, x_3, \ldots)$ contains all indices of grid cells where crop $i$ is currently grown in descending order of agro-environmental efficiency. The area $H_i^\lambda(x_n)$ retained in some grid cell $x_n$ is then given by
$$H_i^\lambda(x_n) = \begin{cases} H_i(x_n) & \text{if } \sum_{m=1}^{n} P_i(x_m) \le (1-\lambda) \cdot \bar{P}_i \\ 0 & \text{else} \end{cases}$$
(14)
Thus, cropping areas in a grid cell $x_n$ are retained if they are amongst the most agro-environmentally efficient ones of crop $i$ on which the combined production does not exceed $(1-\lambda)\cdot\bar{P}_i$ (which is not being relocated). Growing areas in the remaining, less agro-environmentally efficient grid cells are abandoned and become potentially available for other relocated crops. Note that $H_i^\lambda$ depends on the weighting $\alpha$ of carbon against biodiversity impacts. Finally, instead of Eq. (4), we have, in the case of the partial relocation of the proportion $\lambda$ of the total production,
$$\sum_i \widehat{H}_i(x) \le V(x) - \sum_i H_i^\lambda(x) \quad \text{for each grid cell } x.$$
(15)
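In code, the retention rule of Eqs. (13) and (14) amounts to a sort by efficiency followed by a cumulative production sum. Below is a sketch with hypothetical per-cell arrays for one crop; it illustrates the rule and is not the authors' implementation.

import numpy as np

def retained_areas(H, P, C, B, alpha, lam):
    # H, P, C, B: current area, production, carbon and biodiversity impacts per
    # grid cell where the crop is grown (hypothetical 1-D arrays).
    E = H * (alpha * C + (1.0 - alpha) * B) / P      # Eq. (13): impact per unit production
    order = np.argsort(E)                            # most efficient cells first
    cum_prod = np.cumsum(P[order])
    keep = cum_prod <= (1.0 - lam) * P.sum()         # retain until (1 - lambda) * P_bar is met
    H_lam = np.zeros_like(H)
    H_lam[order[keep]] = H[order[keep]]              # Eq. (14): keep or abandon whole cells
    return H_lam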
### Solving the optimisation problem
All datasets needed in the optimisation (i.e., $A(x)$, $P_i(x)$, $H_i(x)$, $C_i(x)$, $B_i(x)$, $\widehat{Y}_i(x)$, $V(x)$) are available at a 5-arc-minute (0.083°) resolution; however, computational constraints required us to upscale these to a 20-arc-minute (0.33°) spatial grid. At this resolution, Eq. (6) defines a $1.12 \times 10^6$-dimensional linear optimisation problem in the scenario of across-border relocation. The high dimensionality of the problem is in part due to the requirement in Eq. (3) that the individual production level of each crop is maintained. Requiring instead that, for example, only the total caloric production is maintained31,99 reduces Eq. (6) to a 1-dimensional problem. However, in such a scenario, the production of individual crops, and therefore of macro- and micronutrients, would generally be very different from current levels, implicitly assuming potentially drastic dietary shifts that may not be nutritionally or culturally realistic.
The optimisation problem in Eq. (6) was solved using the dual-simplex algorithm in the function linprog of the Matlab R2021b Optimization Toolbox100 for a termination tolerance on the dual feasibility of $10^{-7}$ and a feasibility tolerance for constraints of $10^{-4}$.
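For readers without Matlab, the same LP structure can be reproduced at toy scale with SciPy; the sketch below uses synthetic data and is only meant to show how Eqs. (3), (4), and (6) map onto a standard linear-programming call.

import numpy as np
from scipy.optimize import linprog

n_crops, n_cells = 3, 50
rng = np.random.default_rng(0)
Y = rng.uniform(1, 10, (n_crops, n_cells))         # attainable yields (synthetic)
impact = rng.uniform(0, 100, (n_crops, n_cells))   # alpha-weighted impact per ha (synthetic)
P_bar = rng.uniform(50, 100, n_crops)              # current production to reproduce
V = rng.uniform(5, 20, n_cells)                    # available area per cell

c = impact.ravel()                                 # objective of Eq. (6); variables H[i, x] flattened
A_eq = np.zeros((n_crops, n_crops * n_cells))      # Eq. (3): one equality row per crop
for i in range(n_crops):
    A_eq[i, i * n_cells:(i + 1) * n_cells] = Y[i]
A_ub = np.tile(np.eye(n_cells), n_crops)           # Eq. (4): one inequality row per grid cell

res = linprog(c, A_ub=A_ub, b_ub=V, A_eq=A_eq, b_eq=P_bar,
              bounds=(0, None), method="highs")
H_opt = res.x.reshape(n_crops, n_cells) if res.success else None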
In the case of a transnational relocation of crop production, the algorithm always converged to the optimal solution, i.e., for all crop management levels, climate scenarios, and proportions of production that were being relocated. For the relocation within national borders, this was not always the case. This is because some countries produce small quantities of crops which, according to the GAEZ v4 potential yield estimates, could not be grown in the relevant quantities anywhere in the country under natural climatic conditions and for rainfed water supply; these crops likely require greenhouse cultivation or irrigation and can therefore not be successfully relocated within our framework. Across all countries, this was the case for production occurring on 0.6% of all croplands. When this was the case for a certain country and crop, we excluded the crop from the optimisation routine, and a country's total carbon and biodiversity impacts were calculated as the sum of the impacts of optimally relocated crops plus the current impacts of non-relocatable crops.
This issue is linked to why determining the optimal distribution of croplands within national borders is not a well-defined problem for future climatic conditions. Under current climatic conditions, if a crop cannot be relocated within our framework, then its current distribution offers a fall-back solution that provides the current production level and allows us to quantify environmental impacts. Different climatic conditions in the future mean that the production of a crop across current growing locations will not be the same as it is today, and therefore the fall-back solution available for the present is no longer available, so that a consistent quantification of the environmental impacts of a non-relocatable crop is not possible.
### Carbon and biodiversity recovery trajectories
Our analysis in Supplementary Fig. 6 requires spatially explicit estimates of the carbon recovery trajectory on abandoned croplands. Whilst carbon and biodiversity regeneration have been shown to follow certain general patterns, recovery is context-specific (Supplementary Note 1) in that, depending on local conditions, the regeneration in a specific location can take place at slower or faster speeds than would typically be the case in the broader ecoregion. Here, we assumed that these caveats can be accommodated by using conservative estimates of recovery times and by assuming that local factors will average out at the spatial resolution of our analysis. The carbon recovery times assumed here are based on ecosystem-specific estimates of the time required for abandoned agricultural areas to regain pre-disturbance carbon stocks82. Aiming for a conservative approach, we assumed carbon recovery times equal to at least three times these estimates, rounded up to the nearest quarter century (Table 3). Independent empirical estimates from specific sites and from meta-analyses are well within these time scales (Supplementary Note 1).
Applying the values in Table 3 to a global map of potential natural biomes44 provides a map of carbon recovery times. We assumed a square root-shaped carbon recovery trajectory across these regeneration periods101; similar trajectories, sometimes modelled by faster-converging exponential functions, have been identified in other studies25,27,30,102,103,104,105. Thus, the carbon stocks in an area of a grid cell x previously used to grow crop i were assumed to regenerate according to the function
$$C(x,t) = \begin{cases} C_{\mathrm{agricultural}}(x) + \sqrt{\frac{t}{T_{\mathrm{carbon}}(x)}} \cdot \left(C_{\mathrm{potential}}(x) - C_{\mathrm{agricultural}}(x)\right) & \text{if } t < T_{\mathrm{carbon}}(x) \\ C_{\mathrm{potential}}(x) & \text{if } t \ge T_{\mathrm{carbon}}(x) \end{cases}$$
(16)
where, using the same notation as further above
$$C_{\mathrm{potential}}(x) = C_{\mathrm{potential,vegetation}}(x) + C_{\mathrm{potential,SOC}}(x), \qquad C_{\mathrm{agricultural}}(x) = C_{\mathrm{crop}}(i) + 0.75 \cdot C_{\mathrm{potential,SOC}}(x)$$
(17)
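As a compact summary, Eqs. (16) and (17) translate into a short recovery function; the sketch below assumes scalar or aligned array inputs and is an illustration, not the authors' code.

import numpy as np

def carbon_recovery(t, c_agricultural, c_potential, t_carbon):
    # Square-root recovery towards the potential stock, saturating at t >= T_carbon.
    frac = np.sqrt(np.minimum(t / t_carbon, 1.0))
    return c_agricultural + frac * (c_potential - c_agricultural)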
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
https://coffea-casa.readthedocs.io/en/latest/cc_user.html
# First Steps at Coffea-Casa @ UNL
## Prerequisites
The primary mode of analysis with coffea-casa is coffea. Coffea provides plenty of examples to users in its documentation. Further resources, meant to run specifically on coffea-casa, can be found in the sidebar under the “Gallery of Coffea-casa Examples” section or the appropriate repository here.
Knowledge of Python is assumed. The standard environment for coffea analyses is within Jupyter Notebooks, which allow for dynamic, block-by-block execution of code. Coffea-casa employs the JupyterLab interface. JupyterLab is designed for hosting Jupyter Notebooks on the web and permits the usage of additional features within its environment, including Git access, compatibility with cluster computing tools, and much, much more.
If you aren’t familiar with any of these tools, please click on the links above and get acquainted with how they work before delving into coffea-casa.
## Access
Important
For CMS or opendata files, please see the relevant sections for coffea-casa at T2 Nebraska. For ATLAS files, see coffea-casa at UChicago.
There are two access points to the Coffea-casa AF @ T2 Nebraska. The site at https://coffea-opendata.casa is for Opendata and can be accessed through any CILogon identity provider, though it will not be able to process any files that require authentication.
Important
Remember that to access this instance you need to register: click “Register for access”. (We have limited resources available and can’t provide access to everyone under CILogon).
The other at https://coffea.casa is for CMS data and can be accessed through the CMS AuthZ instance; this site is capable of handling all CMS files and uses tokens for authentication.
Another coffea-casa instance exists for the AF @ UChicago, which is meant to be used with ATLAS data. You can access it at https://coffea.af.uchicago.edu.
See the appropriate section below if you need help going through the registration process for either access point.
### Opendata CILogon Authentication Instance
Important
This section applies only to the Opendata Coffea-Casa instance.
Currently Opendata Coffea-Casa supports any CILogon identity provider. Select your identity provider:
For accessing Opendata Coffea-Casa, we are offering a self-signup registration form with approval.
Click to proceed to the next stage:
If you see the next window, it means that the registration request was sent successfully!
Important
After this step please wait until you get approved by an administrator!
After your request is approved, you will receive an email, where you will simply need to click a link:
Voila! Now you can login to Opendata Coffea-Casa. Click on “Authorized Users Only: Sign in with OAuth 2.0” to do so:
### CMS AuthZ Authentication Instance
Important
This section applies only to the CMS Coffea-Casa instance.
Currently Coffea-Casa Analysis Facility @ T2 Nebraska supports any member of CMS VO organisation.
To access it please sign in or sign up using Apply for an account.
### ATLAS AuthZ Authentication Instance
Currently Coffea-Casa Analysis Facility @ UChicago can support any member of ATLAS.
## Docker Image Selection
The default image is preloaded with coffea, Dask, and HTCondor and you should select it:
This will forward you to your own personal Jupyterhub instance running at Analysis Facility @ T2 Nebraska:
## Cluster Resources in Coffea-Casa Analysis Facility @ T2 Nebraska
By default, the Coffea-casa Dask cluster should provide you with a scheduler and workers, which you can see by clicking on the colored Dask icon in the left sidebar.
As soon as you start your computations, you will notice that available resources at the Opendata Coffea-Casa Analysis Facility @ T2 Nebraska autoscale depending on the resources available in the HTCondor pool at Nebraska Tier 2.
## Opening a New Console or File
There are three ways by which you can open a new tab within coffea-casa. Two are located within the File menu at the very top of the JupyterLab interface: New and New Launcher.
The New dropdown menu allows you to open the console or a file of a specified format directly. The New Launcher option creates a new tab with buttons that permit you to launch a console or a new file, exactly like the interface you are shown when you first open coffea-casa.
The final way is specific to the File Browser tab of the sidebar.
This behaves exactly like the New Launcher option above.
Note
Regardless of the method you use to open a new file, the file will be saved to the current directory of your File Browser.
## Using Git
Cloning a repository in the Coffea-casa Analysis Facility @ T2 Nebraska is simple, though it can be a little confusing because it is spread across two tabs in the sidebar: the File Browser and the Git tabs.
In order to clone a repository, first go to the Git tab. It should look like this:
Simply click the appropriate button (initialize a repository, or clone a repository) and you’ll be hooked up to GitHub. This should then take you to the File Browser tab, which is where you can see all of the repositories you have cloned in your JupyterLab instance. The File Browser should look like this:
If you wish to change repositories, simply click the folder button to enter the root directory. If you are in the root directory, the Git tab will reset and allow you to clone another repository.
If you wish to commit, push, or pull from the repository you currently have active in the File Browser, then you can return to the Git tab. It should change to look like this, so long as you have a repository open in the File Browser:
The buttons in the top right allow for pulling and pushing respectively. When you have edited files in a directory, they will show up under the Changed category, at which point you can hit the + to add them to a commit (at which point they will show up under Staged). Filling out the box at the bottom of the sidebar will file your commit, and prepare it for you to push.
## Using XCache
Important
This section applies only to the CMS Coffea-Casa instance.
When we use CMS data, we generally require certificates or we will be faced with authentication errors. Coffea-casa handles the issue of certificates internally through xcache tokens so that its users do not explicitly have to import their certificates, though this requires adjusting the redirector portion of the path to the requested root file.
Let’s say we wish to request the file:
root://cmsxrootd.fnal.gov//store/data/Run2018A/DoubleMuon/NANOAOD/02Apr2020-v1/30000/0555868D-6B32-D249-9ED1-6B9A6AABDAF7.root
Then we would replace the cmsxrootd.fnal.gov redirector with the xcache redirector:
root://xcache//store/data/Run2018A/DoubleMuon/NANOAOD/02Apr2020-v1/30000/0555868D-6B32-D249-9ED1-6B9A6AABDAF7.root
Now, we will be able to access our data.
In addition to handling authentication, XCache will cache files so that they are able to be pulled more quickly in subsequent runs of the analysis. It should be expected, then, that the first analysis run with a new coffea-casa file will run slower than ones which follow afterwards.
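If you need to convert many paths, a small helper along these lines (not part of coffea-casa itself; shown only to illustrate the substitution) keeps the /store/... part and swaps the redirector:

def to_xcache(url: str) -> str:
    # "root://cmsxrootd.fnal.gov//store/..." -> "root://xcache//store/..."
    path = url.split("//", 2)[-1]
    return f"root://xcache//{path}"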
## ServiceX
Important
This section applies only to the ATLAS Coffea-Casa instance.
Important
This section applies only to the ATLAS Coffea-Casa instance at UChicago. The instances at T2 Nebraska are capable of handling ServiceX requests through uproot, but the feature is still at an experimental stage. Ask an administrator for more information on accessing ServiceX on the T2 Nebraska instances.
When dealing with very large datasets it is often better to do initial data filtering and augmentation using ServiceX. ServiceX transformations produce their output as an Awkward Array. The array can then be used in a regular Coffea processor. Here is a schema explaining the workflow:
There are two different UC AF-deployed ServiceX instances. The only difference between them is the type of input data they are capable of processing. Uproot processes any kind of “flat” ROOT files, while xAOD processes only Rucio registered xAOD files.
To use them one has to register and get approved. Sign in will lead you to a Globus registration page where you may choose to use an account connected to your institution:
Once approved, you will be able to see the status of your requests in the dashboard:
For an example analysis using ServiceX and Coffea look here.
## Opendata Example
In this example (which corresponds to ADL Benchmark 1), we’ll try to run a simple analysis example on the Coffea-Casa Analysis Facility. We will use the coffea_casa wrapper library, which allows use of pre-configured settings for HTCondor configuration and Dask scheduler/worker images.
Our goal in this toy analysis is to plot the missing transverse energy (MET) of all events from a sample dataset; this data was converted from 2012 CMS Open Data (17 GB, 54 million events), and is available in public EOS (root://eospublic.cern.ch//eos/root-eos/benchmark/Run2012B_SingleMu.root).
First, we need to import the coffea libraries used in this example:
import numpy as np
%matplotlib inline
from coffea import hist
import coffea.processor as processor
import awkward as ak
from coffea.nanoevents import schemas
To select the aforementioned data in a coffea-friendly syntax, we employ a dictionary of datasets, where each dataset (key) corresponds to a list of files (values):
fileset = {'SingleMu' : ["root://eospublic.cern.ch//eos/root-eos/benchmark/Run2012B_SingleMu.root"]}
Coffea provides the coffea.processor module, where users may write their analysis code without worrying about the details of efficient parallelization, assuming that the parallelization is a trivial map-reduce operation (e.g., filling histograms and adding them together).
# This program plots an event-level variable (in this case, MET, but switching it is as easy as a dict-key change). It also demonstrates an easy use of the book-keeping cutflow tool, to keep track of the number of events processed.
# The processor class bundles our data analysis together while giving us some helpful tools. It also leaves looping and chunks to the framework instead of us.
class Processor(processor.ProcessorABC):
    def __init__(self):
        # Bins and categories for the histogram are defined here. For format, see
        # https://coffeateam.github.io/coffea/stubs/coffea.hist.hist_tools.Hist.html and
        # https://coffeateam.github.io/coffea/stubs/coffea.hist.hist_tools.Bin.html
        dataset_axis = hist.Cat("dataset", "")
        MET_axis = hist.Bin("MET", "MET [GeV]", 50, 0, 100)
        # The accumulator keeps our data chunks together for histogramming.
        # It also gives us cutflow, which can be used to keep track of data.
        self._accumulator = processor.dict_accumulator({
            'MET': hist.Hist("Counts", dataset_axis, MET_axis),
            'cutflow': processor.defaultdict_accumulator(int)
        })

    @property
    def accumulator(self):
        return self._accumulator

    def process(self, events):
        output = self.accumulator.identity()
        # The Runner injects the fileset key into the event metadata; without this
        # line the fill() call below would refer to an undefined name.
        dataset = events.metadata["dataset"]
        # This is where we do our actual analysis. events.columns (or
        # events.[object].columns for deeper depth) lists the available branches.
        MET = events.MET.pt
        # We can define a new key for cutflow (in this case 'all events') and put
        # values into it. We need += because process() runs once per chunk.
        output['cutflow']['all events'] += ak.size(MET)
        output['cutflow']['number of chunks'] += 1
        # This fills our histogram once our data is collected. The hist key ('MET')
        # was defined in the bin in __init__.
        output['MET'].fill(dataset=dataset, MET=MET)
        return output

    def postprocess(self, accumulator):
        return accumulator
With our data in our fileset variable and our processor ready to go, we simply need to connect to the Dask Labextention-powered cluster available within the Coffea-Casa Analysis Facility @ T2 Nebraska. This can be done by dragging the scheduler into the notebook, or by manually typing the following:
from dask.distributed import Client
client = Client("tls://localhost:8786")
Then we bundle everything up to run our job, making use of the Dask executor. We must point it to our client as defined above. In the Runner, we specify that we want to make use of the NanoAODSchema (as our input file is a NanoAOD).
executor = processor.DaskExecutor(client=client)
run = processor.Runner(executor=executor,
                       schema=schemas.NanoAODSchema,
                       savemetrics=True)
output, metrics = run(fileset, "Events", processor_instance=Processor())
The final step is to generate a 1D histogram from the data output to the 'MET' key. fill_opts are optional arguments for filling the graph (the default is a line).
hist.plot1d(output['MET'], overlay='dataset', fill_opts={'edgecolor': (0,0,0,0.3), 'alpha': 0.8})
As a result you should see the following plot:
## CMS Example
Important
This section applies only to the CMS Coffea-Casa instance.
Now we will try to run a short example using CMS data, which corresponds to plotting the dimuon Z-peak. We use dimuon data consisting of ~3 million events (~2.7 GB) belonging to the /DoubleMuon/Run2018A-02Apr2020-v1/NANOAOD dataset.
We import some common coffea libraries used in this example:
import numpy as np
import awkward as ak
from coffea import hist
import coffea.processor as processor
from coffea.nanoevents import schemas
%matplotlib inline
To select the aforementioned data in a coffea-friendly syntax, we employ a dictionary of datasets, where each dataset (key) corresponds to a list of files (values):
fileset = {'DoubleMu' : ['root://xcache//store/data/Run2018A/DoubleMuon/NANOAOD/02Apr2020-v1/30000/0555868D-6B32-D249-9ED1-6B9A6AABDAF7.root',
'root://xcache//store/data/Run2018A/DoubleMuon/NANOAOD/02Apr2020-v1/30000/09BED5A5-E6CC-AC4E-9344-B60B3A186CFA.root']}
Coffea provides the coffea.processor module, where users may write their analysis code without worrying about the details of efficient parallelization, assuming that the parallelization is a trivial map-reduce operation (e.g., filling histograms and adding them together).
class Processor(processor.ProcessorABC):
    def __init__(self):
        dataset_axis = hist.Cat("dataset", "Dataset")
        dimu_mass_axis = hist.Bin("dimu_mass", r"$\mu\mu$ Mass [GeV]", 50, 20, 120)
        self._accumulator = processor.dict_accumulator({
            'dimu_mass': hist.Hist("Counts", dataset_axis, dimu_mass_axis),
        })

    @property
    def accumulator(self):
        return self._accumulator

    def process(self, events):
        output = self.accumulator.identity()
        dataset = events.metadata["dataset"]  # fileset key, used as the histogram category
        mu = events.Muon
        # Select events with 2 muons whose charges cancel out (Zs are charge-neutral).
        dimu_neutral = mu[(ak.num(mu) == 2) & (ak.sum(mu.charge, axis=1) == 0)]
        # Add together muon pair p4's, find dimuon mass.
        dimu_mass = (dimu_neutral[:, 0] + dimu_neutral[:, 1]).mass
        # Plot dimuon mass.
        output['dimu_mass'].fill(dataset=dataset, dimu_mass=dimu_mass)
        return output

    def postprocess(self, accumulator):
        return accumulator
With our data in our fileset variable and our processor ready to go, we simply need to connect to the Dask Labextention-powered cluster available within the Coffea-Casa Analysis Facility @ T2 Nebraska. This can be done by dragging the scheduler into the notebook, or by manually typing the following:
from dask.distributed import Client
client = Client("tls://localhost:8786")
Then we bundle everything up to run our job, making use of the Dask executor, pointing it to the client defined above.
executor = processor.DaskExecutor(client=client)
run = processor.Runner(executor=executor,
                       schema=schemas.NanoAODSchema)
output = run(fileset, "Events", processor_instance=Processor())
The final step is to generate a 1D histogram from the data output to the 'dimu_mass' key. fill_opts are optional arguments for filling the graph (the default is a line).
hist.plot1d(output['dimu_mass'], overlay='dataset', fill_opts={'edgecolor': (0,0,0,0.3), 'alpha': 0.8})
As a result you should see the following plot:
## ATLAS Examples
Important
This section applies only to the ATLAS Coffea-Casa instance.
The notebooks about columnar data analysis with DAOD_PHYSLITE at https://github.com/nikoladze/agc-tools-workshop-2021-physlite may be useful as a reference.
https://forum.wilmott.com/viewtopic.php?t=100752&start=30
outrun
### Re: Duck Typing
You mean in the code you posted, or do you mean in the tag dispatching code I posted?
In general I think it's good to use C++11 features if those save you time or make things look cleaner. Meyers is very analytical and makes rational design choices (instead of subjective design choices). If he suggests something then IMO that's always worth trying to understand.
Do you have his "Overview of the New C++11/C++14" slides? That's a nice compact overview, PowerPoint presentation style.
Cuchulainn
### Re: Duck Typing
I am referring to replacing the typedef stuff in your code by the alias template.
The limitations of typedef and pre-C++11 workarounds have been around since C++03. Now it might be possible to make the code more readable.
www.cppreference.com is the most precise account of C++11 for me.
Cuchulainn
### Re: Duck Typing
On another level, the Boost/C++11 community could (should) tell us how to design using new C++11 features. Meyers discusses simple widget classes, which is not enough.
Now it looks like reverse engineering.
outrun
### Re: Duck Typing
I am referring to replacing the typedef stuff in your code by the alias template.
The limitations of typedef and pre-C++11 workarounds have been around since C++03. Now it might be possible to make the code more readable.
http://www.cppreference.com is the most precise account of C++11 for me.
The typedefs aren't template-parameter-dependent types, they are "tags", like little stickers: "I'm American", "I'm a Call". The tags are nothing more than empty structs with unique names that you want to glue to your class and which carry information for the compiler. You *could* e.g. replace them with a variable "bool is_american;" but that's not good: the compiler won't know the value, it will consume memory, and the code will need to test it at run-time. I don't see how you can get around having to specify a list of tags?
...but I'm no expert on the new C++11/14 features! I went pretty deep in C++ in 2008, that's where I used these patterns, but now I'm no longer up to speed.
Cuchulainn
### Re: Duck Typing
Duck typing and PBD are detailed design and too data-oriented. What's more crucial is to pin down the provides-requires interfaces between components. I don't want all the data being spread around in argument lists, at least not just yet.
Once the s/w contracts are in place, let each group design their own component. We tell them what interfaces we require and then they deliver. You can stick in any data you want as long as no other components see it.
Cuchulainn
### Re: Duck Typing
Example: The main components in a MC solver, pure interface-driven. You can plug in any stuff you want as long as you respect the interface contract. It's a bit like hardware.
public interface ISde
{   // Standard one-factor SDE dX = a(X,t)dt + b(X,t)dW, X(0) given
    // dX = mu(X,t)dt + sig(X,t)dW
    double Drift(double x, double t);        // a (mu)
    double Diffusion(double x, double t);    // b (sig)

    // Some extra functions associated with the SDE
    double DriftCorrected(double x, double t, double B);
    double DiffusionDerivative(double x, double t);

    double InitialCondition { get; set; }
    double Expiry { get; set; }
}

public interface IFdm
{   // Interface for one-step FDM methods for SDEs

    // Choose which SDE model to use
    ISde StochasticEquation { get; set; }

    // Advance solution from level t[n] to level t[n+1]
    double advance(double xn, double tn, double dt, double WienerIncrement, double WienerIncrement2);
}

public interface IRng
{
    double GenerateRn();
}

public abstract class Rng : IRng
{
    public abstract double GenerateRn();
}

public interface IPricer
{
    void ProcessPath(ref double[] arr);  // The path from the evolver
    void PostProcess();                  // Finish off computations
    double DiscountFactor();             // (simple) discounting function
    double Price();                      // Computed option price
}

public class MCBuilder<S, F, R>
    where S : ISde
    where F : IFdm
    where R : IRng
{
    // etc.
}
I can't think of any design that is more maintainable than this one (I've tried..)
Cuchulainn
### Re: Duck Typing
To configure with a Builder, just bung in the assembly DLLs! You can even load/unload DLLs at run-time.
BTW the same design approach can be applied to a wide range of problems.
MCBuilder<ISde, FdmBase, IRng> builder = new
MCBuilder<ISde, FdmBase, IRng>(data);
var parts = builder.Parts();
var path = builder.GetPaths();
var finish = builder.GetEnd();
MCMediator mcp = new
MCMediator(parts, path, finish, data.Item7);
mcp.start();
outrun
### Re: Duck Typing
Make drift etc. a non-member functions, it promotes loose coupling
double drift(SDE& sde, double x, double t);
etc.
Cuchulainn
### Re: Duck Typing
Make drift etc. a non-member functions, it promotes loose coupling
double drift(SDE& sde, double x, double t);
etc.
I hear what you are saying, but what's the compelling reason for this approach? And it's compile-time.
It's a design choice indeed. But C# does not support non-member functions unless they're static methods.
I think your solution will be difficult to maintain. Just think of shared data. You need a class to hold it all together.
I welcome being proved wrong. Have you actually developed an MC prototype using the approach? Showing running code >> pseudocode.
Cuchulainn
### Re: Duck Typing
Make drift etc. a non-member functions, it promotes loose coupling
double drift(SDE& sde, double x, double t);
etc.
I have the framework also in C++ (remember QFCL that got nuked?) using the C# approach. I can try this non-member approach as well and take a vote. Like at the snack bar you have frites with {mayo, ketchup, peanut butter, HP sauce}. Some people are allergic to peanuts.
Those were the QFCL days
viewtopic.php?f=44&t=96115
outrun
### Re: Duck Typing
Yes, give it a try in C++, I think it will scale better; the interfaces will be more like small Lego bricks instead of monolithic ones (classes where you have to add members as you use them in new situations).
IMO it's not a matter of taste but a design decision based on rational/convincing arguments. I think we've talked about this a lot many times? It's amongst others in Effective C++; this Stack Overflow discussion gives a good argument. And here is someone discussing how std::basic_string has too many member functions and how it could have been better.
Should be fun to experiment with these design choices!
Cuchulainn
### Re: Duck Typing
Yes, give it a try in C++, I think it will scale better, the interface will be more like small lego bricks instead of monolithic interfaces (classes where you have to add members as you use them in new situations).
I have already tried this approach, as I mentioned, and it tends to become difficult as the size gets bigger.
The (toy) examples on Stack Overflow are not useful for a number of reasons. I do not see them scaling well and they are too focused on C++. The basic_string issue is a bit worn out at this stage. Maybe it works for small problems but I don't see how it can be applied to larger libraries and frameworks. It's undocumented AFAIK.
I need bigger examples for sparring with.
Cuchulainn
### Re: Duck Typing
On a more philosophical level (is it OK?): each developer has one or more models of reality.
No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are distinct worlds, not merely the same world with different labels attached. Edward Sapir
The diversity of languages is not a diversity of signs and sounds but a diversity of views of the world.
Wilhelm von Humboldt
outrun
### Re: Duck Typing
The toy models at Stack Overflow are of course not the reason why C++ experts advocate these rules.
But you are free to do whatever you like of course!
outrun
### Re: Duck Typing
I have already tried this approach, as I mentioned, and it tends to become difficult as the size gets bigger.
Can you show it? This doesn't sound right. When done right it will scale much better.
https://www.tutorke.com/lesson/5085-figure-1-shows-the-path-of-array-of-yellow-light-through-a-glass-prism-the-speed-of-yellow-light-in.aspx
# Form 3 Physics Refraction of Light Questions and Answers
Figure 1 shows the path of a ray of yellow light through a glass prism. The speed of yellow light in the prism is 1.88 x 10^8 m/s.
(a) Determine the refractive index of the prism material for the light (speed of light in vacuum c = 3.0 x 10^8 ms^-1).
(b) Show on the figure the critical angle, c, and determine its value.
(c) Given that r = 21.2°, determine the angle θ.
(d) On the same figure, sketch the path of the light after striking the prism if the prism was replaced by another of similar shape but lower refractive index.
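A quick numerical sketch of parts (a) to (c); for part (c) it assumes that θ is the angle on the air side of the interface, related to r by Snell's law (the figure itself is not reproduced here, so treat this reading as an assumption):

import math

n = 3.0e8 / 1.88e8                                   # (a) n = c / v ≈ 1.60
critical_angle = math.degrees(math.asin(1.0 / n))    # (b) sin c = 1/n, so c ≈ 38.8°
theta = math.degrees(math.asin(n * math.sin(math.radians(21.2))))  # (c) ≈ 35.2°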
https://www.clutchprep.com/chemistry/practice-problems/131604/a-2-950-x-10-2-m-solution-of-nacl-in-water-is-at-20-0-c-the-sample-was-created-b
###### Problem
A 2.950 × 10−2 M solution of NaCl in water is at 20.0°C. The sample was created by dissolving a sample of NaCl in water and then bringing the volume up to 1.000 L. It was determined that the volume of water needed to do this was 999.2 mL . The density of water at 20.0°C is 0.9982 g/mL.
Part A. Calculate the molality of the salt solution. Express your answer to four significant figures and include the appropriate units.
m NaCl =
Part B. Calculate the mole fraction of salt in this solution. Express the mole fraction to four significant figures.
χ NaCl =
Part C. Calculate the concentration of the salt solution in percent by mass. Express your answer to four significant figures and include the appropriate units.
percent by mass NaCl =
Part D. Calculate the concentration of the salt solution in parts per million. Express your answer as an integer to four significant figures and include the appropriate units.
parts per million NaCl =
###### Solution
Based on the information provided to prepare a 2.950 × 10⁻² M NaCl solution in water at 20 °C, we're asked to calculate the concentration of the salt solution in four parts:
molality (m), mole fraction (χ), percent by mass (%), and parts per million (ppm).
For all cases, we need the amount of NaCl in the sample.
We calculate the moles of NaCl from the definition of Molarity (M):
$$\text{Molarity (M)} = \frac{\text{moles of solute}}{\text{liters of solution}}$$
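From here, the four answers follow by unit bookkeeping. A numerical sketch, assuming standard molar masses (58.44 g/mol for NaCl, 18.015 g/mol for water):

moles_nacl = 2.950e-2 * 1.000                       # M x L = 0.02950 mol NaCl
mass_water = 999.2 * 0.9982                         # = 997.4 g of water
molality = moles_nacl / (mass_water / 1000.0)       # Part A: ≈ 0.02958 mol/kg
moles_water = mass_water / 18.015
x_nacl = moles_nacl / (moles_nacl + moles_water)    # Part B: ≈ 5.325e-4
mass_nacl = moles_nacl * 58.44                      # = 1.724 g NaCl
mass_pct = 100.0 * mass_nacl / (mass_nacl + mass_water)  # Part C: ≈ 0.1725 %
ppm = 1.0e6 * mass_nacl / (mass_nacl + mass_water)       # Part D: ≈ 1725 ppm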
https://scicomp.stackexchange.com/questions/25559/using-two-reference-values-for-a-scalar-variable-whats-the-name-of-this-type-o
|
# Using two reference values for a scalar variable: What's the name of this type of problem?
I don't really know where to ask this one... In fact, I am not sure I can define it properly.
Here goes...
Let's say I take measurements.
In order to "normalize" these measurements, I divide their values by a reference measurement.
So, if my reference measurement has value 100, and one measurement has value 150, I rescale it as follows: 150 / 100 = 1.5. That particular measurement's value is 1.5 times the reference value.
So far, trivial.
Now, time for a picture:
The triangles are measurement values. A and B are reference values.
The domain the measurements are taken in indicates that the closer a measurement value is to A, the more A should be the reference value to rescale it. Similarly, the closer a measurement value is to B, the more B should be the reference value to rescale it.
So, values 1 and 2 should be restated in terms of A, while values 4 and 5 should be restated in terms of B.
But, in general, all values should probably be restated in terms of both A and B. This is particularly true for value 3 in the picture, as it is almost equidistant from A and B.
In other words, there must be a function f(x; A, B) that allows us to order all measurements.
What do you call that type of problem? Any reference or pointer I could use to restate what I mean?
--
Update 1:
Here is a metaphor that just came to me... Let's say I must define a scale that measure the total cost of moving an object at a certain speed.
A and B become two reference costs per unit of speed. For speeds close to B, the total cost can be accurately estimated in terms of B; for speeds close to A, in terms of A. For speeds between A and B, the total cost must be a function of both A and B.
The farther away a speed is from A/B, the less A/B is a factor in the cost of moving the object at that speed.
• I think that you can just interpolate your reference measurement, and use the interpolated values to normalize the other measurements. – nicoguaro Nov 15 '16 at 15:06
Have you thought of barycentric coordinates? There is a unique way to write $x=\alpha A + \beta B$ with $\alpha+\beta=1$.
Barycentric coordinates are usually employed in higher dimensions, but they seem to be the concept you are looking for. You can sort your observations according to either $\alpha$ or $\beta$; this will give you their "closeness" to either reference. If $A$ and $B$ are well chosen, values should cluster around $[0,1]$, or even around $\{0,1\}$. See e.g. https://en.wikipedia.org/wiki/Barycentric_coordinate_system
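A minimal one-dimensional sketch of this in Python (function names are my own): the barycentric weight $\beta$ orders observations from "A-like" to "B-like".

```python
# A minimal 1-D sketch (function names are my own): the barycentric
# weight beta orders observations from "A-like" (beta ~ 0) to "B-like" (beta ~ 1).

def barycentric_weights(x, A, B):
    """Return the unique (alpha, beta) with x = alpha*A + beta*B, alpha + beta = 1."""
    beta = (x - A) / (B - A)
    return 1.0 - beta, beta

A, B = 100.0, 400.0
measurements = [110.0, 250.0, 390.0, 80.0]

# Values outside [A, B] get weights outside [0, 1], which is itself informative.
for x in sorted(measurements, key=lambda v: barycentric_weights(v, A, B)[1]):
    alpha, beta = barycentric_weights(x, A, B)
    print(f"x = {x:6.1f}   alpha = {alpha:6.3f}   beta = {beta:6.3f}")
```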
|
2021-07-31 22:02:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7092637419700623, "perplexity": 395.54525637959875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154126.73/warc/CC-MAIN-20210731203400-20210731233400-00544.warc.gz"}
|
https://mersenneforum.org/showpost.php?s=0177fd1089cd957cce5771c3f5e3dbc4&p=558295&postcount=1
|
2020-09-30, 02:11, #1: jshort ("James Short", Mar 2019, Canada, 17 posts)
alternative 2nd stage of p-1 factoring algorithm
Suppose we're factoring an integer via the p-1 method and we've already completed the first stage, i.e. $L = a^{B!} \bmod n$, where $n$ is the composite we wish to factor.
In the 2nd stage, we assume that there is one prime factor remaining, $q > B$, and go on to compute $L^{p}$ for various prime integers $p$.
If $q-1$ is fairly smooth, would it not be more worthwhile to consider the set $(L^{2^{b!}}, L^{3^{b!}}, L^{4^{b!}}, \ldots, L^{a^{b!}})$ for some considerably smaller integer $b < B$, and then compute $\gcd(L^{i^{b!}} - L^{j^{b!}}, n)$ for all $1 < i < j < a$?
Keep in mind that we can perform another kind of "2nd stage" on this as well, i.e. assume that $b!$ captures most of the prime factors of $q-1$ and then use a 2nd stage (3rd stage?) by computing $(L^{2^{p(b!)}}, L^{3^{p(b!)}}, L^{4^{p(b!)}}, \ldots, L^{a^{p(b!)}})$ for various primes $p > b$, and again computing $\gcd(L^{i^{p(b!)}} - L^{j^{p(b!)}}, n)$ for all $1 < i < j < a$.
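For context, a minimal Python sketch of the standard first stage the post builds on (the modulus and bound B below are illustrative, not from the post):

```python
# A minimal sketch of the standard first stage the post builds on
# (the modulus and bound B are illustrative, not from the post).
from math import gcd

def pminus1_stage1(n, B, a=2):
    """Compute L = a^(B!) mod n, then gcd(L - 1, n)."""
    L = a
    for k in range(2, B + 1):
        L = pow(L, k, n)          # after the loop, the exponent is B!
    g = gcd(L - 1, n)
    return g if 1 < g < n else None

# 2003 * 3119: 2003 - 1 = 2002 = 2*7*11*13 is 13-smooth, so B = 13 suffices,
# while 3119 - 1 = 2 * 1559 (1559 prime) is not, so only 2003 is revealed.
print(pminus1_stage1(2003 * 3119, B=13))   # -> 2003
```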
|
2022-01-27 00:29:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 15, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6431183218955994, "perplexity": 399.4316282709027}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00051.warc.gz"}
|
https://www.enotes.com/homework-help/n-cos-pin-n-2-determine-convergence-divergence-794176
|
# `a_n = cos(pi n)/n^2` Determine the convergence or divergence of the sequence with the given n'th term. If the sequence converges, find its limit.
`cos(pi n) = -1` for odd `n` (`n = 2k-1`) and `cos(pi n) = 1` for even `n` (`n = 2k`), `k in ZZ`. Therefore, we can break this into two cases.
`n = 2k-1` (n is odd):
`lim_(n to infty) a_n = lim_(n to infty) -1/n^2 = 0`
`n = 2k` (n is even):
`lim_(n to infty) a_n = lim_(n to infty) 1/n^2 = 0`
Since the limit is the same in both cases, the sequence is convergent and its limit is equal to zero.
A plot of the first 15 terms of the sequence shows that the odd-numbered terms are negative, while the even-numbered terms are positive, but both approach the `x`-axis, implying convergence to zero.
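A quick numeric check of the first 15 terms, using the identity `cos(pi n) = (-1)^n`:

```python
# Quick numeric check of the first 15 terms, using cos(pi*n) = (-1)**n.
for n in range(1, 16):
    a_n = (-1) ** n / n ** 2
    print(n, a_n)      # signs alternate, magnitudes 1/n^2 shrink toward 0
```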
|
2022-11-30 08:09:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8706873655319214, "perplexity": 383.8531533880628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00816.warc.gz"}
|
https://www.aps.org/programs/honors/prizes/prizerecipient.cfm?last_nm=Regge&first_nm=Tullio&year=1964
|
Prize Recipient
Tullio Regge
Citation:
"For important papers introducing into particle theory the concept of analytic continuation in angular momentum."
|
2020-03-31 08:03:55
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913713335990906, "perplexity": 4785.851745761907}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500331.13/warc/CC-MAIN-20200331053639-20200331083639-00381.warc.gz"}
|
https://projecteuclid.org/euclid.rae/1337001365
|
## Real Analysis Exchange
### Characterizing Derivatives by Preimages of Sets
Krzysztof Ciesielski
#### Abstract
In this note we will show that many classes $\mathcal{F}$ of real functions $f\colon {\mathbb R}\to\mathbb{R}$ can be characterized by preimages of sets in a sense that there exist families $\mathcal{A}$ and $\mathcal{D}$ of subsets of $\mathbb{R}$ such that $\mathcal{F}=\mathcal{C}(\mathcal{D},\mathcal{A})$, where $\mathcal{C}(\mathcal{D},\mathcal{A})=\{f\in\mathbb{R}^\mathbb{R}\colon f^{-1}(A)\in \mathcal{D}\ \text{ for every } A\in\mathcal{A}\}.$ In particular, we will show that there exists a Bernstein $B\subset \mathbb{R}$ such that the family $\Delta$ of all derivatives can be represented as $\Delta=\mathcal{C}(\mathcal{D},\mathcal{A})$, where $\mathcal{A}=\bigcup_{c\in\mathbb{R}}\{(-\infty,c),(c,\infty),B+c\}$ and $\mathcal{D}=\{g^{-1}(A)\colon A\in\mathcal{A}\ \&\ g\in\Delta\}$.
#### Article information
Source
Real Anal. Exchange, Volume 23, Number 2 (1999), 553-566.
Dates
First available in Project Euclid: 14 May 2012
https://projecteuclid.org/euclid.rae/1337001365
Mathematical Reviews number (MathSciNet)
MR1639976
Zentralblatt MATH identifier
0943.26015
#### Citation
Ciesielski, Krzysztof. Characterizing Derivatives by Preimages of Sets. Real Anal. Exchange 23 (1999), no. 2, 553--566. https://projecteuclid.org/euclid.rae/1337001365
|
2019-10-14 20:22:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.765095055103302, "perplexity": 636.3271007166909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655310.17/warc/CC-MAIN-20191014200522-20191014224022-00414.warc.gz"}
|
http://planetmath.org/TheProofOfTheoremIsWrong
|
# The proof of theorem is wrong
Let’s create a very simple measurable space: $X=\{a,b\}$, $\mathcal{A}=\{\emptyset,\{a\},\{b\},X\}$.
Let’s take the $\pi$-system $P=\{\{a\}\}$ containing only one subset of $X$.
Let’s create two measures $\mu=\delta_{a}+\delta_{b}$ and $\nu=\delta_{a}+2\delta_{b}$. Then obviously $\mu$ and $\nu$ agree on $P$ and are finite, but they obviously are not equal on $\mathcal{A}$.
The proof, however, claims that it is sufficient if $\mu$ and $\nu$ are finite. I believe that $\mu(X)=\nu(X)$ is a necessary condition.
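A tiny Python check of the counterexample (the dict encodings of the measures are my own):

```python
# A tiny numeric check of the counterexample (the dict encodings are mine):
# mu and nu agree on the pi-system P = {{'a'}} and are finite,
# yet they differ on the sigma-algebra.
mu = lambda S: sum({'a': 1, 'b': 1}[x] for x in S)    # delta_a + delta_b
nu = lambda S: sum({'a': 1, 'b': 2}[x] for x in S)    # delta_a + 2*delta_b

sigma_algebra = [set(), {'a'}, {'b'}, {'a', 'b'}]
print(mu({'a'}) == nu({'a'}))                          # True: they agree on P
print([(sorted(S), mu(S), nu(S)) for S in sigma_algebra])
# they differ on {'b'} (1 vs 2) and on X = {'a','b'} (2 vs 3)
```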
Title: The proof of theorem is wrong
Canonical name: TheProofOfTheoremIsWrong
Date of creation: 2013-03-22 19:16:05
Last modified on: 2013-03-22 19:16:05
Owner: tomprimozic (26284)
Last modified by: tomprimozic (26284)
Numerical id: 4
Author: tomprimozic (26284)
Entry type: Example
Classification: msc 28A12
|
2018-03-19 11:00:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 14, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9394204020500183, "perplexity": 676.8246702802791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646875.28/warc/CC-MAIN-20180319101207-20180319121207-00697.warc.gz"}
|
https://math.stackexchange.com/questions/2716012/injective-cogenerator-in-abelian-category
|
# Injective Cogenerator in Abelian Category
I am trying to understand the proof of the Freyd-Mitchell Embedding Theorem and got stuck on the following detail. If $\mathcal{A}$ is a left-complete Abelian category with a generator, such that every object in $\mathcal{A}$ may be embedded in an injective object, then $\mathcal{A}$ has an injective cogenerator.
The proof is on page 70 of Freyd's book on Abelian categories and goes like so:
Let $G$ be a generator for $\mathcal{A}$, and let $P$ be the product of all the quotient objects of $G$. Let $P\to E$ be a monomorphism with $E$ injective. Then $E$ is an injective cogenerator. To prove it, let $A\to B$ be a non-zero map. Since $G$ is a generator there exists a map $G\to A$ such that $G\to A\to B\neq 0$. Let $I\to B$ be the image of $G\to A\to B$, and $I\to P\to E$ be a monomorphism (this is the part I don't understand). Since $E$ is injective there exists a map $B\to E$ such that $I\to B\to E=I\to P\to E$. Now $A\to B\to E\neq 0$ because $G\to A\to B\to E=G\to A\to I\to B\to E\neq 0$.
I don't understand the choice of $P$ in the first place, I don't see where it comes into the proof. I can only assume it is used to allow the choice of the monomorphism $I\to P\to E$, but I'm not sure why. Can anyone clear this up for me?
$I\to B$ is defined as the image of a map $G\to B$; in an abelian category, this can be obtained as the factorization of $G\to B$ through the cokernel of its kernel. So $I$ is a quotient of $G$, and thus it must be a subobject of $P$, since $P$ is the product of all quotients of $G$. Then $I\to P\to E$ is a mono, since it is the composition of two monos.
This explains the choice of $P$: you need an object which has all the quotients of $G$ as subobjects. Thus the simplest choice is to take the product.
• Thank you, that makes it very clear. Can I trouble you on a couple more things; we need to know the family of quotient objects of $G$ is a set. Freyd proves the family of subobjects of any object in an Abelian category with a generator is a set and then claims later this implies the family of quotient objects of the generator is a set, but I don't see how this follows. Mar 31 '18 at 16:22
• Also, in proving the family of subobjects is a set he says a subobject $A'\to A$ is distinguished by $(G, A')\subset (G, A)$, but I don't see this either as $(G, A')=(G, A'')$ doesn't imply $A'=A''$, maybe they are the same as subobjects though, although I'm not sure why. Mar 31 '18 at 16:26
• Ah yes, fantastic! About the second comment, $(G, -)$ being faithful implies any two distinct monomorphisms between $A'$ and $A$ are sent to separate monomorphisms (which implies the class of monomorphisms $A'\to A$ is a set, though that is already known), but when the domain is different I really don't see what can be said. Mar 31 '18 at 22:47
• @MadChickenMan If $(G,A')=(G,A'')$ as subobjects of $(G,A)$, then an arrow $G\to A$ factors through $A'$ iff it factors through $A''$. In particular, for any arrow $G\to A'$, the composition $G\to A'\to A$ factors through $A''$, and thus $G\to A'\to A\to A/A''$ must be $0$. Then $G$ being a generator implies that $A'\to A\to A/A''$ is $0$, and thus that $A'\subset A''$ (as subobjects of $A$). In the same way, we find $A''\subset A'$, and thus $A'=A''$. Apr 1 '18 at 17:35
|
2021-10-17 14:04:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9718834757804871, "perplexity": 70.7394133614603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00280.warc.gz"}
|
http://www.gamedev.net/index.php?app=forums&module=extras§ion=postHistory&pid=5004646
|
Servant of the Lord
Posted 27 November 2012 - 02:10 PM
Excellent description of your intents. If you can't describe it, you have a problem - but you described it very succinctly.
Are chunks composed of tiles? Or objects?
Programming rule #3421: Premature optimization is the root of many evils.
Programming rule #745: KiSS: Keep it Stupidly Simple (alternatively, Keep it Simple, Stupid)
Have you actually tried just making a grid of uniform chunks, and actually seeing if it's too slow for your program?
But that aside, let's see what we can do. It helps simplify things if you put stuff into their own classes or structs.
Your game world is composed of Objects (enemies, walls, the player, moving platforms, spikes, etc...).
Each object has a location in the world, probably in a pixel-like measurement unit.
So: World has 100,000+ objects. Obviously we can't load and process all those at the same time. (Actually with modern computers, we probably could - but we'll pretend we can't).
So you break your world into 'chunks'. World has a grid of smart pointers to Chunks (yes, a grid). Each location on the grid can either be a null smart pointer (thus saving you your precious memory), or can have a smart pointer to a valid Chunk.
A World:
• Has a grid of chunks.
• Streams chunks in and out around the player.
• Tells chunks to draw, update, etc...
A chunk is just a "bucket" of entities. Any solid whose center is over the chunk's boundaries belongs to that chunk.
A Chunk:
• Tells the Objects within it to draw, think, etc...
• If an Object walks off of one chunk, that chunk passes ownership of the object to the next chunk.
When 'unloaded', the Chunk still exists, the Objects are destroyed (until the next reload), unless that particular Object is persistent and needs to think even when distant from the player, in which case they aren't drawn and only need to be updated at a much slower update rate.
Almost all walls are persistent, but don't need to think when the player isn't near. Almost all enemies need to think, but don't need to be persistent. Some enemies need to think and be persistent (a special enemy hunting the player from a million miles away, for example).
Maybe you want to specify to only keep an object in memory if it's within a certain range.
Let's convert this directly to pseudo-code:
Object
{
Position
IsPersistent
RangeToKeepInMemory //The distance away the player needs to be before it is destroyed.
bool SaveStateWhenDestroyed() -> (IsPersistent && hasChanged)
bool KeepInMemory(playerDistance) -> (playerDistance <= RangeToKeepInMemory) //keep while the player is within range
Save()
Draw()
Update(deltaTime) //Think, animate, etc...
}
Chunk
{
Vector<Object::Ptr> Objects
StreamOut() //Unloads all objects except those it needs to keep in memory.
StreamIn() //Loads all persistent objects, like walls and enemies that don't spawn but have preset locations like bosses.
Draw() -> Draw every object
Update(deltaTime) -> Update every object
}
World
{
Grid<Chunk::Ptr> Chunks;
StreamChunksAround(location);
Draw() -> Draws all chunks nearby.
}
This can be improved upon, but it's a good start and really straightforward and simple. It also does not waste much memory or speed.
You can steal my C++ Grid class here - re-sizable and permits negative indices. If you know the size of your World ahead of time, and it doesn't change during the course of that play, you could use a std::vector of size (width*height) instead.
The whole map<map<Object>> thing just doesn't seem like good design.
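For concreteness, here is a minimal Python sketch of the streaming idea described above; the class and method names are my own, not from the post:

```python
# A minimal Python sketch of the streaming idea above; class and method
# names are my own, not from the post.

class Chunk:
    def __init__(self):
        self.objects = []     # the "bucket" of entities whose centers lie here
        self.loaded = False

    def stream_in(self):
        self.loaded = True    # a real game would load persistent objects here

    def stream_out(self):
        # keep only objects that must survive unloading
        self.objects = [o for o in self.objects if o.get("persistent")]
        self.loaded = False

class World:
    def __init__(self):
        self.chunks = {}      # sparse grid: (cx, cy) -> Chunk

    def stream_around(self, cx, cy, radius=1):
        wanted = {(cx + dx, cy + dy)
                  for dx in range(-radius, radius + 1)
                  for dy in range(-radius, radius + 1)}
        for key in wanted:
            chunk = self.chunks.setdefault(key, Chunk())
            if not chunk.loaded:
                chunk.stream_in()
        for key, chunk in self.chunks.items():
            if chunk.loaded and key not in wanted:
                chunk.stream_out()

world = World()
world.stream_around(0, 0)                      # loads a 3x3 block at the origin
world.chunks[(0, 0)].objects = [{"name": "wall", "persistent": True},
                                {"name": "slime", "persistent": False}]
world.stream_around(5, 5)                      # player moved: (0, 0) streams out
print(world.chunks[(0, 0)].objects)            # only the persistent wall remains
```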
2014-04-20 16:06:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29130229353904724, "perplexity": 3388.628239667125}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/thermodynamics-otto-cycle.861324/
|
# Thermodynamics: Otto Cycle
1. Mar 9, 2016
### jdawg
1. The problem statement, all variables and given/known data
A four cylinder, four stroke internal combustion engine has a bore of 3.7 in and a stroke of 3.4 in. The clearance volume is 16% of the cylinder volume at bottom dead center and the crankshaft rotates at 2400 RPM. The processes within each cylinder are modeled as an air standard Otto cycle with a pressure of 14.5 lbf/in² and a temperature of 60 °F at the beginning of compression. The maximum temperature in the cycle is 5200 °R. Based on this model, calculate the net work per cycle in Btu and the power developed by the engine in horsepower.
2. Relevant equations
3. The attempt at a solution
So p₁ = 14.5 lbf/in² and T₁ = 520 °R. The max temperature is at state 3, so T₃ = 5200 °R. V₃ = V₂ and V₄ = V₁.
And they tell you in the problem that the clearance volume (V₂) is 16% of the volume at bottom dead center (V₁). I would think that meant V₂ = (0.16)(V₁), but the solution I'm looking at says V₂ = (0.16)(V₁ + V₂).
What exactly is the stroke? All my book says about it is that it's the distance the piston moves in one direction, which kind of makes me think it's the total volume? The solution I have treats it as a height, because they multiply the stroke by the area they calculated from the bore:
ΔV₁₋₂ = (π(3.7/2)²)(3.4) = 36.56 in³ = 0.02116 ft³
I feel like the terminology is the main thing that's tripping me up. If someone could clarify I would really appreciate it! :)
2. Mar 9, 2016
### SteamKing
Staff Emeritus
The stroke is the distance the piston moves up or down during one revolution of the crankshaft. The clearance volume is that small space which remains when the piston is at the top of its stroke. The swept volume is the product of the area of the cylinder and the piston stroke.
3. Mar 9, 2016
### jdawg
Ok thanks! Now I'm trying to find v_r2 and it's coming out wrong. I looked up the value of v_r1 to be 158.58 and calculated V₁ = 0.026 ft³ and V₂ = 0.004716 ft³.
So now, using (v_r2/v_r1) = (V₂/V₁), I found v_r2 = 28.76... but it should be 25.373.
They did (1/6.25)(158.58) = v_r2
How did they find the compression ratio to be 6.25? It seems like they found the compression ratio without using V₁ and V₂. I know 1/6.25 = 0.16; is it a coincidence that the inverse of the compression ratio equals the clearance-volume fraction of the volume at bottom dead center?
4. Mar 9, 2016
### SteamKing
Staff Emeritus
The compression ratio is defined using the geometric properties of the cylinder:
$CR = \frac{SV+CV}{CV}$
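A quick numeric sketch of that formula for this problem (variable names are mine); it shows why the compression ratio is simply 1/0.16 = 6.25:

```python
# Numeric sketch of the geometric compression ratio (variable names are mine):
# since CV = 0.16 * V_BDC and V_BDC = SV + CV, the ratio is just 1/0.16.
import math

bore, stroke = 3.7, 3.4                        # inches
SV = math.pi * (bore / 2) ** 2 * stroke        # swept volume, in^3
V_bdc = SV / (1 - 0.16)                        # V_BDC - CV = SV
CV = 0.16 * V_bdc

print(SV / 1728)                               # ~ 0.02116 ft^3
print((SV + CV) / CV)                          # compression ratio = 6.25
```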
5. Mar 9, 2016
### jdawg
I don't understand why it doesn't work when I just plug my values into v_r2 = (V₂/V₁)·v_r1.
Are the values for V₁ and V₂ incorrect?
6. Mar 9, 2016
### SteamKing
Staff Emeritus
Your calculation for SV = 0.02116 ft³ looks OK. IDK if you have calculated the CV correctly, though.
I would work in cubic inches for these calculations and convert to cubic feet later. 1 ft³ = 1728 in³
7. Mar 9, 2016
### jdawg
Thanks, I'll try that!
8. Dec 14, 2016
### jiasd
Can you show the complete solution? I want to compare it with mine.
|
2017-11-18 17:56:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37293383479118347, "perplexity": 1289.471866026487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805008.39/warc/CC-MAIN-20171118171235-20171118191235-00574.warc.gz"}
|
https://oneclass.com/class-notes/ca/mcgill/math-sta-sci/math-323/279036-double-integralspdf.en.html
|
# Double Integrals.pdf
Class notes for MATH 323, Mathematics & Statistics (Sci)
Professor: W. J. Anderson
Semester: Winter
Description
Double Integrals

In calculus of a single variable, the definite integral of f(x) ≥ 0 is the area under the curve f(x) from x = a to x = b. For general f(x), the definite integral is equal to the area above the x-axis minus the area below the x-axis.

The definite integral can be extended to functions of more than one variable. Consider a function of two variables z = f(x,y). The definite integral is denoted by $$\iint_R f(x,y)\,dA,$$ where R is the region of integration in the xy-plane. For positive f(x,y), the definite integral is equal to the volume under the surface z = f(x,y) and above the xy-plane for x and y in the region R. For general f(x,y), the definite integral is equal to the volume above the xy-plane minus the volume below the xy-plane.

This page includes the following sections: Applications; Brief Discussion of Riemann Sums; Double Integrals over Rectangular Regions; Double Integrals over General Regions; Examples.

Applications

Double integrals arise in a number of areas of science and engineering, including computations of: area of a 2D region; volume; mass of 2D plates; force on a 2D plate; average of a function; center of mass and moment of inertia; surface area.

Brief Discussion of Riemann Sums

As in the case of an integral of a function of one variable, a double integral is defined as a limit of a Riemann sum. Suppose we subdivide the region R into subrectangles, say M rectangles in the x direction and N rectangles in the y direction, labeled R_ij where 1 ≤ i ≤ M and 1 ≤ j ≤ N. Think of the definite integral as representing volume. The volume under the surface above rectangle R_ij is approximately f(x_i, y_j) A_ij, where A_ij is the area of the rectangle and f(x_i, y_j) is the approximate height of the surface on the rectangle; here (x_i, y_j) is some point in R_ij. If we sum over all rectangles we have $$\iint_R f(x,y)\,dA \approx \sum_{i=1}^{M}\sum_{j=1}^{N} f(x_i,y_j)\,A_{ij}.$$ In the limit as the size of the rectangles goes to 0, the sum on the right converges to a value which is the definite integral. The quantity f(x,y) dA in the definite integral represents the volume in some infinitesimal region around the point (x,y). The region is so small that f(x,y) varies only infinitesimally over it. The double integral sign says: add up the volumes of all the small regions in R.

Double Integrals over a Rectangular Region

Suppose that f(x,y) is continuous on a rectangular region a ≤ x ≤ b, c ≤ y ≤ d in the xy-plane. The double integral represents the volume under the surface. We can compute the volume by slicing the three-dimensional region like a loaf of bread. Suppose the slices are parallel to the y-axis, and consider the slice between x and x + dx. In the limit of infinitesimal thickness dx, the volume of the slice is the product of the cross-sectional area and the thickness dx. The cross-sectional area is the area under the curve f(x,y) for fixed x, with y varying between c and d. (Note that if the thickness dx is infinitesimal, x varies only infinitesimally on the slice, so we can assume that x is constant.) The area is given by the integral $$C(x) = \int_c^d f(x,y)\,dy,$$ where the variable of integration is y and x is a CONSTANT. The cross-sectional area depends on x, and this is why we write C = C(x). The volume of the slice between x and x + dx is C(x) dx. The total volume is the sum of the volumes of all the slices between x = a and x = b. If we substitute for C(x), we obtain $$V = \int_a^b C(x)\,dx = \int_a^b\left(\int_c^d f(x,y)\,dy\right)dx.$$ This is an example of an iterated integral: one integrates with respect to y first, then x.
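A short Python sketch of the Riemann-sum definition on a rectangle (the integrand and grid size are illustrative):

```python
# A short sketch of the Riemann-sum definition on a rectangle
# (the integrand and grid size are illustrative).

def double_integral(f, a, b, c, d, M=400, N=400):
    dx, dy = (b - a) / M, (d - c) / N
    total = 0.0
    for i in range(M):
        for j in range(N):
            x = a + (i + 0.5) * dx     # midpoint of subrectangle R_ij
            y = c + (j + 0.5) * dy
            total += f(x, y) * dx * dy # f(x_i, y_j) * A_ij
    return total

print(double_integral(lambda x, y: x * y, 0, 1, 0, 2))   # exact value: 1.0
```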
|
2018-04-20 18:59:53
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9018504023551941, "perplexity": 583.3178024273731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944677.39/warc/CC-MAIN-20180420174802-20180420194802-00502.warc.gz"}
|
https://brilliant.org/problems/badminton-ii/
|
Algebra Level 3
Two players are playing a shortened version of badminton: a $$k$$-point, $$n$$-game match with no deuce, where $$n, k > 1$$ are integers and $$n$$ is odd. Specifically, in each game, the player who first scores $$k$$ points wins that game. The winner of the match is the player who first wins $$\lceil \frac{n}{2}\rceil$$ of the $$n$$ games.
Is it possible for the loser's maximum possible point total to be no more than the winner's minimum possible point total? If so, how many choices of $$n$$ and $$k$$ are there?
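A brute-force sketch in Python, under one natural reading of the extremal totals (my own framing: the winner's minimum is $$k\lceil n/2\rceil$$, scoring zero in lost games, and the loser's maximum is $$k\lfloor n/2\rfloor + (k-1)\lceil n/2\rceil$$):

```python
# Brute force over small n, k under my own framing of the extrema:
# the winner's minimum total is k*ceil(n/2) (score 0 in every lost game);
# the loser's maximum total is k*floor(n/2) + (k-1)*ceil(n/2)
# (win floor(n/2) games with k points each, lose the rest k-1 to k).
from math import ceil, floor

hits = [(n, k)
        for n in range(3, 40, 2)       # n odd, n > 1
        for k in range(2, 40)          # k > 1
        if k * floor(n / 2) + (k - 1) * ceil(n / 2) <= k * ceil(n / 2)]
print(hits)                            # only (3, 2) appears in this range
```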
|
2017-09-22 16:49:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40585944056510925, "perplexity": 424.69172433456146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689028.2/warc/CC-MAIN-20170922164513-20170922184513-00175.warc.gz"}
|
https://www.mometrix.com/academy/matrices-geometric-transformations/
|
# Matrices – Geometric Transformations
Hello and welcome to this video about using matrices to transform figures on the coordinate plane! In this video, we will cover translations, dilations, reflections, and rotations.
Using addition, subtraction, scalar multiplication, and matrix multiplication, we can transform figures on the coordinate plane. All we need are the coordinates of the figure, which can be any shape and doesn't need to be closed. This triangle will be used to demonstrate each transformation:
First, we need to create the coordinate matrix for the figure. The general form of a coordinate matrix is $$\begin{bmatrix} x_1 & x_2 & x_3 & \cdots\\ y_1 & y_2 & y_3 & \cdots \end{bmatrix}$$
The key to remembering this matrix is remembering that row 1 represents the x-coordinates and row 2 represents the y-coordinates.
The coordinate matrix for our triangle, we'll call it T, is $$T=\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}$$
Now we can perform operations on our coordinate matrix in order to discover the coordinates of our triangle transformed in various ways.
Translations, or slides, can be performed simply by adding the amount and direction of the slide to the x- and y-coordinates separately.
Suppose we wanted to slide our triangle 2 units to the right. Since this is a change involving the x-coordinates moving in the positive direction, we would simply perform the matrix addition $$\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}+\begin{bmatrix} 2 & 2 & 2\\ 0 & 0 & 0 \end{bmatrix}=\begin{bmatrix} 3 & 4 & 5\\ 0 & 3 & 1 \end{bmatrix},$$ which causes 2 to be added to each x-coordinate.
The coordinates of our translated triangle are (3, 0), (4, 3), and (5, 1), as shown on the graph.
Pretty simple, right? Suppose we wanted to slide our triangle 3 units down from its original position. The matrix addition would look like $$\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}+\begin{bmatrix} 0 & 0 & 0\\ -3 & -3 & -3 \end{bmatrix}=\begin{bmatrix} 1 & 2 & 3\\ -3 & 0 & -2 \end{bmatrix},$$ which causes 3 to be subtracted from each y-coordinate.
Of course, if we wanted to perform both transformations at once, we would put our positive 2s in the top row to move the triangle along the x-axis, and put negative 3s in the bottom row to also move it along the y-axis: $$\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}+\begin{bmatrix} 2 & 2 & 2\\ -3 & -3 & -3 \end{bmatrix}=\begin{bmatrix} 3 & 4 & 5\\ -3 & 0 & -2 \end{bmatrix}$$
Suppose we wanted to make our triangle twice as large. We would simply multiply our coordinate matrix by a factor of 2: $$2\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}=\begin{bmatrix} 2 & 4 & 6\\ 0 & 6 & 2 \end{bmatrix}$$
This transformation is called a dilation.
Dilations can be either expansions or reductions. Instead of expanding our shape like we just did, we might have reduced its size by a factor of one-half: $$\frac{1}{2}\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}=\begin{bmatrix} 1/2 & 1 & 3/2\\ 0 & 3/2 & 1/2 \end{bmatrix}$$
We can also use matrices to reflect figures in various ways. This is done with matrix multiplication. Let’s briefly examine the big picture first:
If we multiply T by the 2×2 identity matrix, we don't change our triangle's size or position: $$\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}=\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}$$
Essentially, the x-coordinates are multiplied by 1 and the y-coordinates are multiplied by 1, so they don't change. Remember, the identity matrix is the multiplicative identity, so this is what's supposed to happen.
Suppose we wanted to reflect our triangle over the y-axis. None of the y-coordinates would change, and all the x-coordinates would become their opposites. Here's what the multiplication looks like: $$\begin{bmatrix} -1 & 0\\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 & 3\\ 0 & 3 & 1 \end{bmatrix}=\begin{bmatrix} -1 & -2 & -3\\ 0 & 3 & 1 \end{bmatrix}$$
All the x-coordinates are now opposite the originals, and our triangle has been reflected.
If you wanted to reflect the triangle over the x-axis, you would go through the same process, but use this matrix to multiply: $$\begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix}$$
The new y-coordinates are the opposites of the originals: (x, y) becomes (x, -y).
If you wanted to reflect the triangle over the origin, meaning reflect it simultaneously over both axes, you would use this matrix to multiply: $$\begin{bmatrix} -1 & 0\\ 0 & -1 \end{bmatrix}$$
The new x- and y-coordinates are both opposites of the originals: (x, y) becomes (-x, -y).
Lastly, let’s look at rotation. We can rotate our triangle by any angle measure in the counterclockwise direction. First, let’s look at some common rotations, then we’ll see how to rotate any angle amount.
Suppose we want to rotate our triangle by 90 degrees. We would simply multiply our coordinate matrix by this identity matrix, which gives us this matrix.
which gives us:
This is how the triangle is now positioned:
A 180-degree rotation is the same as reflecting about the origin, so we use the same matrix:
For a 270-degree counterclockwise rotation, we would use this matrix:
There are some clear parallels to reflections here, but there is also a trigonometry connection that allows us to rotate any angle amount we want. The general matrix to use to multiply looks like this:
We can quickly see where our three common matrices come from:
A 90-degree counterclockwise rotation would look like this,
whereas a 180-degree counterclockwise rotation might look like this
And finally, a 270-degree counterclockwise rotation looks like this
But now we can rotate by other angles, like 30 or 45 degrees if we want to.
Thanks for watching, and happy studying!
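As a quick sanity check of the matrices above, here is a short Python sketch; the use of numpy is my own choice for the illustration, not part of the lesson:

```python
# Quick check of the matrices above on the triangle T with
# vertices (1, 0), (2, 3), (3, 1); numpy is assumed for the illustration.
import numpy as np

T = np.array([[1, 2, 3],
              [0, 3, 1]])

translate = T + np.array([[2], [-3]])           # 2 right, 3 down
dilate = 2 * T                                  # expand by a factor of 2
reflect_y = np.array([[-1, 0], [0, 1]]) @ T     # reflect over the y-axis
rot90 = np.array([[0, -1], [1, 0]]) @ T         # 90 degrees counterclockwise

for name, M in [("translate", translate), ("dilate", dilate),
                ("reflect_y", reflect_y), ("rot90", rot90)]:
    print(name, M.tolist())
```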
## Practice Questions
Question #1:
The coordinates of the vertices of quadrilateral Q are shown in the matrix.
$$Q=\begin{bmatrix} 3 & 1 & 2 & 4\\ -3 & 0 & 0 & -2 \end{bmatrix}$$
What are the coordinates of the vertices of quadrilateral Q after it has been translated $$4$$ units left and $$3$$ units up?
$$\begin{bmatrix} -1 & -3 & -2 & 0\\ 0 & 3 & 3 & 1 \end{bmatrix}$$
$$\begin{bmatrix} 7 & 5 & 6 & 8\\ 0 & 3 & 3 & 1 \end{bmatrix}$$
$$\begin{bmatrix} -1 & -3 & -2 & 0\\ -6 & -3 & -3 & -5 \end{bmatrix}$$
$$\begin{bmatrix} 7 & 5 & 6 & 8\\ -6 & -3 & -3 & -5 \end{bmatrix}$$
To translate the triangle $$4$$ units left and $$3$$ units up we will add the original matrix to the matrix reflecting the translation:
$$\begin{bmatrix} -4 & -4 & -4 &-4 \\ 3& 3 & 3 & 3 \end{bmatrix}+\begin{bmatrix} 3 & 1 & 2 & 4\\ -3 & 0 & 0 & -2 \end{bmatrix}=\begin{bmatrix} -1 & -3 & -2 & 0\\ 0 & 3 & 3 & 1 \end{bmatrix}$$
Therefore, the coordinates of the vertices of quadrilateral Q after it has been translated $$4$$ units left and $$3$$ units up are $$\begin{bmatrix} -1 & -3 & -2 & 0\\ 0 & 3 & 3 & 1 \end{bmatrix}$$.
Question #2:
Matrix K shows the coordinates of the vertices of a triangle.
$$K=\begin{bmatrix} -1 & 2 &-1 \\ 3& 4 & 1 \end{bmatrix}$$
What are the coordinates of the vertices of triangle K after a dilation by a factor of $$3$$?
$$\begin{bmatrix} 2 & 5 & 2\\ 6& 7 & 4 \end{bmatrix}$$
$$\begin{bmatrix} -3 & 6 & -3\\ 9& 12 & 3 \end{bmatrix}$$
$$\begin{bmatrix} 4 & 1 & 4\\ 0& -1 & 2 \end{bmatrix}$$
$$\begin{bmatrix} 3 & -6 & 3\\ -9& -12 & -3 \end{bmatrix}$$
To dilate by a factor of $$3$$ we multiply all the entries in the matrix by $$3$$, giving $$\begin{bmatrix} -3 & 6 & -3\\ 9& 12 & 3 \end{bmatrix}$$, the coordinates of triangle K after it has been dilated by a factor of $$3$$.
Question #3:
The vertices of the coordinates of triangle W are shown in the matrix.
$$W=\begin{bmatrix} -4 & -1 & 0\\ -1 & 1 & -2 \end{bmatrix}$$
What are the coordinates of the vertices of triangle W after the size is reduced to $$\frac{1}{3}$$ of the original size?
$$\begin{bmatrix} -7 & -4 & -3\\ -4 & -2 & -5 \end{bmatrix}$$
$$\begin{bmatrix} -1 & 2 & 3\\ 2 & 4 & 1 \end{bmatrix}$$
$$\begin{bmatrix} \frac{4}{3} & \frac{1}{3} & 0\\ \frac{1}{3} & -\frac{1}{3} & \frac{2}{3} \end{bmatrix}$$
$$\begin{bmatrix} -\frac{4}{3} & -\frac{1}{3} & 0\\ -\frac{1}{3} & \frac{1}{3} & -\frac{2}{3} \end{bmatrix}$$
To reduce the size of a triangle we multiply the original coordinates by the factor it is being reduced by, in this case $$\frac{1}{3}$$, which results in the following coordinates for the vertices $$\begin{bmatrix} -\frac{4}{3} & -\frac{1}{3} & 0\\ -\frac{1}{3} & \frac{1}{3} & -\frac{2}{3} \end{bmatrix}$$.
Question #4:
The coordinates of the vertices of quadrilateral L are shown in the matrix.
$$L=\begin{bmatrix} 0 & 3 & 2 & -3\\ 1 & 3 & 4 & 5 \end{bmatrix}$$
What are the coordinates of the vertices of quadrilateral L after it has been reflected across the $$y$$-axis?
$$\begin{bmatrix} 0 & 3 & 2 & -3\\ -1 & -3 & -4 & -5 \end{bmatrix}$$
$$\begin{bmatrix} 0 & -3 & -2 & 3\\ -1 & -3 & -4 & -5 \end{bmatrix}$$
$$\begin{bmatrix} 0 & -3 & -2 & 3\\ 1 & 3 & 4 & 5 \end{bmatrix}$$
$$\begin{bmatrix} 1 & 3 & 4 & 5\\ 0 & 3 & 2 & -3 \end{bmatrix}$$
We can find the coordinates of the vertices of quadrilateral L after a reflection across the $$y$$-axis by multiplying the matrix by $$\begin{bmatrix} -1 & 0\\ 0 & 1 \end{bmatrix}$$, which leaves the y-coordinates unchanged and changes the sign of the x-coordinates: $$\begin{bmatrix} 0 & -3 & -2 & 3\\ 1 & 3 & 4 & 5 \end{bmatrix}$$.
Question #5:
The coordinates of the vertices of triangle T are shown in the matrix.
$$T=\begin{bmatrix} 2 & 5 & 3\\ -3 & 1 & 0 \end{bmatrix}$$
What are the coordinates of the vertices of triangle T after it has been rotated $$90°$$ counterclockwise?
$$\begin{bmatrix} 3 & -1 & 0\\ 2 & 5 & 3 \end{bmatrix}$$
$$\begin{bmatrix} -3 & 1 & 0\\ 2 & 5 & 3 \end{bmatrix}$$
$$\begin{bmatrix} -3 & 1 & 0\\ -2 & -5 & -3 \end{bmatrix}$$
$$\begin{bmatrix} 3 & -1 & 0\\ -2 & -5 & -3 \end{bmatrix}$$
To use matrices to rotate the coordinates of the vertices of triangle T by $$90°$$ we will multiply the coordinates of the vertices of triangle T by $$\begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}$$, which results in the following coordinates, $$\begin{bmatrix} 3 & -1 & 0\\ 2 & 5 & 3 \end{bmatrix}$$.
|
2022-07-05 06:20:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8028817772865295, "perplexity": 371.3772952934949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00531.warc.gz"}
|
https://www.xaprb.com/blog/2006/06/07/how-to-update-a-gcc-profile-on-gentoo/
|
# How to update a GCC profile on Gentoo
### Slots in Gentoo
Gentoo allows installing multiple versions of packages side-by-side in different “slots.” This avoids dependency problems. For example, it’s possible to run programs that require different versions of libraries, because they can all coexist happily (the lack of this feature on Microsoft Windows is known as DLL hell).
Often an upgraded package will install in a new slot, rather than replacing the previous version. Sometimes the old version will continue to be the system default, even though there’s a newer version available. GCC is such a package.
### GCC profiles
GCC, and certain other packages such as MySQL, require the system administrator to explicitly select which version should be used. With MySQL and some other packages, the eselect tool selects the version, but selecting a version for GCC is more complex. Not only is there a version to select, but a “profile.” The profile is a set of behaviors and optimizations. The gcc-config tool selects a GCC profile, which is sourced from /etc/profile.
### How to select a profile
On my workstation at work, I became root, then ran the following command to view available profiles:
# gcc-config -l
[1] i686-pc-linux-gnu-3.3.6 *
[2] i686-pc-linux-gnu-3.3.6-hardened
[3] i686-pc-linux-gnu-3.3.6-hardenednopie
[4] i686-pc-linux-gnu-3.3.6-hardenednopiessp
[5] i686-pc-linux-gnu-3.3.6-hardenednossp
[6] i686-pc-linux-gnu-3.4.6
[7] i686-pc-linux-gnu-3.4.6-hardened
[8] i686-pc-linux-gnu-3.4.6-hardenednopie
[9] i686-pc-linux-gnu-3.4.6-hardenednopiessp
[10] i686-pc-linux-gnu-3.4.6-hardenednossp
My current profile was i686-pc-linux-gnu-3.3.6, as indicated by the asterisk after that entry (gcc-config -c also prints this information). To choose a newer profile, I ran
# gcc-config i686-pc-linux-gnu-3.4.6
* Switching native-compiler to i686-pc-linux-gnu-3.4.6 ...
>>> Regenerating /etc/ld.so.cache... [ ok ]
* If you intend to use the gcc from the new profile in an already
* running shell, please remember to do:
* # source /etc/profile
As you can see, it switched me to the new profile, and advised me to update my environment variables if I wanted to use the new profile in my existing shell.
Update That’s not all; you need to do a bunch more work to make sure your system is stable and sane. Fortunately, Gentoo has a good document about this: Gentoo GCC Upgrade Guide. If I’d known about that document, I wouldn’t have written this article.
Update Wow, this is a major pain. The suggested way to do this basically involves re-compiling your entire system twice. That is not acceptable, especially if something fails to compile (as it seems to do fairly often, judging by other people’s experiences). This is my major gripe with Gentoo’s way of compiling from source. Actually, I have lots of gripes with that, but I’m still in love with Gentoo anyway.
Regardless, I’m going to try this guide on recompiling each package only once and see how it goes.
I'm Baron Schwartz, the founder and CEO of VividCortex. I am the author of High Performance MySQL and lots of open-source software for performance analysis, monitoring, and system administration. I contribute to various database communities such as Oracle, PostgreSQL, Redis and MongoDB. More about me.
|
2017-09-25 11:46:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21594147384166718, "perplexity": 3738.9596316579523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691476.47/warc/CC-MAIN-20170925111643-20170925131643-00587.warc.gz"}
|
https://phys.libretexts.org/Bookshelves/Electricity_and_Magnetism/Book%3A_Electromagnetics_II_(Ellingson)/10%3A_Antennas
|
# 10: Antennas
An antenna is a transducer; that is, a device which converts signals in one form into another form. In the case of an antenna, these two forms are (1) conductor-bound voltage and current signals and (2) electromagnetic waves. Traditional passive antennas are capable of this conversion in either direction.
Thumbnail: Polar plots of the horizontal cross sections of a (virtual) Yagi-Uda-antenna. Outline connects points with 3 dB field power compared to an ISO emitter. (CC BY-SA 4.0 International; Timothy Truckle via Wikipedia)
10: Antennas is shared under a CC BY-SA license and was authored, remixed, and/or curated by Steven W. Ellingson.
|
2022-05-18 16:47:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423036575317383, "perplexity": 1399.227717274397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522284.20/warc/CC-MAIN-20220518151003-20220518181003-00705.warc.gz"}
|
http://mathoverflow.net/api/userquestions.html?userid=17673&page=1&pagesize=10&sort=votes
|
# Questions (5 total)
- families of genus four curves with only hyperelliptic reduction (268 views; feb 20 at 17:17, David Lehavi)
- genus two curve with special automorphisms (135 views; oct 9 at 18:18, Felipe Voloch)
- genus four curve with $|3p| = \mathfrak{g}^1_3$ (124 views; apr 15 '12 at 18:24, Yusuf Mustopa)
- isomorphism of line bundles over $\mathrm{Spec}\,\mathbb{Z}$ (198 views; may 25 '12 at 8:14, Will Sawin)
2013-06-19 18:32:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5916479825973511, "perplexity": 9458.06700129811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709006458/warc/CC-MAIN-20130516125646-00064-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://kau.diva-portal.org/smash/person.jsf?pid=authority-person%3A3359
|
Bobylev, Alexander
##### Publications (10 of 46)
Bobylev, A., Pulvirenti, M. & Saffirio, C. (2013). From Particle Systems to the Landau Equations: A Consistency Result. Communications in Mathematical Physics, 319(3), 693-702
Open this publication in new window or tab >>From Particle Systems to the Landau Equations: A Consistency Result
2013 (English)In: Communications in Mathematical Physics, ISSN 0010-3616, E-ISSN 1432-0916, Vol. 319, no 3, p. 693-702Article in journal (Refereed) Published
##### Abstract [en]
We consider a system of N classical particles, interacting via a smooth, short-range potential, in a weak-coupling regime. This means that N tends to infinity when the interaction is suitably rescaled. The j-particle marginals, which obey to the usual BBGKY hierarchy, are decomposed into two contributions: one small but strongly oscillating, the other hopefully smooth. Eliminating the first, we arrive to establish the dynamical problem in term of a new hierarchy (for the smooth part) involving a memory term. We show that the first order correction to the free flow converges, as N →∞, to the corresponding term associated to the Landau equation. We also show the related propagation of chaos.
Springer, 2013
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-16083 (URN). 10.1007/s00220-012-1633-6 (DOI). 000318291500003.
##### Funder
Swedish Research Council, 621-2009-5751
Available from: 2012-12-04 Created: 2012-12-04 Last updated: 2017-12-07. Bibliographically approved.
Bobylev, A. & Potapenko, I. (2013). Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas. Journal of Computational Physics, 246, 123-144
Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas
2013 (English). In: Journal of Computational Physics, ISSN 0021-9991, E-ISSN 1090-2716, Vol. 246, p. 123-144. Article in journal (Refereed). Published.
##### Abstract [en]
A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau-Fokker-Planck equations by Boltzmann equations of quasi-Maxwellian kind. This means that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities, which allows one to make the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than $O(\sqrt{\epsilon})$, where $\epsilon$ is a parameter of approximation equivalent to the time step $\Delta t$ in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and Nanbu.
Elsevier, 2013
##### Keywords
Monte Carlo methods, Coulomb collisions, Landau-Fokker-Planck equations, Boltzmann equations, Error of approximation
##### National Category
Other Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-38602 (URN). 10.1016/j.jcp.2013.03.024 (DOI). 000320604000009.
Available from: 2015-11-30 Created: 2015-11-23 Last updated: 2017-12-01. Bibliographically approved.
Andriash, A. V., Bobylev, A. V., Brantov, A. V., Bychenkov, V. Y., Karpov, S. A. & Potapenko, I. F. (2013). Stochastic simulation of the nonlinear kinetic equation with high-frequency electromagnetic fields. PROBLEMS OF ATOMIC SCIENCE AND TECHNOLOGY (4), 233-237
Stochastic simulation of the nonlinear kinetic equation with high-frequency electromagnetic fields
2013 (English). In: PROBLEMS OF ATOMIC SCIENCE AND TECHNOLOGY, ISSN 1562-6016, no 4, p. 233-237. Article in journal (Refereed). Published.
##### Abstract [en]
A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau-Fokker-Planck (LFP) equations by Boltzmann equations of quasi-Maxwellian kind. High-frequency fields are included into consideration and comparison with the well-known results are given.
##### Place, publisher, year, edition, pages
KHARKOV INST PHYSICS & TECHNOLOGY, NATL SCIENCE CTR, 2013
##### Keywords
BASIC INTERACTIONS, CALCULATION METHODS, DIFFERENTIAL EQUATIONS, ELASTIC SCATTERING, ELECTROMAGNETIC INTERACTIONS, EQUATIONS, FUNCTIONS, INTEGRO-DIFFERENTIAL EQUATIONS, INTERACTIONS, KINETIC EQUATIONS
##### National Category
Mathematics, Mathematical Analysis
##### Research subject
Materials Science
##### Identifiers
urn:nbn:se:kau:diva-38666 (URN). 000324081600054.
Available from: 2015-11-23 Created: 2015-11-23 Last updated: 2016-08-12. Bibliographically approved.
Bobylev, A. & Esposito, R. (2013). Transport Coefficients in the 2-dimensional Boltzmann Equation. Kinetic and Related Models, 6(4), 789-800
Transport Coefficients in the 2-dimensional Boltzmann Equation
2013 (English). In: Kinetic and Related Models, ISSN 1937-5093, E-ISSN 1937-5077, Vol. 6, no 4, p. 789-800. Article in journal (Refereed). Published.
##### Abstract [en]
We show that a rarefied system of hard disks in a plane, described in the Boltzmann-Grad limit by the 2-dimensional Boltzmann equation, has bounded transport coefficients. This is proved by showing opportune compactness properties of the gain part of the linearized Boltzmann operator.
##### Keywords
Boltzmann equation, transport coefficients
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-38569 (URN). 10.3934/krm.2013.6.789 (DOI). 000327733900008.
Available from: 2016-01-22 Created: 2015-11-23 Last updated: 2017-11-30. Bibliographically approved.
Bobylev, A. & Windfäll, Å. (2012). Boltzmann equation and hydrodynamics at the Burnett level. Kinetic and Related Models, 5(2), 237-260
Boltzmann equation and hydrodynamics at the Burnett level
2012 (English). In: Kinetic and Related Models, ISSN 1937-5093, Vol. 5, no 2, p. 237-260. Article in journal (Refereed). Published.
##### Abstract [en]
The hydrodynamics at the Burnett level is discussed in detail. First we explain the shortest way to derive the classical Burnett equations from the Boltzmann equation. Then we sketch all the computations needed for details of these equations. It is well known that the classical Burnett equations are ill-posed. We therefore explain how to make a regularization of these equations and derive the well-posed generalized Burnett equations (GBEs). We discuss briefly an optimal choice of free parameters in GBEs and consider a specific version of these equations. It is remarkable that this version of GBEs is even simpler than the original Burnett equations, it contains only third derivatives of density. Finally we prove a linear stability for GBEs. We also present some numerical results on the sound propagation based on GBEs and compare them with the Navier-Stokes results and experimental data.
##### Place, publisher, year, edition, pages
American Institute of Mathematical Sciences, 2012
##### Keywords
Hydrodynamics, regularized Burnett equations, Stability, sound propagation.
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-8710 (URN). 10.3934/krm.2012.5.237 (DOI). 000302962700002.
Available from: 2011-11-03 Created: 2011-11-03 Last updated: 2012-12-04. Bibliographically approved.
Bobylev, A., Potapenko, I. F. & Karpov, S. A. (2012). DSMC Methods for Multicomponent Plasmas (1ed.). In: Michel Mareschal, Andrés Santos (Ed.), DSMC and Related Simulations: 28th International Symposium on Rarefied Gas Dynamics 2012. Paper presented at 28th International Symposium on Rarefied Gas Dynamics, Zaragoza, July 9-13th, 2012 (pp. 541-548). New York: American Institute of Physics (AIP)
DSMC Methods for Multicomponent Plasmas
2012 (English). In: DSMC and Related Simulations: 28th International Symposium on Rarefied Gas Dynamics 2012 / [ed] Michel Mareschal, Andrés Santos, New York: American Institute of Physics (AIP), 2012, 1, p. 541-548. Conference paper, Published paper (Refereed).
##### Abstract [en]
A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of the Landau-Fokker-Planck equations by the Boltzmann equations of a quasi-Maxwellian kind. This means that the total collision frequency for the corresponding Boltzmann equation does not depend on velocities. This allows one to make the simulation process very simple since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes (as particular cases) the well-known methods of Takizuka & Abe(1977) and Nanbu(1997) and generalizes the approach of Bobylev & Nanbu(2000). The numerical scheme of this paper is simpler than the schemes by Takizuka & Abe and by Nanbu. We derive it for the general case of multicomponent plasmas
##### Place, publisher, year, edition, pages
New York: American Institute of Physics (AIP), 2012 Edition: 1
##### Series
AIP Conference Proceedings, ISSN 0094-243X ; 1501
##### Keywords
Boltzmann equations, Coulomb collisions, Landau-Fokker-Planck equations, Monte Carlo methods
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-16082 (URN). 10.1063/1.4769589 (DOI). 000312411200070. 978-0-7354-1115-9 (ISBN).
##### Conference
28th International Symposium on Rarefied Gas Dynamics, Zaragoza, July 9-13th, 2012
##### Funder
Swedish Research Council, 621-2009-5751
##### Note
28th International Symposium on Rarefied Gas Dynamics, Zaragoza, July 9-13th, 2012
Available from: 2012-12-04 Created: 2012-12-04 Last updated: 2016-04-25. Bibliographically approved.
Bobylev, A., Karpov, S. & Potapenko, I. (2012). Monte-Carlo method for two component plasmas. Matematicheskoe Modelirovanie, 24(9), 35-49
Monte-Carlo method for two component plasmas
2012 (Russian). In: Matematicheskoe Modelirovanie, ISSN 0234-0879, Vol. 24, no 9, p. 35-49. Article in journal (Refereed). Published.
##### Abstract [en]
A new direct simulation Monte Carlo (DSMC) method for Coulomb collisions in the case of a two-component plasma is considered. A brief literature review and preliminary information concerning the problem are given. Then the idea that lies at the basis of the method is discussed and its scheme is provided. An illustrative numerical simulation of the relaxation of an initial distribution, for one and two sorts of particles in the 3D velocity space, is performed. Simulation results are compared with numerical results based on completely conservative finite difference schemes for the Landau-Fokker-Planck equation. An estimate of the calculation accuracy obtained from the numerical results is given.
##### Place, publisher, year, edition, pages
Moskva: Steklov Mathematical Institute, Russian Academy of Sciences, 2012
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-16081 (URN)
Available from: 2012-12-03 Created: 2012-12-03 Last updated: 2017-06-30. Bibliographically approved.
Bobylev, A. & Gamba, I. (2012). Solutions of the linear Boltzmann equation and some Dirichlet series. Forum Mathematicum, 24(2), 239-251
Solutions of the linear Boltzmann equation and some Dirichlet series
2012 (English). In: Forum Mathematicum, ISSN 1435-5337, Vol. 24, no 2, p. 239-251. Article in journal (Refereed). Published.
##### Abstract [en]
It is shown that a broad class of generalized Dirichlet series (including the polylogarithm, related to the Riemann zeta-function) can be presented as a class of solutions of the Fourier transformed spatially homogeneous linear Boltzmann equation with a special Maxwell-type collision kernel. The result is based on an explicit integral representation of solutions to the Cauchy problem for the Boltzmann equation. Possible applications to the theory of Dirichlet series are briefly discussed.
##### Place, publisher, year, edition, pages
Walter de Gruyter, 2012
##### Keywords
Boltzmann equation, Dirichlet series and functional equations, Riemann Zeta and $L$-functions
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-10556 (URN). 10.1515/form.2011.058 (DOI). 000303419000002.
##### Funder
Swedish Research Council, 2006-3404
Available from: 2012-02-08 Created: 2012-02-08 Last updated: 2012-12-04. Bibliographically approved.
Bobylev, A. & Vinerean (Bernhoff), M. (2012). Symmetric extensions of normal discrete velocity models (1ed.). In: Michel Mareschal, Andrés Santos (Ed.), 28th International Symposium on Rarefied Gas Dynamics 2012. Paper presented at 28th International Symposium on Rarefied Gas Dynamics 2012, July 9 - 13, Zaragoza (pp. 254-261). American Institute of Physics (AIP), 1501(1)
Symmetric extensions of normal discrete velocity models
2012 (English). In: 28th International Symposium on Rarefied Gas Dynamics 2012 / [ed] Michel Mareschal, Andrés Santos, American Institute of Physics (AIP), 2012, 1, Vol. 1501, no 1, p. 254-261. Conference paper, Published paper (Refereed).
##### Abstract [en]
In this paper we discuss a general problem related to spurious conservation laws for discrete velocity models (DVMs) of the classical (elastic) Boltzmann equation. Models with spurious conservation laws appeared already at the early stage of the development of discrete kinetic theory. The well-known theorem of uniqueness of collision invariants for the continuous velocity space very often does not hold for a set of discrete velocities. In our previous works we considered the general problem of the construction of normal DVMs, found a general algorithm for the construction of all such models, and presented a complete classification of normal DVMs with a small number n of velocities (n < 11). Even if we have a general method to classify all normal discrete kinetic models (and in particular DVMs), the existing method is relatively slow and the number of possible cases to check increases rapidly with n. We remarked that many of our normal DVMs appear to be axially symmetric. In this paper we consider a connection between symmetric transformations and normal DVMs. We first develop a new inductive method that, starting with a given normal DVM, leads by symmetric extensions to a new normal DVM. This method can produce many new normal DVMs with a larger number of velocities very quickly, showing that the class of normal DVMs contains a large subclass of symmetric models. We finally apply the method to several normal DVMs and construct new models that are not only normal, but also symmetric relative to more and more axes. We hope that such symmetric velocity sets can be used for DSMC methods of solving the Boltzmann equation.
##### Place, publisher, year, edition, pages
American Institute of Physics (AIP), 2012 Edition: 1
##### Series
AIP Conference Proceedings, ISSN 0094-243X, E-ISSN 1551-7616 ; 1501
##### Keywords
Kinetic theory, discrete kinetic (velocity) models, conservation laws
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-16072 (URN). 10.1063/1.4769516 (DOI). 000312411200032. 978-0-7354-1115-9 (ISBN).
##### Conference
28th International Symposium on Rarefied Gas Dynamics 2012, July 9 - 13, Zaragoza
Available from: 2012-12-03 Created: 2012-12-03 Last updated: 2017-10-30. Bibliographically approved.
Bobylev, A. & Windfäll, Å. (2011). Kinetic modeling of economic games with large number of participants. Kinetic and Related Models, 4(1), 169-185
Kinetic modeling of economic games with large number of participants
2011 (English). In: Kinetic and Related Models, ISSN 1937-5093, Vol. 4, no 1, p. 169-185. Article in journal (Refereed). Published.
##### Abstract [en]
We study a Maxwell kinetic model of socio-economic behavior introduced in the paper A. V. Bobylev, C. Cercignani and I. M. Gamba, Commun. Math. Phys., 291 (2009), 599-644. The model depends on three non-negative parameters $\{\gamma, q, s\}$, where $0 < \gamma \leq 1$ is the control parameter. Two other parameters are fixed by market conditions. Self-similar solutions of the corresponding kinetic equation for the distribution of wealth are studied in detail for various sets of parameters. In particular, we investigate the efficiency of control. Some exact solutions and numerical examples are presented. Existence and uniqueness of solutions are also discussed.
##### Place, publisher, year, edition, pages
American Institute of Mathematical Sciences, 2011
##### Keywords
Maxwell models, self-similar solutions, distribution of wealth, market economy
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:kau:diva-8711 (URN). 10.3934/krm.2011.4.169 (DOI). 000286926200010.
Available from: 2011-11-03 Created: 2011-11-03 Last updated: 2012-12-04. Bibliographically approved.
|
2019-04-19 20:50:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5302172303199768, "perplexity": 3910.0159609850657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528058.3/warc/CC-MAIN-20190419201105-20190419222039-00078.warc.gz"}
|
https://tensorflow.rstudio.com/tools/tensorboard.html
|
# TensorBoard
## Overview
The computations you’ll use TensorFlow for - like training a massive deep neural network - can be complex and confusing. To make it easier to understand, debug, and optimize TensorFlow programs, a suite of visualization tools called TensorBoard is available. You can use TensorBoard to visualize your TensorFlow graph, plot quantitative metrics about the execution of your graph, and show additional data like images that pass through it.
For example, TensorBoard can display the Keras accuracy and loss metrics recorded during training.
## Recording Data
The method for recording events for visualization by TensorBoard varies depending upon which TensorFlow interface you are working with:
- Keras: include the callback_tensorboard() when invoking the fit() function to train a model (see the sketch after this list). See the Keras documentation for additional details.
- Estimators: when using TF Estimators, TensorBoard events are automatically written to the model_dir specified when creating the estimator. See the Estimators documentation for additional details.
- Core API: when using the core API, you need to attach tf$summary$scalar operations to the graph for the metrics you want to record for viewing in TensorBoard. See the core documentation for additional details.
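For instance, the Keras case might look like the following sketch; the model, training data, and run directory name here are placeholders rather than part of the original page:

library(keras)

# Record TensorBoard events for this training run under "logs/run_a";
# `model`, `x_train`, and `y_train` are assumed to be defined elsewhere.
model %>% fit(
  x_train, y_train,
  epochs = 10,
  validation_split = 0.2,
  callbacks = list(callback_tensorboard(log_dir = "logs/run_a"))
)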
Note that in all cases it’s important that you use a unique directory to record training events (otherwise events from multiple training runs will be aggregated together).
You can remove and recreate event log directories between runs, or alternatively use the tfruns package to do training, which will automatically create a new directory for each training run.
## Viewing Data
To view TensorBoard data for a given set of runs you use the tensorboard() function, pointing it to a directory which contains TensorBoard logs.
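A minimal sketch (the directory name is illustrative):

library(tensorflow)

# Launch TensorBoard on the events recorded during training
tensorboard(log_dir = "logs/run_a")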
It's often useful to run TensorBoard while you are training a model. To do this, simply launch tensorboard on the training log directory right before you begin training.
Keras writes TensorBoard data at the end of each epoch, so you won't see any data in TensorBoard until 10-20 seconds after the end of the first epoch (TensorBoard automatically refreshes its display every 30 seconds during training).
### tfruns
If you are using the tfruns package to track and manage training runs, then there are some shortcuts available for the tensorboard() function; see the Comparing Runs section below.
## Comparing Runs
TensorBoard will automatically include all runs logged within the sub-directories of the specified log_dir. For example, if you log another run to a second sub-directory of the same parent directory and then call tensorboard() on that parent directory, the resulting TensorBoard visualization will show both runs together.
You can also pass multiple log directories. For example:
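A sketch of such a call (directory names illustrative):

# Compare two runs by passing both log directories
tensorboard(log_dir = c("logs/run_a", "logs/run_b"))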
### tfruns
If you are using the tfruns package to track and manage training runs, then you can easily pass multiple runs that match given criteria using the ls_runs() function. For example:
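A sketch of that pattern (the selection criterion is illustrative):

library(tfruns)

# View the two most recent training runs in TensorBoard
tensorboard(ls_runs(latest_n = 2))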
|
2019-01-24 09:38:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17723649740219116, "perplexity": 3427.101224894004}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584519757.94/warc/CC-MAIN-20190124080411-20190124102411-00171.warc.gz"}
|
http://rettacs.org/mathr_munu-frac12g_-munur-frac8pi-gc4t_munumath-made-simpler/
|
# $R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + g_{\mu\nu}\Lambda= \frac{8\pi G}{c^4}T_{\mu\nu}$ made simple(r)
Einstein’s Special Theory of relativity is capable of being responsibly taught in early undergraduate physics courses. It’s not easy but the mathematics is accessible and the concepts amenable to interesting analogies and occasional paradoxes.
The General Theory is an entirely different mountain to climb requiring substantially more preparation, focus, and stamina. Physical chemists such as myself have to have a very solid foundation in several branches of physics but GR has left many of us at Base Camp Motel 6 saying, “Someday…”
And then comes this beautiful 2 hour video by Dr. Physics A of the UK; "Einstein Field Equations - for beginners!" He takes the famous field equations as shown in the Subject and explains where each of the terms comes from and how they work together to describe space, time, and matter affecting one another. The Doc is refreshingly honest about what he is doing: basic introduction, not rigorous, covering only the essence. He's understating a marvelous accomplishment. Having watched this handcrafted lecture, I now think that I might, in time, be able to make another attempt at the classic text/doorstop of Misner, Thorne, and Wheeler. It will still be a hell of a climb but there's some idea of the destination and a path towards it.
For the full treatment, he recommends Prof. Susskind’s 2008 lecture series at Stanford:
And who is this Doctor Physics A who prepares so many videos for British high school students? Turns out he’s a nuclear physicist by training and an entertainer by avocation. Impressive!
|
2017-11-23 13:12:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30090218782424927, "perplexity": 1503.674856888965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806832.87/warc/CC-MAIN-20171123123458-20171123143458-00375.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=42&t=37768&p=127928
|
## Delocalized Pi Bonding
$sp, sp^{2}, sp^{3}, dsp^{3}, d^{2}sp^{3}$
Meigan Wu 2E
Posts: 66
Joined: Fri Sep 28, 2018 12:29 am
Been upvoted: 1 time
### Delocalized Pi Bonding
Can someone explain what delocalized pi bonding is and when does it exist?
Patrick Cai 1L
Posts: 93
Joined: Fri Sep 28, 2018 12:25 am
### Re: Delocalized Pi Bonding
Delocalized pi bonding is essentially the sharing of a pi bond over more than two nuclei. This can prominently be seen with benzene, which shares 3 pi bonds over 6 carbon nuclei, contributing to the ring of electron cloud density characteristic of benzene.
Carlos De La Torre 2L
Posts: 60
Joined: Tue Oct 09, 2018 12:16 am
### Re: Delocalized Pi Bonding
This is the cloud of electrons over a molecule. It can have three pi bonds on every other bond, and due to resonance there are characteristics of double bonding at every bond at the same time, resulting in a delocalized pi bond over the entire molecule.
Jasmine Chow 1F
Posts: 60
Joined: Fri Sep 28, 2018 12:16 am
Been upvoted: 1 time
### Re: Delocalized Pi Bonding
A delocalized pi bond is when electrons can freely move between more than two nuclei. You will often see this in a Lewis structure in carbon rings. Carbon rings often contain double bonds. When these pi bonds are spread out over the Lewis structure, they are delocalized, as they do not reside in only one area.
|
2019-08-18 23:54:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5522821545600891, "perplexity": 5827.1120764832995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314353.10/warc/CC-MAIN-20190818231019-20190819013019-00520.warc.gz"}
|
https://tex.stackexchange.com/questions/199161/parabolic-moebius-map-on-sphere-using-tikz?noredirect=1
|
# Parabolic Moebius map on sphere using tikz
There is a discussion here about coding the elliptic Moebius map. I however am interested in how I might do the parabolic one (case [d] in the figure).
The arrow heads are not an issue, but how do I get the lines on the surface of the sphere? Most (all?) tutorials only focus on lines of latitude and longitude.
• parametrization :) Sep 2 '14 at 18:15
• @cmhughes in general it is the TeX code I have problems with, not the maths. Can you elaborate just a little what you mean? Is there an easy way to plot parametrised curves on a 2 sphere and have it look similar to texample.net/tikz/examples/map-projections Sep 3 '14 at 18:44
# How to Draw Parametrized Curves on a Sphere in TikZ
The technique shown here parametrizes azimuth and elevation in order to draw things in a spherical coordinate system. These are then converted to the cartesian XYZ coordinate system for plotting.
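For reference, the conversion used throughout these examples, with azimuth az and elevation el measured from the equator on a unit sphere, is x = sin(az)*cos(el), y = cos(az)*cos(el), z = sin(el); this is exactly what the coordinate expressions in the plot commands below compute.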
Using the technique shown below, the requested parabolic Moebius map can be drawn using the following command (the formula was provided by the OP):
\def\theX{0.5*(1-cos(deg(x)))} % 0 to 1 to 0
\def\theY{0.5*(sin(deg(x)))} % 0 to 0.5 to 0 to -0.5 to 0
\addFGBGplot[domain=0:pi, samples=50, samples y=1] % plot command; domain and samples assumed
(
{sin(\i)*\theX}, % X coordinate
{2*sin(0.5*\i)*\theY}, % Y coordinate
{1-((1-cos(\i)))*\theX} % Z coordinate
);
And the result, when rendered from different angles:
## Using TikZ + PGFPlots
By combining TikZ and PGFPlots you can have a nice looking sphere and automatically hidden lines.
If you are drawing on the surface of a sphere, the sign of the z-depth directly shows whether that point is in front of or behind the sphere. The z-depth is obtained by multiplying the point in question with the view direction vector of the camera. Using the coordinate filtering mechanism of PGFPlots, you can exploit this to only draw parts of a path that are not hidden behind the sphere. This is what the styles only background and only foreground do. Note that there is a slight overlap from depth -0.05 to depth 0.05 to avoid gaps between foreground and background parts.
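Concretely, for view angles (az, el) the camera unit vector is C = (sin(az)*cos(el), -cos(az)*cos(el), sin(el)), computed below as \CameraX, \CameraY, and \CameraZ, and the z-depth of a point (rawx, rawy, rawz) is the dot product rawx*Cx + rawy*Cy + rawz*Cz, which is exactly the expression that the two filtering styles restrict.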
Additionally, you can use TikZ to draw a nice looking sphere instead of the grid sphere shown at the bottom of this post. However, this requires aligning the PGFPlots coordinate system with the TikZ coordinate system, which is not trivial for 3D plots with variable view angles.
First you have to adjust TikZ's XYZ coordinate system to show the perspective you like. Since I haven't found anything similar to PGFPlots' view option in TikZ, I have simply implemented a viewport style that sets the x, y and z keys and accepts the same arguments as view. Using this style on an axis environment aligns the PGFPlots coordinate system with the TikZ one. But it is still necessary to assign the same parameters to the view style because otherwise PGFPlots hides parts of the plot when looking from some angles.
Then you can use \addplot3 ({x-expr}, {y-expr}, {z-expr}); to draw your parametrized plot. In the example I defined helper macros \azimuth and \elevation to separate the definition of the curve from the coordinate transformation, such that the \draw command contains only the transformation.
Using the styles for z-depth filtering, you can then choose to draw only the visible or only the hidden parts of the plot. The macro \addFGBGplot automates this process for you by first drawing the hidden parts transparently and then drawing the visible parts using opaque lines.
\documentclass[margin=5pt, tikz]{standalone}
\usepackage{pgfplots}
\usepackage{xxcolor}
\pgfplotsset{compat=1.10}
% Declare nice sphere shading: http://tex.stackexchange.com/a/54239/12440
\makeatletter
\pgfdeclareradialshading[tikz@ball]{ball}{\pgfqpoint{-10bp}{15bp}}{% opening line of the shading declaration; focal point assumed
color(0bp)=(tikz@ball!0!white);
color(7bp)=(tikz@ball!0!white);
color(15bp)=(tikz@ball!70!black);
color(20bp)=(black!70);
color(30bp)=(black!70)}
\makeatother
% Style to set TikZ camera angle, like PGFPlots view
\tikzset{viewport/.style 2 args={
x={({cos(-#1)*1cm},{sin(-#1)*sin(#2)*1cm})},
y={({-sin(-#1)*1cm},{cos(-#1)*sin(#2)*1cm})},
z={(0,{cos(#2)*1cm})}
}}
% Styles to plot only points that are before or behind the sphere.
\pgfplotsset{only foreground/.style={
restrict expr to domain={rawx*\CameraX + rawy*\CameraY + rawz*\CameraZ}{-0.05:100},
}}
\pgfplotsset{only background/.style={
restrict expr to domain={rawx*\CameraX + rawy*\CameraY + rawz*\CameraZ}{-100:0.05}
}}
% Automatically plot transparent lines in background and solid lines in foreground
\newcommand{\addFGBGplot}[2][]{% command body inferred from its uses below; opacity value assumed
\addplot3[#1, only background, opacity=0.25] #2;
\addplot3[#1, only foreground] #2;
}
\newcommand{\ViewAzimuth}{-30}
\newcommand{\ViewElevation}{30}
\begin{document}
\begin{tikzpicture}
% Compute camera unit vector for calculating depth
\pgfmathsetmacro{\CameraX}{sin(\ViewAzimuth)*cos(\ViewElevation)}
\pgfmathsetmacro{\CameraY}{-cos(\ViewAzimuth)*cos(\ViewElevation)}
\pgfmathsetmacro{\CameraZ}{sin(\ViewElevation)}
\path[use as bounding box] (-1,-1) rectangle (1,1); % Avoid jittering animation
% Draw a nice looking sphere
\begin{scope}
\clip (0,0) circle (1);
\begin{scope}[transform canvas={rotate=-20}]
\shade [ball color=white] (0,0.5) ellipse (1.8 and 1.5);
\end{scope}
\end{scope}
\begin{axis}[
hide axis,
view={\ViewAzimuth}{\ViewElevation}, % Set view angle
every axis plot/.style={very thin},
disabledatascaling, % Align PGFPlots coordinates with TikZ
anchor=origin, % Align PGFPlots coordinates with TikZ
viewport={\ViewAzimuth}{\ViewElevation}, % Align PGFPlots coordinates with TikZ
]
% Plot equator and two longitude lines with occlusion
\addFGBGplot[domain=0:2*pi, samples=100, samples y=1] ({cos(deg(x))}, {sin(deg(x))}, 0);
\addFGBGplot[domain=0:2*pi, samples=100, samples y=1] (0, {sin(deg(x))}, {cos(deg(x))});
\addFGBGplot[domain=0:2*pi, samples=100, samples y=1] ({sin(deg(x))}, 0, {cos(deg(x))});
% Draw heart shape with occlusion
\def\azimuth{deg(0.7*sin(x)+pi)}
\def\elevation{deg(1.1*abs(sin(0.67*x))-0.4)}
\addFGBGplot[domain=-pi:pi, samples=100, samples y=1] % plot command; domain and samples assumed
(
{sin(\azimuth)*cos(\elevation)}, % X coordinate
{cos(\azimuth)*cos(\elevation)}, % Y coordinate
{sin(\elevation)} % Z (vertical) coordinate
);
\end{axis}
\end{tikzpicture}
\end{document}
## Using only TikZ
Using only TikZ, you can draw plots on the surface of a sphere, but there is no automatic way to hide parts which are behind the sphere.
If you don't want to use PGFPlots for whatever reason, you can also plot parametrized functions with the TikZ \draw plot command. However, the lack of a coordinate filtering mechanism means that the curve is always drawn regardless of whether it is on the visible or on the hidden side of the sphere, as you can see in the animated image above. For simple shapes on a sphere you can fix this by adjusting the domain option such that only the visible parts of the curve are actually drawn, like the equator in the example code. Of course, you have to do that every time you change the viewport.
\documentclass[tikz,margin=5pt]{standalone}
\usepackage{tikz}
% Declare nice sphere shading: http://tex.stackexchange.com/a/54239/12440
\makeatletter
\pgfdeclareradialshading[tikz@ball]{ball}{\pgfqpoint{-10bp}{15bp}}{% opening line of the shading declaration; focal point assumed
color(0bp)=(tikz@ball!0!white);
color(7bp)=(tikz@ball!0!white);
color(15bp)=(tikz@ball!70!black);
color(20bp)=(black!70);
color(30bp)=(black!70)}
\makeatother
% Style to set camera angle, like PGFPlots view style
\tikzset{viewport/.style 2 args={
x={({cos(-#1)*1cm},{sin(-#1)*sin(#2)*1cm})},
y={({-sin(-#1)*1cm},{cos(-#1)*sin(#2)*1cm})},
z={(0,{cos(#2)*1cm})}
}}
% Convert from spherical to cartesian coordinates
\newcommand{\ToXYZ}[2]{
{sin(#1)*cos(#2)}, % X coordinate
{cos(#1)*cos(#2)}, % Y coordinate
{sin(#2)} % Z (vertical) coordinate
}
\begin{document}
\def\Rotation{-10}
\begin{tikzpicture}
% Draw shaded circle that looks like a sphere
\begin{scope}
\clip (0,0) circle (1);
\begin{scope}[transform canvas={rotate=-20}]
\shade [ball color=white] (0,0.5) ellipse (1.8 and 1.5);
\end{scope}
\end{scope}
% Draw things in actual 3D coordinates.
\begin{scope}[viewport={\Rotation}{30}, very thin]
% Draw equator (manually hidden behind sphere)
\draw[domain=90-\Rotation:270-\Rotation, variable=\azimuth, smooth] plot (\ToXYZ{\azimuth}{0});
\draw[domain=-90-\Rotation:90-\Rotation, variable=\azimuth, smooth, densely dotted] plot (\ToXYZ{\azimuth}{0});
% Draw "poles"
\draw[domain=0:360, variable=\azimuth, smooth] plot (\ToXYZ{\azimuth}{80});
\draw[domain=0:360, variable=\azimuth, smooth, densely dotted] plot (\ToXYZ{\azimuth}{-80});
% Draw two longitude lines for orientation
\foreach \azimuth in {0,90} {
\draw[domain=0:360, variable=\elevation, smooth] plot (\ToXYZ{\azimuth}{\elevation});
}
% Draw parametrized plot in spherical coordinates
\def\azimuth{deg(0.7*sin(\t)+pi)}
\def\elevation{deg(1.1*abs(sin(0.67*\t))-0.4)}
\draw[red, domain=-180:180, variable=\t, samples=101] plot (\ToXYZ{\azimuth}{\elevation});
\end{scope}
\end{tikzpicture}
\end{document}
## Using only PGFPlots
See the updated answer above. This is left here only for reference.
The pgfplots package allows you to draw parametrized 3d plots rather easily. It can also handle occlusion with z buffer=sort, but only inside a single \addplot command; later plots are simply drawn on top of earlier plots. For example, the far side of the sphere is not drawn because there are other faces in front of it, but the equator that is added later is simply drawn on top of the sphere.
\documentclass[margin=5pt]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.8}
\begin{document}
\begin{tikzpicture}
\begin{axis}[view={60}{30}, width=15cm, axis equal image]
% Draw sphere (example from the pgfplots manual)
\addplot3[ % opening of the plot command; restored to balance the options below
surf, z buffer=sort, colormap/cool, point meta=-z,
samples=20, domain=-1:1, y domain=0:2*pi
]
(
{sqrt(1-x^2) * cos(deg(y))}, % X coordinate
{sqrt(1-x^2) * sin(deg(y))}, % Y coordinate
x % Z (vertical) coordinate
);
% Black twiddly line-thing that I just made up (parametrized)
\def\azimuth{(sin(deg(2*x)))}
\def\elevation{(0.5*cos(deg(x))+1)}
\addplot3[samples=50, domain=0:2*pi] % plot command; options assumed (the comments below mention 50 samples)
(
{cos(deg(\azimuth))*cos(deg(\elevation))}, % X coordinate
{sin(deg(\azimuth))*cos(deg(\elevation))}, % Y coordinate
{sin(deg(\elevation))} % Z (vertical) coordinate
);
% Draw equator to show missing occlusion
\addplot3[samples=20, domain=0:2*pi] % plot command; options assumed
({cos(deg(x))}, {sin(deg(x))}, 0);
\end{axis}
\end{tikzpicture}
\end{document}
• May be using gnuplot as a backend would speed up the compilation time. Sep 5 '14 at 23:09
• Ewwwww gnuplot. Maybe it would, but I don't like it. Sep 5 '14 at 23:49
• I mean passing the calculations to gnuplot, which is doable with TikZ. Sep 6 '14 at 0:09
• Ah okay. Yes, that might be possible if you have gnuplot installed. Note however, that the TikZ version is already really fast, only the PGFPlots version is obscenely slow (specifically, when drawing the black path which has only 50 samples). So the gain from using gnuplot is probably minimal and must be weighed against the overhead of starting the gnuplot processes. Sep 6 '14 at 0:27
• Your pgfplots example suffers from a little unexpected gotcha: your two line plots are actually surface plots. The first is a matrix of 50x50, the second a matrix of 20x20 samples. Adding samples y=1 solves the issue. Sep 6 '14 at 7:33
I am still stunned by Fritz's great answer, and even more by what one can do with pgfplots. Here is a small addendum: it is possible to automatically discriminate between visible and hidden points with TikZ "only". One "only" has to modify the plot handler a bit. (This has severe side-effects: the paths get cut into tiny pieces and can no longer be used for intersections and so on. Of course, one could just redraw the paths with a \path command, but this will certainly not win a prize for elegance.) Anyway, here is the code.
\documentclass[tikz,border=3.14mm]{standalone}
\usepackage{tikz-3dplot}
\makeatletter
% from https://tex.stackexchange.com/a/375604/121799
%along x axis
\define@key{x sphericalkeys}{radius}{\def\myradius{#1}}% radius key assumed
\define@key{x sphericalkeys}{theta}{\def\mytheta{#1}}
\define@key{x sphericalkeys}{phi}{\def\myphi{#1}}
\tikzdeclarecoordinatesystem{x spherical}{%
\setkeys{x sphericalkeys}{#1}%
\pgfpointxyz{\myradius*cos(\mytheta)}{\myradius*sin(\mytheta)*cos(\myphi)}{\myradius*sin(\mytheta)*sin(\myphi)}}% body assumed; polar axis along x
%along y axis
\define@key{y sphericalkeys}{radius}{\def\myradius{#1}}% radius key assumed
\define@key{y sphericalkeys}{theta}{\def\mytheta{#1}}
\define@key{y sphericalkeys}{phi}{\def\myphi{#1}}
\tikzdeclarecoordinatesystem{y spherical}{%
\setkeys{y sphericalkeys}{#1}%
\pgfpointxyz{\myradius*sin(\mytheta)*sin(\myphi)}{\myradius*cos(\mytheta)}{\myradius*sin(\mytheta)*cos(\myphi)}}% body assumed; polar axis along y
%along z axis
\define@key{z sphericalkeys}{radius}{\def\myradius{#1}}% radius key assumed
\define@key{z sphericalkeys}{theta}{\def\mytheta{#1}}
\define@key{z sphericalkeys}{phi}{\def\myphi{#1}}
\tikzdeclarecoordinatesystem{z spherical}{%
\setkeys{z sphericalkeys}{#1}%
\pgfmathsetmacro{\Xtest}{sin(\tdplotmaintheta)*cos(\tdplotmainphi-90)*sin(\mytheta)*cos(\myphi)
+sin(\tdplotmaintheta)*sin(\tdplotmainphi-90)*sin(\mytheta)*sin(\myphi)
+cos(\tdplotmaintheta)*cos(\mytheta)}
% \Xtest is the projection of the coordinate on the normal vector of the visible plane
\pgfmathsetmacro{\ntest}{ifthenelse(\Xtest<0,0,1)}
\ifnum\ntest=0
\xdef\MCheatOpa{0.3}
\else
\xdef\MCheatOpa{1}
\fi
\pgfpointxyz{\myradius*sin(\mytheta)*cos(\myphi)}{\myradius*sin(\mytheta)*sin(\myphi)}{\myradius*cos(\mytheta)}}% return point assumed; standard spherical convention
%\typeout{\mytheta,\tdplotmaintheta;\myphi,\tdplotmainphi:\ntest}
%%%%%%%%%%%%%%%%%
\tikzoption{spherical smooth}[]{\let\tikz@plot@handler=\pgfplothandlersphericalcurveto}
\pgfdeclareplothandler{\pgfplothandlersphericalcurveto}{}{%
point macro=\pgf@plot@curveto@handler@spherical@initial,
jump macro=\pgf@plot@smooth@next@spherical@moveto,
end macro=\pgf@plot@curveto@handler@spherical@finish
}
\def\pgf@plot@smooth@next@spherical@moveto{%
\pgf@plot@curveto@handler@spherical@finish%
\global\pgf@plot@startedfalse%
\global\let\pgf@plotstreampoint\pgf@plot@curveto@handler@spherical@initial%
}
\def\pgf@plot@curveto@handler@spherical@initial#1{%
\pgf@process{#1}%
\ifx\tikz@textcolor\pgfutil@empty%
\else
\pgfsetstrokecolor{\tikz@textcolor}
\fi
\pgf@xa=\pgf@x%
\pgf@ya=\pgf@y%
\pgf@plot@first@action{\pgfqpoint{\pgf@xa}{\pgf@ya}}%
\xdef\pgf@plot@curveto@first{\noexpand\pgfqpoint{\the\pgf@xa}{\the\pgf@ya}}%
\global\let\pgf@plot@curveto@first@support=\pgf@plot@curveto@first%
\global\let\pgf@plotstreampoint=\pgf@plot@curveto@handler@spherical@second%
}
\def\pgf@plot@curveto@handler@spherical@second#1{%
\pgf@process{#1}%
\xdef\pgf@plot@curveto@second{\noexpand\pgfqpoint{\the\pgf@x}{\the\pgf@y}}%
\global\let\pgf@plotstreampoint=\pgf@plot@curveto@handler@spherical@third%
\global\pgf@plot@startedtrue%
}
\def\pgf@plot@curveto@handler@spherical@third#1{%
\pgf@process{#1}%
\xdef\pgf@plot@curveto@current{\noexpand\pgfqpoint{\the\pgf@x}{\the\pgf@y}}%
% compute difference vector:
\pgf@xa=\pgf@x%
\pgf@ya=\pgf@y%
\pgf@process{\pgf@plot@curveto@first}%
\advance\pgf@xa by-\pgf@x% difference vector (restored as in the standard curveto handler)
\advance\pgf@ya by-\pgf@y%
% compute support directions:
\pgf@xa=\pgf@plottension\pgf@xa%
\pgf@ya=\pgf@plottension\pgf@ya%
% first marshal:
\pgf@process{\pgf@plot@curveto@second}%
\pgf@xb=\pgf@x%
\pgf@yb=\pgf@y%
\advance\pgf@xb by\pgf@xa% support points offset along the tension direction (restored as in the standard curveto handler)
\advance\pgf@yb by\pgf@ya%
\pgf@xc=\pgf@x%
\pgf@yc=\pgf@y%
\advance\pgf@xc by-\pgf@xa%
\advance\pgf@yc by-\pgf@ya%
\@ifundefined{MCheatOpa}{}{%
\pgf@plotstreamspecial{\pgfsetstrokeopacity{\MCheatOpa}}}
\edef\pgf@marshal{\noexpand\pgfsetstrokeopacity{\noexpand\MCheatOpa}
\noexpand\pgfpathcurveto{\noexpand\pgf@plot@curveto@first@support}%
{\noexpand\pgfqpoint{\the\pgf@xb}{\the\pgf@yb}}{\noexpand\pgf@plot@curveto@second}
\noexpand\pgfusepathqstroke
\noexpand\pgfpathmoveto{\noexpand\pgf@plot@curveto@second}}%
{\pgf@marshal}%
%\pgfusepathqstroke%
% Prepare next:
\global\let\pgf@plot@curveto@first=\pgf@plot@curveto@second%
\global\let\pgf@plot@curveto@second=\pgf@plot@curveto@current%
\xdef\pgf@plot@curveto@first@support{\noexpand\pgfqpoint{\the\pgf@xc}{\the\pgf@yc}}%
}
\def\pgf@plot@curveto@handler@spherical@finish{%
\ifpgf@plot@started%
\pgfpathcurveto{\pgf@plot@curveto@first@support}{\pgf@plot@curveto@second}{\pgf@plot@curveto@second}%
\fi%
}
\makeatother
\begin{document}
\begin{tikzpicture}
\tdplotsetmaincoords{72}{100}
\begin{scope}[tdplot_main_coords]
% \draw[-latex] (0,0,0) -- (\RadiusSphere,0,0) node[below]{$x$};
% \draw[-latex] (0,0,0) -- (0,\RadiusSphere,0) node[left]{$y$};
% \draw[-latex] (0,0,0) -- (0,0,\RadiusSphere) node[left]{$z$};
\begin{scope}
\foreach \X in {0,20,...,180}
\draw[blue] plot[spherical smooth,variable=\x,domain=-180:180,samples=60]
(z spherical cs: radius=2, theta=\x, phi=\X); % coordinate spec assumed
\foreach \X in {0,20,...,180}
\draw[gray] plot[spherical smooth,variable=\x,domain=-180:180,samples=60]
(x spherical cs: radius=2, theta=\x, phi=\X); % coordinate spec assumed
\end{scope}
\begin{scope}[xshift=8cm]
\foreach \X in {0,20,...,180}
\draw[gray] plot[spherical smooth,variable=\x,domain=-180:180,samples=60]
(x spherical cs: radius=2, theta=\x, phi=\X); % coordinate spec assumed
\foreach \X in {0,20,...,180}
\draw[blue] plot[spherical smooth,variable=\x,domain=-180:180,samples=60]
(z spherical cs: radius=2, theta=\x, phi=\X); % coordinate spec assumed
\end{scope}
\begin{scope}[yshift=-8cm]
\draw[blue] plot[spherical smooth,variable=\x,domain=0:180,samples=360]
(z spherical cs: radius=2, phi=\x, theta={10*sin(\x)}); % coordinate spec assumed
\end{scope}
\begin{scope}[xshift=8cm,yshift=-8cm]
\foreach \X in {1,...,10}
{\draw[blue] plot[spherical smooth,variable=\x,domain=0:180,samples=360]
(z spherical cs: radius=2, phi=\x, % coordinate prefix assumed
theta= {10*\X*sin(\x)});
\draw[blue] plot[spherical smooth,variable=\x,domain=0:180,samples=360]
(z spherical cs: radius=2, phi=\x+180, % coordinate prefix assumed
theta= {10*\X*sin(\x)});}
\end{scope}
\end{scope}
\end{tikzpicture}
\end{document}
An important note: I was using essentially the same trick here and here. If anyone reading this feels that posting a very similar answer over and over is not a good idea, I will be happy to remove this post. (Perhaps also important: the spurious almost horizontal and vertical lines are not in the PDF, they come only after the conversion to PNG, and I dunno why. They seem to be related to the shade sphere directive.)
• I like this approach! How does one change the line styles (e.g. dashed, thickness)? Apr 10 '20 at 5:26
If the idea is to simply visualize the parabolic Moebius map, you can exploit how the \tdplotsphericalsurfaceplot command in tikz-3dplot works. Here is an example:
\documentclass[tikz,border=10pt]{standalone}
\usepackage{tikz,tikz-3dplot}
\begin{document}
\tdplotsetmaincoords{135}{350}
\begin{tikzpicture}[tdplot_main_coords,fill opacity=.7,]
\tdplotsetpolarplotrange{0}{180}{0}{180}
\tdplotsphericalsurfaceplot{36}{36}{6*sin(\tdplotphi)*sin(\tdplottheta)}{black!70!red}{red}{}{}{}
\end{tikzpicture}
\end{document}
Of course, this is a rather crude solution. However, if you don't need anything fancier, this is an example with minimal code. The details about the \tdplotsphericalsurfaceplot command can be found in the tikz-3dplot manual.
Based on Fritz's answer above, here is how I did it:
\documentclass[margin=5pt, tikz]{standalone}
\usepackage{pgfplots}
\usepackage{xxcolor}
\pgfplotsset{compat=1.10}
% Declare nice sphere shading: http://tex.stackexchange.com/a/54239/12440
\makeatletter
\pgfdeclareradialshading[tikz@ball]{ball}{\pgfqpoint{-10bp}{15bp}}{% opening line of the shading declaration; focal point assumed
color(0bp)=(tikz@ball!0!white);
color(7bp)=(tikz@ball!0!white);
color(15bp)=(tikz@ball!70!black);
color(20bp)=(black!70);
color(30bp)=(black!70)}
\makeatother
% Style to set TikZ camera angle, like PGFPlots view
\tikzset{viewport/.style 2 args={
x={({cos(-#1)*1cm},{sin(-#1)*sin(#2)*1cm})},
y={({-sin(-#1)*1cm},{cos(-#1)*sin(#2)*1cm})},
z={(0,{cos(#2)*1cm})}
}}
% Styles to plot only points that are before or behind the sphere.
\pgfplotsset{only foreground/.style={
restrict expr to domain={rawx*\CameraX + rawy*\CameraY + rawz*\CameraZ}{-0.05:100},
}}
\pgfplotsset{only background/.style={
restrict expr to domain={rawx*\CameraX + rawy*\CameraY + rawz*\CameraZ}{-100:0.05}
}}
% Automatically plot transparent lines in background and solid lines in foreground
\newcommand{\addFGBGplot}[2][]{% command body inferred from its uses below; opacity value assumed
\addplot3[#1, only background, opacity=0.25] #2;
\addplot3[#1, only foreground] #2;
}
\newcommand{\ViewAzimuth}{20}
\newcommand{\ViewElevation}{30}
\begin{document}
\begin{tikzpicture}
% Compute camera unit vector for calculating depth
\pgfmathsetmacro{\CameraX}{sin(\ViewAzimuth)*cos(\ViewElevation)}
\pgfmathsetmacro{\CameraY}{-cos(\ViewAzimuth)*cos(\ViewElevation)}
\pgfmathsetmacro{\CameraZ}{sin(\ViewElevation)}
\path[use as bounding box] (-1,-1) rectangle (1,1); % Avoid jittering animation
% Draw a nice looking sphere
\begin{scope}
\clip (0,0) circle (1);
\begin{scope}[transform canvas={rotate=-20}]
\shade [ball color=white] (0,0.5) ellipse (1.8 and 1.5);
\end{scope}
\end{scope}
\begin{axis}[
hide axis,
view={\ViewAzimuth}{\ViewElevation}, % Set view angle
every axis plot/.style={very thin},
disabledatascaling, % Align PGFPlots coordinates with TikZ
anchor=origin, % Align PGFPlots coordinates with TikZ
viewport={\ViewAzimuth}{\ViewElevation}, % Align PGFPlots coordinates with TikZ
]
\foreach \i in {10,30,...,350} {
\def\theX{0.5*(1-cos(deg(x)))} % 0 to 1 to 0
\def\theY{0.5*(sin(deg(x)))} % 0 to 0.5 to 0 to -0.5 to 0
\addFGBGplot[domain=0:pi, samples=50, samples y=1] % plot command; domain and samples assumed
(
{sin(\i)*\theX}, % X coordinate
{2*sin(0.5*\i)*\theY}, % Y coordinate
{1-((1-cos(\i)))*\theX} % Z coordinate
);
}
\end{axis}
\end{tikzpicture}
\end{document}
Updated. ^^This is the final version. Thanks @Fritz and everyone for helping :)
• Why not save it as a pdf? Nov 13 '14 at 23:12
• png, pdf ... all good :) Nov 13 '14 at 23:33
• No, I mean, not as a raster image. Nov 13 '14 at 23:45
• oh. that's just due to my cluelessness Nov 13 '14 at 23:48
• Surely you should have given the tick to Fritz since your answer is based on this -- and especially given all of the effort that he put into his great answer.
– user30471
Nov 4 '15 at 2:25
|
2022-01-18 15:07:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8792402148246765, "perplexity": 3903.5481516545615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00493.warc.gz"}
|
https://oddmuse.org/wiki/New_Text_Formatting_Rules
|
# New Text Formatting Rules
The output of a rule is sent directly to the browser. Therefore, the output can contain raw HTML. The output of a rule is either “dirty” or “clean”. By default, the rule output is considered “clean” and stored in the HTML cache. The rule will not be called again unless the page is edited. Rules that produce raw HTML output may produce “clean” output.
Rules can also be used to create dynamic output – output that may change every time the page is viewed. The output of the rule must therefore be marked as “dirty”. This will prevent it from being cached.
In order to define new text formatting rules, you need to define a function called MyRules in the config file. The function will be called on every piece of text that did not trigger any other rule. The function must use a regular expression using \G.
Here is the relevant piece of information from the perlre manpage:
The \G assertion can be used to chain global matches (using m//g), as described in “Regexp Quote-Like Operators” in perlop. It is also useful when writing “lex”-like scanners, when you have several patterns that you want to match against consequent substrings of your string, see the previous reference. Currently \G is only fully supported when anchored to the start of the pattern; while it is permitted to use it elsewhere, as in /(?<=\G..)./g, some such uses (/.\G/g, for example) currently cause problems, and it is recommended that you avoid such usage for now.
Therefore, at the core of the MyRules function, you will have an expression like the following:
sub MyRules {
if (m/\G.../gc) {
...
return ...;
}
return;
}
If the function returns a string, it will be used instead of the text matched. Unless you take extra precautions (see below), the string you return will be put in the HTML cache. The next time the page is viewed, MyRules will no longer be called, since output for the text in question is cached.
You can have multiple rules:
sub MyRules {
if (m/\G.../gc) {
...
return ...;
} elsif (m/\G.../gc) {
...
return ...;
}
return;
}
If none of your rules matched, return nothing—but don’t write return undef because that returns (undef) in list context.
Here is an example for legal texts in Switzerland. A text of the form Art. 2 Abs. 3 URG will be linked to the page Art. 2 URG and to the anchor #3. You can create anchors on pages using [#3], and this will be replaced by a §3 in the text.
sub MyRules {
  if (m/\G(Art\. (\d+) Abs\. (\d+) ([^ ]+))/gc) {
    return $q->a({-href=>$ScriptName.'/'.UrlEncode("Art._$2_$4").'#'.$3}, $1);
  }
  return;
}
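For comparison, here is an even simpler clean rule; the pattern and the image path are invented for illustration:

sub MyRules {
  if (m/\G:-\)/gc) {
    # Clean output: cached in the HTML cache until the page is edited
    return '<img class="smiley" src="/pics/smile.png" alt=":-)" />';
  }
  return;
}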
## No caching: producing “dirty” output
In order to avoid caching, you have to write some extra code. The idea is to store the raw expression as a dirty block, print something else, and return the empty string, thereby avoiding the default functionality:
• store the raw expression as a dirty block: use m/\G(...)/gc and call Dirty($1); this will store your matched string as a dirty block.
• print something else: whatever the result of your rule, just print it.
• return the empty string: always return a string if your rule matched; since you already printed something else, you should return ''. Return nothing when your rule did not match at all.

Basic structure:

sub MyRules {
if (m/\G(...)/gc) {
Dirty($1);
...
return '';
}
return;
}
sub MyRules {
  # pattern reconstructed; the exact bracket escaping is assumed
  if (m/\G(\[\[download:($FreeLinkPattern)\|([^]]+)\]\])/cg
      or m!\G(\[\[download:($FreeLinkPattern)\]\])!cg) {
    Dirty($1);
    print GetDownloadLink($2, undef, undef, $3);
    return '';
  }
  return;
}

## Searching: saving and restoring $_ and pos
All the text formatting rules rely on using the \G assertion. This depends on Perl remembering where the last match left off. This relies on pos:

Returns the offset of where the last m//g search left off for the variable in question ($_ is used when the variable is not specified).

This means that you cannot do any further matches on $_ within your text formatting rule. If you do, you will change pos, and then the rest of your page will either be garbled or loop forever (as your rule matches again and again).
The solution is to save pos and $_, do your stuff, and then restore the two. When you restore the two, order is important, as one relies on the other. The basic structure:

sub MyRules {
if (m/\G(...)/gc) {
Dirty($1);
my ($oldpos, $old_) = ((pos), $_);
...
($_, pos) = ($old_, $oldpos); # restore \G (assignment order matters!)
return '';
}
return;
}
This example replaces the pseudo-tag <source> with a bit of text, some links, and a list of pages. Thus, if more matching pages are created, the output of this rule must change, even if the current page has not changed.
• it does a search within the code used to produce the HTML
• it creates multiple HTML blocks (closing tags have to be printed)
• the output depends on other pages (it must not be cached)
push(@MyRules, \&SourceTemplate);
sub SourceTemplate {
if (m/\G(<source\/*>)/gc) {
Clean(CloseHtmlEnvironments()); # if block level dirty block
Dirty($1);
my ($oldpos, $old_) = ((pos), $_);
my $id = GetId();
my $text = NormalToFree($id);
my $esc = UrlEncode($text);
my $tag = lc($id);
my $file = $tag;
$file =~ tr/_/-/;
my %hash = ();
foreach my $id (SearchTitleAndBody("tag:$tag tag:issue tag:open")) {
$hash{$id} = 1;
}
my @found = map { $q->li(GetPageLink($_)) } sort keys %hash;
push(@found, "none") unless @found;
# This is a dirty rule: the HTML output changes even if this page
# is not edited. It depends on other pages. Thus, it must not be
# cached. It's "dirty".
print <<"EOT";
<h2>Source Code</h2>
<p>How to get a copy of the source code from git:</p>
<pre>git clone https://alexschroeder.ch/cgit/$file</pre>
<p>You can also <a href="https://alexschroeder.ch/cgit/$file/about">browse the code on the web</a>.</p>
<h2>Issues</h2>
<p>
<a class="button" href="https://alexschroeder.ch/software?action=search-issue;tag=$tag">Search issue</a> <a class="button" href="https://alexschroeder.ch/software?action=new-issue;tag=$esc">New issue</a>
<a class="button" href="https://alexschroeder.ch/software?action=changed-issues;tag=$tag">Changes</a> </p> <h2>Open</h2> <div class="search list"> <ul>@found</ul> </div> EOT Clean(AddHtmlEnvironment('p')); ($_, pos) = ($old_,$oldpos); # restore \G (assignment order matters!)
return '';
}
return;
}
Note that we return the empty string when the rule matched, and we return nothing when the rule did not match.
Make sure you call Dirty() before actually printing anything. This will print all the clean blocks waiting to get printed. If you print first, then your dirty block and the last clean block will appear in the wrong order after the first save (the rest of the page will appear correct).
A common mistake is to call Dirty(), print something, and return a string. Oddmuse will have registered a dirty block, and then printed something, and the string you returned was considered to be a clean block. So the first time around everything looks cool. The next time around Oddmuse sees the dirty block, and renders it again. Then it finds a clean block in the cache, and prints it. You duplicate the output of your dirty blocks.
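A sketch of that mistaken shape, for contrast; the rule name and the tag are invented for illustration:

sub BrokenClock {
  if (m/\G(<clock>)/gc) {
    Dirty($1);               # registers a dirty block
    print scalar localtime;  # dynamic output, fine so far
    return '<b>time</b>';    # WRONG: this string is cached as a clean block,
                             # so after the first save the output is duplicated
  }
  return;
}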
## Closing words
If you find yourself writing many rules, consider putting them in a Module.
|
2021-05-06 07:12:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3042677342891693, "perplexity": 3818.254417268575}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00625.warc.gz"}
|
https://www.storyofmathematics.com/fractions-to-decimals/11-16-as-a-decimal/
|
# What Is 11/16 as a Decimal + Solution With Free Steps?
The fraction 11/16 as a decimal is equal to 0.6875.
Division can appear to be the most complex of all mathematical operations, but it is not that difficult, because there is a way to handle it: Long Division, the process used to solve a division posed in fractional form.
Here is a comprehensive explanation of how to use the long division method to solve the given fraction, 11/16, and generate the decimal equivalent.
## Solution
A fraction consists of two parts. The part above the fraction line is known as the Dividend, and the part below the line is known as the Divisor.
Dividend = 11
Divisor = 16
When we solve the fraction, we produce a new term known as Quotient, which is the result of the fraction.
Quotient = Dividend $\div$ Divisor = 11 $\div$ 16
Now, by using the Long Division we can solve the problem as:
Figure 1
## 11/16 Long Division Method
Here are the steps of the Long Division method through which we can solve the desired division.
We had:
11 $\div$ 16
Since we have to divide the two numbers and the numerator is smaller than the denominator (11 is less than 16), we first add a decimal point to the quotient. After doing this, we can multiply our Dividend by 10, and it becomes 110.
After dividing the terms, the remaining part is referred to as the Remainder.
110 $\div$ 16 $\approx$ 6
Where:
16 x 6 = 96
This indicates that a Remainder was also generated from this division, and it is equal to 110 – 96 = 14. So after the first step, we have a remainder of 14.
Since the remainder is again less than the divisor, we multiply it by 10, but this time there is no need to add another decimal point, because one has already been added to the Quotient. The Remainder from the previous step is 14, so multiplying it by 10 gives 140, which we divide by the divisor in the next step.
140 $\div$ 16 $\approx$ 8
Where:
16 x 8 = 128
The remainder after this step is 12, so multiplying it by 10 gives 120. The third decimal digit is then found as follows:
120 $\div$ 16 $\approx$ 7
Where:
16 x 7 = 112
As a result, we have a Quotient of 0.687 so far and a Remainder of 8. One more step settles the division: multiplying 8 by 10 gives 80, and 80 $\div$ 16 = 5 exactly, leaving a Remainder of 0. The division therefore terminates, and the exact answer is 0.6875.
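The digit-by-digit procedure above is easy to express in code. The following short Python sketch is our illustration (not part of the original solution) of the multiply-by-10-and-divide loop:
def long_division(dividend, divisor, digits=6):
    # assumes dividend < divisor, as with 11/16
    result = "0."
    remainder = dividend
    for _ in range(digits):
        remainder *= 10
        result += str(remainder // divisor)
        remainder %= divisor
        if remainder == 0:
            break
    return result

print(long_division(11, 16))  # prints 0.6875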
Images/mathematical drawings are created with GeoGebra.
|
2022-10-02 05:56:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7945685386657715, "perplexity": 310.9003461213637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00456.warc.gz"}
|
https://hal.inria.fr/hal-02425684
|
# Minimax Rates for Estimating the Dimension of a Manifold
1 DATASHAPE - Understanding the Shape of Data
CRISAM - Inria Sophia Antipolis - Méditerranée , Inria Saclay - Ile de France
Abstract: Many algorithms in machine learning and computational geometry require, as input, the intrinsic dimension of the manifold that supports the probability distribution of the data. This parameter is rarely known and therefore has to be estimated. We characterize the statistical difficulty of this problem by deriving upper and lower bounds on the minimax rate for estimating the dimension. First, we consider the problem of testing the hypothesis that the support of the data-generating probability distribution is a well-behaved manifold of intrinsic dimension $d_1$ versus the alternative that it is of dimension $d_2$, with $d_1 < d_2$.
Document type: Journal articles
Cited literature: [18 references]
https://hal.inria.fr/hal-02425684
Contributor: Jisu KIM
Submitted on : Tuesday, December 31, 2019 - 2:48:08 AM
Last modification on : Friday, July 8, 2022 - 10:07:44 AM
Long-term archiving on: : Wednesday, April 1, 2020 - 1:40:37 PM
### File
DimensionEstimator_arxiv.pdf
Files produced by the author(s)
### Citation
Jisu Kim, Alessandro Rinaldo, Larry Wasserman. Minimax Rates for Estimating the Dimension of a Manifold. Journal of Computational Geometry, Carleton University, Computational Geometry Laboratory, 2019, 10 (1), ⟨10.20382/jocg.v10i1a3⟩. ⟨hal-02425684⟩
|
2022-08-13 00:26:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.468147337436676, "perplexity": 1574.3345902984074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571847.45/warc/CC-MAIN-20220812230927-20220813020927-00230.warc.gz"}
|
https://mathoverflow.net/questions/320420/can-bellows-make-loops
|
# Can bellows make loops?
Can a flexible polyhedron (hyperbolic or Euclidean) have a non-simply-connected configuration space that contains no singular polyhedra?
• Could you explain what the phrase "singular polyhedra" means? Thanks. – Joseph O'Rourke Jan 9 at 15:23
• Ones with angles between faces equal to $0$ or $\pi$. Without excluding such things, you can easily find examples among $1$-dimensional polyhedra. – Denis T. Jan 10 at 6:56
|
2019-03-24 05:47:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44593706727027893, "perplexity": 2816.1124127371695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203326.34/warc/CC-MAIN-20190324043400-20190324065400-00296.warc.gz"}
|
https://astronomy.stackexchange.com/questions/38334/approximate-formula-to-find-velocity-from-cosmological-redshift
|
# Approximate formula to find velocity from cosmological redshift
From IOAA 2013 (Greece), Theory question no. 15: the problem states that an approximate formula to find velocity from cosmological redshift is $$v = c\ln(1+z)$$ and that it is often used by cosmologists. I did a quick Google search but found nothing similar to this formula.
So, where did this equation come from and is it really often used by cosmologists?
This formula is exact if the expansion is linear ($$a(t) = H_0 t$$) and all peculiar velocities are zero. In that case, the comoving distance to the object is $$\int_{t_\text{then}}^{t_\text{now}} \frac{c\,\mathrm dt}{a(t)} = \frac{c}{H_0} \ln \frac{t_\text{now}}{t_\text{then}} = \frac{c}{H_0} \ln (1{+}z)$$ and the present recessional velocity is $$H_0$$ times that.
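As a quick numerical sanity check (our addition, not part of the original answer), for small $z$ the formula reduces to the familiar $v \approx cz$:
import math

c = 299792.458  # speed of light in km/s
for z in (0.001, 0.01, 0.1, 1.0):
    v_log = c * math.log(1 + z)  # v = c ln(1+z)
    v_lin = c * z                # low-redshift approximation
    print(z, round(v_log, 1), round(v_lin, 1))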
|
2021-10-24 18:50:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8258803486824036, "perplexity": 289.9020468776125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587593.0/warc/CC-MAIN-20211024173743-20211024203743-00660.warc.gz"}
|
https://gmatclub.com/forum/flor-is-choosing-three-of-five-colors-of-paint-to-use-for-her-art-proj-262607.html
|
# Flor is choosing three of five colors of paint to use for her art proj
SVP (03 Apr 2018)
Flor is choosing three of five colors of paint to use for her art project at school. Two of the colors, Green and Yellow, cannot both be selected. How many different ways can Flor choose the colors for her project?
A. 7
B. 9
C. 10
D. 13
E. 17
Math Expert (03 Apr 2018)
QZ wrote:
Flor is choosing three of five colors of paint to use for her art project at school. Two of the colors, Green and Yellow, cannot both be selected. How many different ways can Flor choose the colors for her project?
A. 7
B. 9
C. 10
D. 13
E. 17
Use the $$Total - Restriction$$ technique.
If she chooses Green and Yellow together, there are three choices for the third color, so she can choose {Green, Yellow, any of the remaining three} in 3 ways.
The total number of ways to choose 3 out of 5 is 5C3 = 10.
$$Total - Restriction = 10 - 3 = 7$$
##### General Discussion
Target Test Prep Representative (05 Apr 2018)
QZ wrote:
Flor is choosing three of five colors of paint to use for her art project at school. Two of the colors, Green and Yellow, cannot both be selected. How many different ways can Flor choose the colors for her project?
A. 7
B. 9
C. 10
D. 13
E. 17
There are three cases: 1) green is one of the three colors chosen, but yellow isn't, 2) yellow is one of the three colors chosen, but green isn't, and 3) neither green nor yellow is chosen. Let's analyze each case.
Case 1: Green is one of the three colors chosen, but yellow isn't.
If green is chosen but yellow isn't, then we have to choose 2 more colors from the 3 remaining colors. The number of ways to do that is 3C2 = 3.
Case 2: Yellow is one of the three colors chosen, but green isn't.
This is analogous to case 1, so there are 3 ways for this case.
Case 3: Neither green nor yellow is chosen.
If neither color is chosen, then we have to choose 3 colors from the 3 remaining colors. The number of ways to do that is 3C3 = 1.
Thus, the total number of ways Flor can choose the colors for her project is 3 + 3 + 1 = 7.
Alternate Solution:
We can use the formula:
Total number of ways to pick 3 colors = number of ways where yellow and green are both included + number of ways where yellow and green are not both included
Since we are choosing 3 colors from 5 available colors, there are 5C3 = (5 x 4)/2 = 10 ways of doing this when there are no restrictions.
The number of ways where yellow and green are both included can be found easily by observing that yellow and green occupy two of the three slots; any one of the remaining three colors can occupy the final slot. So, there are 3 ways to choose colors where yellow and green are both included.
Thus, the number of ways to pick colors where yellow and green are not included together is 10 - 3 = 7.
Jeffrey Miller, Target Test Prep
Manager (05 Apr 2018)
Number of ways to select colors without green and yellow together = (ways of selecting 3 colors out of 5) − (ways of selecting 1 more color from the remaining 3, assuming green and yellow are both picked) = 5C3 − 3C1 = 10 − 3 = 7.
CEO (12 Sep 2018)
AkshdeepS wrote:
Flor is choosing three of five colors of paint to use for her art project at school. Two of the colors, Green and Yellow, cannot both be selected. How many different ways can Flor choose the colors for her project?
A. 7
B. 9
C. 10
D. 13
E. 17
The posters above me have demonstrated the approach that I'd typically use.
However, it's important to note that, when the answer choices are so small (as they are here), we should also consider the straightforward strategy of listing and counting.
Let R, B, P, G and Y represent the colors Red, Blue, Purple, Green and Yellow respectively.
Now let's list the possible outcomes that meet all of the given conditions:
- RBP
- RBG
- RBY
- RPG
- RPY
- BPG
- BPY
Done!! So, there are 7 possible outcomes.
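As a quick verification of the listing approach (our addition, not from the thread), a few lines of Python reproduce the count by brute force:
from itertools import combinations

colors = ["Red", "Blue", "Purple", "Green", "Yellow"]
valid = [c for c in combinations(colors, 3)
         if not ("Green" in c and "Yellow" in c)]
print(len(valid))  # 7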
Cheers,
Brent
GMATH Teacher (16 Sep 2018)
AkshdeepS wrote:
Flor is choosing three of five colors of paint to use for her art project at school. Two of the colors, Green and Yellow, cannot both be selected. How many different ways can Flor choose the colors for her project?
A. 7
B. 9
C. 10
D. 13
E. 17
RENAME colors to "unblock your brain": A, B, C, D and E.
Restriction: A and B cannot be BOTH chosen.
? = Number of choices of 3 colors among the 5 given, restriction obeyed.
First Scenario: neither A nor B is chosen.
There is just one possibility: CDE.
Second Scenario: A is chosen (hence B is not)
There are just three possibilities: ACD, ACE, ADE.
Third Scenario: B is chosen (hence A is not)
This is similar to the previous scenario, hence additional 3 cases.
All cases mentioned above are MUTUALLY EXCLUSIVE, therefore they may be added: 7 possibilities.
All 7 cases are EXHAUSTIVE, hence we are sure the answer is (at least 7 and) not greater than 7.
This solution follows the notations and rationale taught in the GMATH method.
Regards,
Fabio.
|
2019-10-14 20:36:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4801623523235321, "perplexity": 2377.028831542833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655310.17/warc/CC-MAIN-20191014200522-20191014224022-00157.warc.gz"}
|
http://cran.ma.ic.ac.uk/web/packages/riskRegression/vignettes/IPA.html
|
# Index of Prediction Accuracy (IPA)
## 1 Introduction
This vignette demonstrates how our software calculates the index of prediction accuracy 1. We distinguish three settings:
• uncensored binary outcome
• right censored survival outcome (no competing risks)
• right censored time to event outcome with competing risks
The Brier score is a loss-type metric of prediction performance where lower values correspond to better prediction performance. The IPA formula for a model is very much the same as the formula for $R^2$ in a standard linear regression model:
\begin{equation*} \operatorname{IPA} = 1-\frac{\text{BrierScore(Prediction model)}}{\text{BrierScore(Null model)}} \end{equation*}
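Spelled out in code, the definition is a one-liner. The toy numbers below are taken from the binary-outcome results later in this vignette; the snippet itself is our illustration, not part of the original:
## IPA from the Brier scores of a model and the null model
ipa <- function(brier.model, brier.null) 1 - brier.model / brier.null
ipa(14.1, 15.2)  ## about 0.073, i.e. an IPA of 7.3%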
## 2 Package version
data.table: 1.12.2
survival: 2.44.1.1
riskRegression: 2019.11.3
Publish: 2019.11.2
## 3 Data
For the purpose of illustrating our software we simulate data similar to the data of an active surveillance prostate cancer study 2. Specifically, we generate a learning set (n=278) and a validation set (n=208). In both data sets we define a binary outcome variable for the progression status after one year. Note that the smallest censored event time is larger than 1 year, and hence the event status after one year is uncensored.
set.seed(18)
astrain <- simActiveSurveillance(278)
astest <- simActiveSurveillance(208)
astrain[,Y1:=1*(event==1 & time<=1)]
astest[,Y1:=1*(event==1 & time<=1)]
## 4 IPA for a binary outcome
To illustrate the binary outome setting we analyse the 1-year progression status. We have complete 1-year followup, i.e., no dropout or otherwise censored data before 1 year. We fit two logistic regression models, one including and one excluding the biomarker erg.status:
lrfit.ex <- glm(Y1~age+lpsaden+ppb5+lmax+ct1+diaggs,data=astrain,family="binomial")
lrfit.inc <- glm(Y1~age+lpsaden+ppb5+lmax+ct1+diaggs+erg.status,data=astrain,family="binomial")
publish(lrfit.inc,org=TRUE)
Variable Units OddsRatio CI.95 p-value
age 0.98 [0.90;1.06] 0.6459
ppb5 1.09 [0.92;1.28] 0.3224
lmax 1.08 [0.83;1.41] 0.5566
ct1 cT1 Ref
cT2 1.00 [0.29;3.41] 0.9994
diaggs GNA Ref
3/3 0.60 [0.27;1.34] 0.2091
3/4 0.25 [0.05;1.30] 0.1006
erg.status neg Ref
pos 3.66 [1.90;7.02] <0.0001
Based on these models we predict the risk of progression within one year in the validation set.
astest[,risk.ex:=100*predictRisk(lrfit.ex,newdata=astest)]
astest[,risk.inc:=100*predictRisk(lrfit.inc,newdata=astest)]
age lpsaden ppb5 lmax ct1 diaggs erg.status Y1 risk.ex risk.inc
62.6 -3.2 4.9 4.6 cT1 3/3 pos 0.0 23.2 36.3
66.9 -1.7 0.7 4.1 cT1 3/3 pos 1.0 14.0 24.7
65.4 -1.5 4.0 3.9 cT1 3/3 neg 0.0 17.4 10.6
59.0 -2.8 6.8 3.3 cT2 3/4 pos 1.0 10.7 21.1
55.6 -3.5 2.8 3.0 cT1 3/3 neg 0.0 21.9 11.8
71.1 -2.6 3.3 3.7 cT1 3/3 neg 0.0 15.0 9.5
To calculate the Index of Prediction Accuracy (IPA) we call the Score function as follows on a list which includes the two logistic regression models.
X1 <- Score(list("Exclusive ERG"=lrfit.ex,"Inclusive ERG"=lrfit.inc),data=astest,
formula=Y1~1,summary="ipa",se.fit=0L,metrics="brier",contrasts=FALSE)
X1
Metric Brier:
Results by model:
model Brier IPA
1: Null model 15.2 0.0
2: Exclusive ERG 14.8 2.7
3: Inclusive ERG 14.1 7.3
NOTE: Values are multiplied by 100 and given in % (use print(...,percent=FALSE) to avoid this.
NOTE: The lower Brier the better, the higher IPA the better.
Both logistic regression models have a lower Brier score than the Null model which ignores all predictor variables. Hence, both models have a positive IPA. The logistic regression model which excludes the ERG biomarker scores IPA=2.68% and the logistic regression model which includes the ERG biomarer scores IPA = 7.29%. The difference in IPA between the two models is 4.62%. This means that when we omit erg.status from the model, then we loose 4.62% in IPA compared to the full model. It is sometimes interesting to compare the predictor variables according to how much they contribute to the prediction performance. Generally, this is a non-trivial task which depends on the order in which the variables are entered into the model, the functional form and also on the type of model. However, we can drop one variable at a time from the full model and for each variable compute the loss in IPA as the difference between IPA of the full model and IPA of the model where the variable is omitted.
IPA(lrfit.inc,newdata=astest)
Variable Brier IPA IPA.drop
1: Null model 15.2 0.0 7.3
2: Full model 14.1 7.3 0.0
3: age 14.1 7.4 -0.1
5: ppb5 14.2 6.9 0.4
6: lmax 14.1 7.2 0.1
7: ct1 14.1 7.3 -0.0
8: diaggs 14.6 4.4 2.9
9: erg.status 14.8 2.7 4.6
NOTE: Values are multiplied by 100 and given in % (use print(...,percent=FALSE) to avoid this.
NOTE: IPA.drop = IPA(Full model) - IPA.
## 5 IPA for right censored survival outcome
To illustrate the survival outome setting we analyse the 3-year progression-free survival probability. So, that the combined endpoint is progression or death. We fit two Cox regression models, one including and one excluding the biomarker erg.status:
coxfit.ex <- coxph(Surv(time,event!=0)~age+lpsaden+ppb5+lmax+ct1+diaggs,data=astrain,x=TRUE)
coxfit.inc <- coxph(Surv(time,event!=0)~age+lpsaden+ppb5+lmax+ct1+diaggs+erg.status,data=astrain,x=TRUE)
publish(coxfit.inc,org=TRUE)
Variable Units HazardRatio CI.95 p-value
age 1.03 [0.99;1.07] 0.124
ppb5 1.21 [1.12;1.30] <0.001
lmax 1.06 [0.94;1.19] 0.359
ct1 cT1 Ref
cT2 0.97 [0.57;1.66] 0.916
diaggs GNA Ref
3/3 0.53 [0.37;0.76] <0.001
3/4 0.32 [0.18;0.58] <0.001
erg.status neg Ref
pos 1.80 [1.35;2.38] <0.001
Based on these models we predict the risk of progression or death within 3 years in the validation set.
astest[,risk.ex:=100*predictRisk(coxfit.ex,newdata=astest,times=3)]
astest[,risk.inc:=100*predictRisk(coxfit.inc,newdata=astest,times=3)]
age lpsaden ppb5 lmax ct1 diaggs erg.status Y1 risk.ex risk.inc
62.6 -3.2 4.9 4.6 cT1 3/3 pos 0.0 67.5 80.7
66.9 -1.7 0.7 4.1 cT1 3/3 pos 1.0 48.5 60.3
65.4 -1.5 4.0 3.9 cT1 3/3 neg 0.0 67.4 60.8
59.0 -2.8 6.8 3.3 cT2 3/4 pos 1.0 51.1 70.1
55.6 -3.5 2.8 3.0 cT1 3/3 neg 0.0 41.5 35.5
71.1 -2.6 3.3 3.7 cT1 3/3 neg 0.0 65.5 57.5
To calculate the Index of Prediction Accuracy (IPA) we call the Score function as follows on a list which includes the two Cox regression models.
X2 <- Score(list("Exclusive ERG"=coxfit.ex,"Inclusive ERG"=coxfit.inc),data=astest,
formula=Surv(time,event!=0)~1,summary="ipa",se.fit=0L,metrics="brier",contrasts=FALSE,times=3)
X2
Metric Brier:
Results by model:
model times Brier IPA
1: Null model 3 24.0 0.0
2: Exclusive ERG 3 22.4 6.4
3: Inclusive ERG 3 19.9 17.1
NOTE: Values are multiplied by 100 and given in % (use print(...,percent=FALSE) to avoid this.
NOTE: The lower Brier the better, the higher IPA the better.
It is sometimes interesting to compare the predictor variables according to how much they contribute to the prediction performance. Generally, this is a non-trivial task which depends on the order in which the variables are entered into the model, the functional form and also on the type of model. However, we can drop one variable at a time from the full model and for each variable compute the loss in IPA as the difference between IPA of the full model and IPA of the model where the variable is omitted.
IPA(coxfit.inc,newdata=astest,times=3)
Variable times Brier IPA IPA.drop
1: Null model 3 24.0 0.0 17.1
2: Full model 3 19.9 17.1 0.0
3: age 3 19.7 17.6 -0.6
4: lpsaden 3 20.1 16.2 0.8
5: ppb5 3 21.3 11.2 5.9
6: lmax 3 19.9 16.7 0.4
7: ct1 3 19.9 17.0 0.1
8: diaggs 3 20.8 13.0 4.1
9: erg.status 3 22.4 6.4 10.7
NOTE: Values are multiplied by 100 and given in % (use print(...,percent=FALSE) to avoid this.
NOTE: IPA.drop = IPA(Full model) - IPA.
## 6 IPA for right censored time to event outcome with competing risks
To illustrate the competing risk setting we analyse the 3-year risk of progression in presence of the competing risk of death without progression. We fit two sets of cause-specific Cox regression models 3, one including and one excluding the biomarker erg.status:
cscfit.ex <- CSC(Hist(time,event)~age+lpsaden+ppb5+lmax+ct1+diaggs,data=astrain)
cscfit.inc <- CSC(Hist(time,event)~age+lpsaden+ppb5+lmax+ct1+diaggs+erg.status,data=astrain)
publish(cscfit.inc)
Variable Units 1 2
age 1.04 [1.00;1.09] 1.01 [0.95;1.07]
ppb5 1.14 [1.04;1.24] 1.39 [1.22;1.58]
lmax 1.19 [1.03;1.39] 0.82 [0.67;1.00]
ct1 cT1 Ref Ref
cT2 1.31 [0.73;2.36] 0.31 [0.07;1.28]
diaggs GNA Ref Ref
3/3 0.54 [0.35;0.84] 0.56 [0.29;1.10]
3/4 0.44 [0.22;0.88] 0.19 [0.06;0.60]
erg.status neg Ref Ref
pos 2.20 [1.56;3.11] 1.20 [0.71;2.04]
Based on these models we predict the risk of progression in presence of the competing risk of death within 3 years in the validation set.
astest[,risk.ex:=100*predictRisk(cscfit.ex,newdata=astest,times=3,cause=1)]
astest[,risk.inc:=100*predictRisk(cscfit.inc,newdata=astest,times=3,cause=1)]
age lpsaden ppb5 lmax ct1 diaggs erg.status Y1 risk.ex risk.inc
62.6 -3.2 4.9 4.6 cT1 3/3 pos 0.0 49.7 65.5
66.9 -1.7 0.7 4.1 cT1 3/3 pos 1.0 45.2 60.1
65.4 -1.5 4.0 3.9 cT1 3/3 neg 0.0 50.6 42.3
59.0 -2.8 6.8 3.3 cT2 3/4 pos 1.0 46.0 69.0
55.6 -3.5 2.8 3.0 cT1 3/3 neg 0.0 26.3 19.9
71.1 -2.6 3.3 3.7 cT1 3/3 neg 0.0 51.8 42.2
To calculate the Index of Prediction Accuracy (IPA) we call the Score function as follows on a list which includes the two sets of cause-specific Cox regression models.
X3 <- Score(list("Exclusive ERG"=cscfit.ex,
"Inclusive ERG"=cscfit.inc),
data=astest, formula=Hist(time,event)~1,
summary="ipa",se.fit=0L,metrics="brier",
contrasts=FALSE,times=3,cause=1)
X3
Metric Brier:
Results by model:
model times Brier IPA
1: Null model 3 24.5 0.0
2: Exclusive ERG 3 23.2 5.0
3: Inclusive ERG 3 20.2 17.5
NOTE: Values are multiplied by 100 and given in % (use print(...,percent=FALSE) to avoid this.
NOTE: The lower Brier the better, the higher IPA the better.
It is sometimes interesting to compare the predictor variables according to how much they contribute to the prediction performance. Generally, this is a non-trivial task which depends on the order in which the variables are entered into the model, the functional form and also on the type of model. However, we can drop one variable at a time from the full model (here from both cause-specific Cox regression models) and for each variable compute the loss in IPA as the difference between IPA of the full model and IPA of the model where the variable is omitted.
IPA(cscfit.inc,newdata=astest,times=3)
Variable times Brier IPA IPA.drop
1: Null model 3 24.5 0.0 17.5
2: Full model 3 20.2 17.5 0.0
3: age 3 20.1 18.0 -0.5
4: lpsaden 3 20.4 16.8 0.8
5: ppb5 3 20.4 16.5 1.1
6: lmax 3 21.4 12.6 4.9
7: ct1 3 19.8 18.9 -1.4
8: diaggs 3 20.8 14.8 2.8
9: erg.status 3 23.2 5.0 12.5
NOTE: Values are multiplied by 100 and given in % (use print(...,percent=FALSE) to avoid this.
NOTE: IPA.drop = IPA(Full model) - IPA.
## Footnotes:
1
Michael W Kattan and Thomas A Gerds. The index of prediction accuracy: An intuitive measure useful for evaluating risk prediction models. Diagnostic and Prognostic Research, 2(1):7, 2018.
2
Berg KD, Vainer B, Thomsen FB, Roeder MA, Gerds TA, Toft BG, Brasso K, and Iversen P. Erg protein expression in diagnostic specimens is associated with increased risk of progression during active surveillance for prostate cancer. European urology, 66(5):851–860, 2014.
3
Brice Ozenne, Anne Lyngholm S{\o }rensen, Thomas Scheike, Christian Torp-Pedersen, and Thomas Alexander Gerds. riskregression: Predicting the risk of an event using Cox regression models. R Journal, 9(2):440–460, 2017.
Last update: 04 Nov 2019 by Thomas Alexander Gerds.
|
2022-01-21 12:38:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5491563677787781, "perplexity": 8571.746813897287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303356.40/warc/CC-MAIN-20220121101528-20220121131528-00085.warc.gz"}
|
https://www.r-bloggers.com/2019/04/mapping-the-vikings-using-r/
|
The commute to my workplace is 90 minutes each way. Podcasts are my friend. I’m a long-time listener of In Our Time and enjoyed the recent episode about The Danelaw.
Melvyn and I hail from the same part of the world, and I learned as a child that many of the local place names there were derived from Old Norse or Danish. Notably: places ending in -by denote a farmstead, settlement or village; those ending in -thwaite mean a clearing or meadow.
So how local are those names? Time for some quick and dirty maps using R.
First, we’ll need a dataset of British place names. There are quite a few of these online, but top of my Google search was Index of Place Names in Great Britain (July 2016). It comes in several formats including CSV, easy to read into R like so:
library(tidyverse)
library(maps)

# read the ONS place-names CSV; the file name here is illustrative
gbplaces <- read_csv("IPN_GB_2016.csv")
A quick inspection of the data reveals that whilst there is a unique identifier, objectid_1, each row is not as such a unique place (the dataset is based on grid locations). We can reduce the number of rows a little by taking distinct(placesort, lat, long_), but that will still retain duplicate place names with slightly different coordinates. For our purposes, it doesn’t really matter – we just want an indication of distribution, rather than a highly-accurate map.
We’ll start by looking at places ending in -by. For this example, we’ll let the points themselves define the outline of Great Britain rather than drawing one. We’ll emphasise the -by places and try to de-emphasise the rest.
gbplaces %>%
  distinct(placesort, lat, long_) %>%
  mutate(isBy = ifelse(grepl("^.+by$", placesort), TRUE, FALSE)) %>%
  filter(lat > 40) %>% # not the territories!
  ggplot(aes(long_, lat)) +
  geom_point(aes(color = isBy, alpha = isBy), size = 0.5) +
  scale_colour_viridis_d(direction = -1, name = "ends in -by", option = "inferno") +
  scale_alpha_manual(values = c(0.3, 1)) +
  theme(axis.title = element_blank(),
        axis.text = element_blank(),
        axis.ticks = element_blank(),
        panel.grid = element_blank(),
        panel.border = element_blank()) +
  labs(title = "Distribution of GB place names ending -by") +
  guides(alpha = FALSE) +
  coord_map()

Here's the result – click for a larger version. Not bad. Lots of locations in Cumbria and eastern England. I like how the "plotting by points only" approach emphasises the empty mountainous regions in Scotland, Northern England and Wales.

Now we'll look at -thwaite. This time we'll use map_data() to pull an outline from the maps package.

# filter out N Ireland
ggplot(data = map_data("world", "UK") %>% filter(group != 3),
       aes(x = long, y = lat)) +
  geom_polygon(aes(group = group), fill = "darkolivegreen") +
  coord_map() +
  geom_point(data = gbplaces %>% filter(grepl("^.+thwaite$", placesort),
                                        lat > 40),
             aes(long_, lat),
             color = "yellow",
             size = 0.5) +
  theme(axis.title = element_blank(),
        axis.text = element_blank(),
        axis.ticks = element_blank(),
        panel.grid = element_blank(),
        panel.border = element_blank()) +
  labs(title = "Distribution of GB place names ending -thwaite")
Result below. We see that -thwaite is much more localised to Cumbria and parts of Yorkshire.
Summary
I find mapping languages quite fascinating, but of course it’s not an original idea. Here’s an interactive map of Norse-derived place names in the UK, developed for an exhibition at the British Museum. I’m sure there are many others.
If you want to put data on a map, R offers many options using base R, ggplot2 or interactive Javascript such as Leaflet. I think it’s never been quicker or easier to do.
|
2021-04-12 19:38:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.195048987865448, "perplexity": 2629.687522696802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069133.25/warc/CC-MAIN-20210412175257-20210412205257-00608.warc.gz"}
|
https://www.roseindia.net/answers/viewqa/Java-Beginners/16993-take-variables-in-text-file-and-run-in-java.html
|
# take variables in text file and run in java
I have a text file which have variables
17 10 23 39 13 33
How to take "17"in java code?
March 4, 2011 at 3:41 PM
import java.io.*;

// class name is illustrative
class ReadNumbers {
    public static void main(String[] args) throws Exception {
        File f = new File("C:/numbers.txt");
        BufferedReader br = new BufferedReader(new FileReader(f));
        String st = br.readLine();            // reads "17 10 23 39 13 33"
        br.close();
        String[] numbers = st.split("\\s+");  // split on whitespace
        System.out.println(numbers[0]);       // prints 17
    }
}
|
2022-05-18 22:09:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31468045711517334, "perplexity": 2259.079210532335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00059.warc.gz"}
|
http://koasas.kaist.ac.kr/handle/10203/45663
|
(A) haptic interface for simulation of catheter in gastrointestinal endoscopy = 소화기 내시경의 카테터 시뮬레이션을 위한 햅틱 인터페이스
The goal of this thesis is to develop a catheter haptic device for training in gastrointestinal endoscopy simulation. Catheter simulation forms a pair with the navigation simulation of the endoscope. The instrument channel port is the appropriate mounting position, since it avoids interference with the previously developed simulator for endoscopic navigation. The device volume and weight must be small for installation at this position. A passive braking method suits a small device and covers the force ranges of the catheter operations in ERCP and colonoscopy. The realized device consists of two mechanisms, a braking mechanism and a position-sensing mechanism. A solenoid is used as the actuator in the braking mechanism, and rotary encoders are used in the position-sensing mechanism; both are small, which helps reduce the overall size of the device. The realized device has a volume of 63 $\times$ 46 $\times$ 39 mm and a weight of 109 grams, shows a force output of 2 N ~ 7 N, and has a resolution of 0.02 mm. This covers the device requirements and is thus sufficient for the catheter haptic device developed in this research. The solenoid force is proportional to voltage, and the output force of the complete device is likewise proportional to voltage. Therefore, the force exerted on the user's hand can be regulated to a desired force by changing the voltage. The relative error between the desired force and the actual force is less than 10%. Additionally, the device can produce forces stronger than the target range, which can be used as an option to improve the simulation.
Lee, Doo-Yong (이두용), researcher
Publisher
한국과학기술원
Issue Date
2009
Identifier
308497/325007 / 020074104
Language
eng
Description
Master's thesis, KAIST, Department of Mechanical Engineering, 2009.2, [vi, 77 p.]
Keywords
catheter; haptic; medical simulation; colonoscopy; gastrointestinal endoscopy
URI
http://hdl.handle.net/10203/45663
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=308497&flag=t
Appears in Collection
ME-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
|
2017-08-22 18:32:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5208271145820618, "perplexity": 2167.396125446702}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886112539.18/warc/CC-MAIN-20170822181825-20170822201825-00333.warc.gz"}
|
https://eymaps.com/en/Inflow_(hydrology)
|
# Discharge (hydrology)
In hydrology, discharge is the volumetric flow rate of water that is transported through a given cross-sectional area.[1] It includes any suspended solids (e.g. sediment), dissolved chemicals (e.g. CaCO3(aq)), or biologic material (e.g. diatoms) in addition to the water itself. Terms may vary between disciplines. For example, a fluvial hydrologist studying natural river systems may define discharge as streamflow, whereas an engineer operating a reservoir system may equate it with outflow, contrasted with inflow.
## Theory and calculation
A discharge is a measure of the quantity of any fluid flow per unit time. The quantity may be either volume or mass. Thus the water discharge of a tap (faucet) can be measured with a measuring jug and a stopwatch. Here the discharge might be 1 litre per 15 seconds, equivalent to 67 ml/second or 4 litres/minute. This is an average measure. For measuring the discharge of a river we need a different method, and the most common is the 'area-velocity' method. The area is the cross-sectional area across a river, and the average velocity across that section needs to be measured for a unit time, commonly a minute. Measurement of cross-sectional area and average velocity, although simple in concept, is frequently non-trivial to carry out.
The units that are typically used to express discharge in streams or rivers include m3/s (cubic meters per second), ft3/s (cubic feet per second or cfs) and/or acre-feet per day.[2]
A commonly applied methodology for measuring, and estimating, the discharge of a river is based on a simplified form of the continuity equation. The equation implies that for any incompressible fluid, such as liquid water, the discharge $Q$ is equal to the product of the stream's cross-sectional area $A$ and its mean velocity $\bar{u}$, and is written as:
$Q = A\,\bar{u}$
where
• $Q$ is the discharge ([L³T⁻¹]; m³/s or ft³/s)
• $A$ is the cross-sectional area of the portion of the channel occupied by the flow ([L²]; m² or ft²)
• $\bar{u}$ is the average flow velocity ([LT⁻¹]; m/s or ft/s)
For example, the average discharge of the Rhine river in Europe is 2,200 cubic metres per second (78,000 cu ft/s); over the 86,400 seconds in a day this amounts to roughly 190,000,000 cubic metres (150,000 acre⋅ft) per day.
Because of the difficulties of measurement, a stream gauge is often used at a fixed location on the stream or river.
## Hydrograph
A stream hydrograph. Increases in stream flow follow rainfall or snowmelt. The gradual decay in flow after the peaks reflects diminishing supply from groundwater.
A hydrograph is a graph showing the rate of flow (discharge) versus time past a specific point in a river, channel, or conduit carrying flow. The rate of flow is typically expressed in cubic meters or cubic feet per second (cms or cfs).
It can also refer to a graph showing the volume of water reaching a particular outfall, or location in a sewerage network. Graphs are commonly used in the design of sewerage, more specifically, the design of surface water sewerage systems and combined sewers.
## Catchment discharge
Torrente Pescone, one of the inflows of Lake Orta (Italy).
The catchment of a river above a certain location is determined by the surface area of all land which drains toward the river from above that point. The river's discharge at that location depends on the rainfall on the catchment or drainage area and the inflow or outflow of groundwater to or from the area, stream modifications such as dams and irrigation diversions, as well as evaporation and evapotranspiration from the area's land and plant surfaces. In storm hydrology, an important consideration is the stream's discharge hydrograph, a record of how the discharge varies over time after a precipitation event. The stream rises to a peak flow after each precipitation event, then falls in a slow recession. Because the peak flow also corresponds to the maximum water level reached during the event, it is of interest in flood studies. Analysis of the relationship between precipitation intensity and duration and the response of the stream discharge are aided by the concept of the unit hydrograph, which represents the response of stream discharge over time to the application of a hypothetical "unit" amount and duration of rainfall (e.g., half an inch over one hour). The amount of precipitation correlates to the volume of water (depending on the area of the catchment) that subsequently flows out of the river. Using the unit hydrograph method, actual historical rainfalls can be modeled mathematically to confirm characteristics of historical floods, and hypothetical "design storms" can be created for comparison to observed stream responses.
The relationship between the discharge in the stream at a given cross-section and the level of the stream is described by a rating curve. Average velocities and the cross-sectional area of the stream are measured for a given stream level. The velocity and the area give the discharge for that level. After measurements are made for several different levels, a rating table or rating curve may be developed. Once rated, the discharge in the stream may be determined by measuring the level, and determining the corresponding discharge from the rating curve. If a continuous level-recording device is located at a rated cross-section, the stream's discharge may be continuously determined.
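As a concrete illustration of reading a discharge off a rating table (our sketch; the stage-discharge pairs below are hypothetical, not data from any real gauge):
import numpy as np

stage     = [0.5, 1.0, 1.5, 2.0]    # water level in m
discharge = [2.0, 8.0, 20.0, 40.0]  # discharge in m^3/s at each stage

# a measured level of 1.2 m gives an interpolated discharge
print(np.interp(1.2, stage, discharge))  # 12.8 m^3/s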
Larger flows (higher discharges) can transport more sediment and larger particles downstream than smaller flows due to their greater force. Larger flows can also erode stream banks and damage public infrastructure.
## Catchment effects on discharge and morphology
G. H. Dury and M. J. Bradshaw are two geographers who devised models showing the relationship between discharge and other variables in a river. The Bradshaw model describes how pebble size and other variables change from source to mouth, while Dury considered the relationships between discharge and variables such as stream slope and friction. These follow from the ideas presented by Leopold, Wolman and Miller in Fluvial Processes in Geomorphology[3] and from work on land use affecting river discharge and bedload supply.[4]
## Inflow and the Hydrologic Cycle
Visual description of Hydrologic Cycle
Inflow[5] is a process within the hydrologic cycle that helps maintain the water levels within all bodies of water.
The hydrologic cycle,[6] or water cycle, has no true starting point. However, it's easiest to start with the ocean, as the ocean holds the majority of Earth's water. The sun drives the hydrologic cycle: it warms the water and causes evaporation. Water evaporates into the air, and rising air currents carry the vapor into the atmosphere. Once the evaporated water reaches high enough in the atmosphere, it encounters cooler temperatures, which cause the vapor to condense into clouds.
Air currents can move clouds around the globe, but typically cloud particles collide and fall out of the sky as precipitation. Even though precipitation can fall in many forms and in many locations, most precipitation either ends up back in a body of water or on land as surface runoff.[7] A portion of the runoff enters streams and rivers, which over time lead back to the ocean. Another portion soaks into the ground as groundwater seepage and is stored in freshwater lakes.[8] The remaining portion soaks into the ground as infiltration; some of this water infiltrates deep into the ground and replenishes aquifers.[6]
So how does inflow play a role in the hydrologic cycle? Inflow is the addition of water to the different parts of the hydrologic system. Conversely, outflow is the removal of water from the hydrologic cycle. Inflow returns the water storage in the different parts of the hydrologic cycle to an even level, where water storage is the retention of water throughout the cycle. Because water movement is cyclical,[9] inflow, outflow, and storage are all aspects of the hydrologic cycle and are related by the water budget:
Inflow = Outflow ± Change in Storage[9]
## References
1. ^ Buchanan, T.J. and Somers, W.P., 1969, Discharge Measurements at Gaging Stations: U.S. Geological Survey Techniques of Water-Resources Investigations, Book 3, Chapter A8, p. 1.
2. ^ Dunne, T., and Leopold, L.B., 1978, Water in Environmental Planning: San Francisco, Calif., W.H. Freeman, pp. 257–258.
3. ^ L. B. Leopold, M. G. Wolman J. P. and Miller, Fluvial Processes in Geomorphology, W. H. Freeman, San Francisco, 1964.
4. ^ G. M. Kondolf, H. Piégay and N. Landon, "Channel response to increased and decreased bedload supply from land use change: contrasts between two catchments", Geomorphology, 45/1–2, pp. 35–51.
5. ^ "The Hydrologic Cycle | Freshwater Inflows". www.freshwaterinflow.org. Retrieved 2020-12-09.
6. ^ a b "Precipitation and the Water Cycle". www.usgs.gov. Retrieved 2020-12-09.
7. ^ DOC, NOAA. "Description of the Hydrologic Cycle". www.nwrfc.noaa.gov. Retrieved 2020-12-09.
8. ^ "Groundwater Flows Underground". www.usgs.gov. Retrieved 2020-12-09.
9. ^ a b "Water Research Center - Watershed and Water Resource Budgets". water-research.net. Retrieved 2020-12-09.
|
2022-12-06 10:16:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6426643133163452, "perplexity": 2857.4795565559984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711077.50/warc/CC-MAIN-20221206092907-20221206122907-00165.warc.gz"}
|
https://voer.edu.vn/c/electrical-potential-due-to-a-point-charge/0e60bfc6/21edf45f
|
Textbook
# College Physics
Science and Technology
## Electrical Potential Due to a Point Charge
Author: OpenStaxCollege
Point charges, such as electrons, are among the fundamental building blocks of matter. Furthermore, spherical charge distributions (like on a metal sphere) create external electric fields exactly like a point charge. The electric potential due to a point charge is, thus, a case we need to consider. Using calculus to find the work needed to move a test charge $q$ from a large distance away to a distance of $r$ from a point charge $Q$, and noting the connection between work and potential $\left(W = -q\Delta V\right)$, it can be shown that the electric potential $V$ of a point charge is
$V=\frac{kQ}{r}\quad \left(\text{Point Charge}\right),$
where $k$ is a constant equal to $9.0\times 10^{9}\ \text{N}\cdot \text{m}^{2}/\text{C}^{2}$.
The potential at infinity is chosen to be zero. Thus $V$ for a point charge decreases with distance, whereas $\mathbf{E}$ for a point charge decreases with distance squared:
$E=\frac{F}{q}=\frac{kQ}{r^{2}}.$
Recall that the electric potential $V$ is a scalar and has no direction, whereas the electric field $\mathbf{\text{E}}$ is a vector. To find the voltage due to a combination of point charges, you add the individual voltages as numbers. To find the total electric field, you must add the individual fields as vectors, taking magnitude and direction into account. This is consistent with the fact that $V$ is closely associated with energy, a scalar, whereas $\mathbf{\text{E}}$ is closely associated with force, a vector.
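As a quick numerical illustration of $V = kQ/r$ and of adding potentials as scalars, here is a minimal Python sketch (the charge/distance pairs are arbitrary examples, not values from the text):

```python
K = 8.99e9  # Coulomb constant, N·m²/C²

def potential(q, r):
    """Electric potential (in volts) of a point charge q (C) at distance r (m)."""
    return K * q / r

# Potentials from several point charges add as plain numbers (scalars),
# unlike electric fields, which must be added as vectors.
charges = [(-3.00e-9, 0.0500), (1.00e-9, 0.1000)]  # (charge, distance) pairs
v_total = sum(potential(q, r) for q, r in charges)
print(f"Total potential: {v_total:.1f} V")
```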
What Voltage Is Produced by a Small Charge on a Metal Sphere?
Charges in static electricity are typically in the nanocoulomb (nC) to microcoulomb (µC) range. What is the voltage 5.00 cm away from the center of a 1-cm-diameter metal sphere that has a $-3.00\ \text{nC}$ static charge?
Strategy
As we have discussed in Electric Charge and Electric Field, charge on a metal sphere spreads out uniformly and produces a field like that of a point charge located at its center. Thus we can find the voltage using the equation $V=kQ/r$.
Solution
Entering known values into the expression for the potential of a point charge, we obtain
$\begin{aligned} V &= k\frac{Q}{r}\\ &= \left(8.99\times 10^{9}\ \text{N}\cdot \text{m}^{2}/\text{C}^{2}\right)\left(\frac{-3.00\times 10^{-9}\ \text{C}}{5.00\times 10^{-2}\ \text{m}}\right)\\ &= -539\ \text{V}. \end{aligned}$
Discussion
The negative value for voltage means a positive charge would be attracted from a larger distance, since the potential is lower (more negative) than at larger distances. Conversely, a negative charge would be repelled, as expected.
What Is the Excess Charge on a Van de Graaff Generator
A demonstration Van de Graaff generator has a 25.0-cm-diameter metal sphere that produces a voltage of 100 kV near its surface. What excess charge resides on the sphere? (Assume that each numerical value here is shown with three significant figures.)
Strategy
The potential on the surface will be the same as that of a point charge at the center of the sphere, 12.5 cm away. (The radius of the sphere is 12.5 cm.) We can thus determine the excess charge using the equation
$V=\frac{\text{kQ}}{r}.$
Solution
Solving for $Q$ and entering known values gives
$\begin{aligned} Q &= \frac{rV}{k}\\ &= \frac{\left(0.125\ \text{m}\right)\left(100\times 10^{3}\ \text{V}\right)}{8.99\times 10^{9}\ \text{N}\cdot \text{m}^{2}/\text{C}^{2}}\\ &= 1.39\times 10^{-6}\ \text{C} = 1.39\ \text{µC}. \end{aligned}$
Discussion
This is a relatively small charge, but it produces a rather large voltage. We have another indication here that it is difficult to store isolated charges.
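The inverse calculation above is just as direct in code; a minimal sketch reproducing this example's numbers (assuming the same constant $k$):

```python
K = 8.99e9  # Coulomb constant, N·m²/C²

def charge_from_potential(v, r):
    """Solve V = kQ/r for Q: the charge (C) producing potential v (V) at radius r (m)."""
    return r * v / K

q = charge_from_potential(v=100e3, r=0.125)
print(f"Excess charge: {q:.2e} C")  # ≈ 1.39e-06 C, i.e. 1.39 µC
```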
The voltages in both of these examples could be measured with a meter that compares the measured potential with ground potential. Ground potential is often taken to be zero (instead of taking the potential at infinity to be zero). It is the potential difference between two points that is of importance, and very often there is a tacit assumption that some reference point, such as Earth or a very distant point, is at zero potential. As noted in Electric Potential Energy: Potential Difference, this is analogous to taking sea level as $h=0$ when considering gravitational potential energy, $\mathrm{PE}_g = mgh$.
# Section Summary
• Electric potential of a point charge is $V=kQ/r$.
• Electric potential is a scalar, and electric field is a vector. Addition of voltages as numbers gives the voltage due to a combination of point charges, whereas addition of individual fields as vectors gives the total electric field.
# Conceptual Questions
In what region of space is the potential due to a uniformly charged sphere the same as that of a point charge? In what region does it differ from that of a point charge?
Can the potential of a non-uniformly charged sphere be the same as that of a point charge? Explain.
# Problems & Exercises
A 0.500 cm diameter plastic sphere, used in a static electricity demonstration, has a uniformly distributed 40.0 pC charge on its surface. What is the potential near its surface?
144 V
What is the potential $0.530\times 10^{-10}\ \text{m}$ from a proton (the average distance between the proton and electron in a hydrogen atom)?
(a) A sphere has a surface uniformly charged with 1.00 C. At what distance from its center is the potential 5.00 MV? (b) What does your answer imply about the practical aspect of isolating such a large charge?
(a) 1.80 km
(b) A charge of 1 C is a very large amount of charge; a sphere of radius 1.80 km is not practical.
How far from a $1.00\ \text{µC}$ point charge will the potential be 100 V? At what distance will it be $2.00\times 10^{2}\ \text{V}$?
What are the sign and magnitude of a point charge that produces a potential of $-2.00\ \text{V}$ at a distance of 1.00 mm?
$-2.22\times 10^{-13}\ \text{C}$
If the potential due to a point charge is $5.00\times 10^{2}\ \text{V}$ at a distance of 15.0 m, what are the sign and magnitude of the charge?
In nuclear fission, a nucleus splits roughly in half. (a) What is the potential $2.00\times 10^{-14}\ \text{m}$ from a fragment that has 46 protons in it? (b) What is the potential energy in MeV of a similarly charged fragment at this distance?
(a) $3.31\times 10^{6}\ \text{V}$
(b) 152 MeV
A research Van de Graaff generator has a 2.00-m-diameter metal sphere with a charge of 5.00 mC on it. (a) What is the potential near its surface? (b) At what distance from its center is the potential 1.00 MV? (c) An oxygen atom with three missing electrons is released near the Van de Graaff generator. What is its energy in MeV at this distance?
An electrostatic paint sprayer has a 0.200-m-diameter metal sphere at a potential of 25.0 kV that repels paint droplets onto a grounded object. (a) What charge is on the sphere? (b) What charge must a 0.100-mg drop of paint have to arrive at the object with a speed of 10.0 m/s?
(a) $2.78\times 10^{-7}\ \text{C}$
(b) $2.00\times 10^{-10}\ \text{C}$
In one of the classic nuclear physics experiments at the beginning of the 20th century, an alpha particle was accelerated toward a gold nucleus, and its path was substantially deflected by the Coulomb interaction. If the energy of the doubly charged alpha nucleus was 5.00 MeV, how close to the gold nucleus (79 protons) could it come before being deflected?
(a) What is the potential between two points situated 10 cm and 20 cm from a $3.0\ \text{µC}$ point charge? (b) To what location should the point at 20 cm be moved to increase this potential difference by a factor of two?
Unreasonable Results
(a) What is the final speed of an electron accelerated from rest through a voltage of 25.0 MV by a negatively charged Van de Graaff terminal?
(a) $2.96\times 10^{9}\ \text{m/s}$ (this exceeds the speed of light; the nonrelativistic formula fails at such high energies, which is what makes the result unreasonable)
|
2020-10-22 20:07:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 40, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7266874313354492, "perplexity": 374.93119948801166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880038.27/warc/CC-MAIN-20201022195658-20201022225658-00204.warc.gz"}
|
https://openresearchsoftware.metajnl.com/articles/10.5334/jors.cb/print/
|
## (1) Overview
### Introduction
The programming philosophy underlying the software package DiracQ is inspired by P. A. M. Dirac's notation of c-numbers and q-numbers, denoting objects that are analogous to regular numbers and to non-commuting quantum operators [1]. Dirac's notation pervades much of theoretical physics and underlies its characteristic informality of syntax, relative to the formal mathematical literature. The package DiracQ permits user-defined expressions to be similarly informal combinations of (symbolic) commuting variables and non-commuting operators. DiracQ consists of functions designed to extend and complement the abilities of Mathematica to perform manipulations with non-commuting quantum operators. Once loaded within Mathematica, the package DiracQ allows the user to perform algebraic operations with the most frequently encountered quantum objects. The package enables the user to evaluate commutators, anticommutators, or products of expressions, and to manipulate and often greatly simplify the resulting expressions.
In summary, DiracQ works with expressions in a fashion that is very close to a theoretical physicist's "natural" way of formulating quantum problems. This feature makes it especially easy for physicists to use and can be very advantageous in pedagogical settings, where it may help students concentrate on physics-related logic instead of the specifics of programming.
### An Elementary Example
We now provide a simple example of Mathematica input and output, to demonstrate the motivation and usage of the package. In this example, we will take a single spin-1/2 particle in an arbitrary magnetic field. The Zeeman Hamiltonian of this system is given by
H = A σx + B σy + C σz    (1)
where A, B, and C are the components of the magnetic field in the three Cartesian directions in suitable units, and {σx, σy, σz} is the set of the usual Pauli matrices. By a specific rotation of the Cartesian axes, H can be diagonalized. However, if we are interested only in the eigenvalues of H, and not its eigenfunctions, we can avoid the diagonalization altogether. A simple shortcut exploits two properties of the Pauli matrices: (a) for any component j, (σj)² = e, the identity matrix, and (b) the anticommuting property {σi, σj} = 0 for distinct i, j. Therefore, by squaring H we obtain the identity times the square of the net field. This is easily done by hand and yields the net field $h=\sqrt{{A}^{2}+{B}^{2}+{C}^{2}}$, so that the eigenvalues are h and −h. Let us next see how this problem is done using DiracQ. We define this Hamiltonian in Mathematica using the input below.
Note that H is a typical mix of commuting (A, B, C) and non-commuting (Pauli matrix) objects. The standard Mathematica function NonCommutativeMultiply (**) is not very useful for evaluating the square of the Hamiltonian; H**H leads to a sum over nine terms of the type (A σ[i, x])**(B σ[i, y]). This output can be further processed by the standard Mathematica function Simplify, which gives back the same result. One would like to implement further rules that declare A and B to be commuting objects that can be moved to any position in the product, while the ordering and simplification should only affect the product of Pauli matrices σ[i, x]**σ[i, y]. This is achieved in DiracQ, where one instructs the program that A, B, and C are c-numbers, whereas the Pauli matrices are q-number operators with their well-known simplification rules. With these preliminaries, the expression is immediately simplified to the correct answer.
This rather elementary example illustrates DiracQ's ability to distinguish and separate c-numbers from q-numbers, and to apply the special algebraic properties of quantum operators. The problem at hand is trivial and hardly requires symbolic computation. However, similar tasks involving many copies of the Pauli matrices can compound to unmanageable proportions and require greater processing power.
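DiracQ itself is Mathematica software, but the algebraic fact the example relies on, namely that H² collapses to (A² + B² + C²) times the identity, is easy to verify independently. The following is a minimal sympy sketch with explicit 2×2 Pauli matrices (this is not DiracQ code and uses none of its functions):

```python
from sympy import I, Matrix, eye, symbols

A, B, C = symbols("A B C", real=True)

# Explicit 2x2 Pauli matrices
sx = Matrix([[0, 1], [1, 0]])
sy = Matrix([[0, -I], [I, 0]])
sz = Matrix([[1, 0], [0, -1]])

H = A * sx + B * sy + C * sz   # the Zeeman Hamiltonian of Eq. (1)
H2 = (H * H).expand()          # cross terms cancel by anticommutation

# H^2 = (A^2 + B^2 + C^2) * identity, so the eigenvalues of H
# are +/- sqrt(A^2 + B^2 + C^2), as stated in the text.
assert H2 == ((A**2 + B**2 + C**2) * eye(2)).expand()
print(H2)
```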
### Problems involving Fermions
DiracQ recognizes most common quantum operators and knows their algebraic properties. For example, when instructed by the user, the package will recognize f[i] and f†[j] to represent the Fermi annihilation and creation operators indexed by i and j, representing sites on a lattice. Functions of the package will utilize their algebraic properties, such as product rules and anticommutators appropriately, or as overridden/directed by the user. Input expressions can include standard summation notation using multiple summation indices. Summation indices interact appropriately with Kronecker delta functions in user input, or those that arise during evaluation or simplification of expressions.
Many important problems in quantum many body physics require the diagonalization of standard models, such as the Hubbard model. These models are defined on varying lattices with Fermi operators assigned to each site, and often with different spin or flavor indices. A typical numerical application requires setting up and diagonalizing the Hamiltonian matrix within a subspace defined by a fixed number of particles. While the diagonalization of a numerical matrix is a standard problem in numerical analysis, where much progress has been made, we are interested in the other end of the problem: setting up the matrix. Here the physicist is expected to produce the numerical matrix starting from the abstract Hamiltonian on an appropriate lattice. This is often a tedious and error prone procedure. DiracQ efficiently handles this aspect of the problem. The example notebook provided in the DiracQ package distribution folder shows how to construct such a matrix in a typical case. For a small cluster of four sites, the Hubbard Hamiltonian and the basis states within the Fock space are set up, and the numerical matrix written out at the end. This procedure is easily extended to larger lattices and to other models. The eigensystem of the resulting matrix can be computed either within Mathematica itself, or if required, in a suitable external program.
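To make "setting up the matrix" concrete, here is an independent numpy sketch of the standard Jordan-Wigner construction for a small spinless chain (a toy illustration only, not DiracQ's actual API or output; the full Hubbard model would also carry spin indices and an interaction term):

```python
import numpy as np

c_  = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-site annihilation, basis {|0>, |1>}
id2 = np.eye(2)
jw  = np.diag([1.0, -1.0])                # Jordan-Wigner sign string

def kron_chain(mats):
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def f(site, n):
    """Fermi annihilation operator for `site` on an n-site chain."""
    return kron_chain([jw] * site + [c_] + [id2] * (n - site - 1))

n, t = 4, 1.0
# Spinless nearest-neighbor hopping: H = -t * sum_i (f_i^dag f_{i+1} + h.c.)
H = sum(-t * (f(i, n).T @ f(i + 1, n) + f(i + 1, n).T @ f(i, n))
        for i in range(n - 1))

# The anticommutator {f_0, f_0^dag} must be the identity on the 2^n Fock space.
acomm = f(0, n) @ f(0, n).T + f(0, n).T @ f(0, n)
assert np.allclose(acomm, np.eye(2 ** n))

print(np.linalg.eigvalsh(H)[:4])  # lowest few eigenvalues of the 16x16 matrix
```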
Users can not only use the predefined operators in DiracQ, but also define additional operators and provide their algebraic properties to DiracQ. Such operators and their algebraic properties will be recognized and implemented by all functions of the package. In this way DiracQ provides a new language for formulating algebraic quantum problems.
The DiracQ package distribution folder includes demonstration problems involving popular systems in statistical mechanics and many-body physics. For example, we reproduce some of the crucial algebra contained in the seminal paper by R. J. Baxter on the integrability of the 8-vertex model [2], and B. S. Shastry's analogous proof of the integrability of the 1-dimensional Hubbard model [3], and its later extensions [4]. A recent work by Bukov et al. illustrates the use of the DiracQ package for the evaluation of commutators in the problem of high-frequency periodically driven systems [5].
### Summary
The DiracQ package should find application in any research that involves manipulations of long strings of non-commuting operators. We typically expect such manipulations to arise in quantum condensed matter physics, quantum statistical mechanics, quantum field theory, and nuclear physics, and also in some problems of quantum chemistry.
### Implementation and architecture
The goal of our project was to develop a library of functions that would enable users to perform algebraic manipulations of expressions that include non-commuting operators as well as commuting numbers. The functions of the package all operate within the same underlying framework. User input expressions are first separated into individual components. Input expressions are broken into individual non-commuting operators, commuting symbols, numbers, and summation indices. These components of an input expression are stored in a nested list organized according to the type of objects found in the input expression. All functions of the package utilize this organizational framework for manipulation and combination of expressions. After manipulation, the individual components of an expression are recombined to yield a result in familiar notation.
The package therefore contains two sets of functions: those functions whose purpose is decomposing input expressions into nested lists or composing output expressions from nested lists, and functions that users call to perform manipulations which rely on the former functions. The package is extensible in that users can relatively easily write new functions to manipulate expressions utilizing the foundational organization system.
### Quality control
Each function of the package has been tested individually to ensure that the algebraic manipulations carried out are correct. Combinations of functions have been tested with known examples. Series of manipulations using a large number of functions have been carried out to ensure they reproduce known results. A notebook supplied in the package distribution folder provides several examples of the package's use and demonstrations of the functions of the package being used to obtain non-trivial known results.
## (2) Availability
### Operating system
Any system capable of running Mathematica 8 or higher.
### Programming language
Mathematica 8 or higher.
### Dependencies
Mathematica 8 and higher.
### Software location
Archive

Code repository: GitHub

### Language

English.
## (3) Reuse potential
The DiracQ package will find use in any research that requires the algebraic manipulation of expressions containing non-commuting quantum operators, especially in settings where the expressions or manipulations are particularly large or complex.
## Competing Interests
The authors declare that they have no competing interests.
|
2022-10-06 11:44:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5897551774978638, "perplexity": 828.9123454375053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00754.warc.gz"}
|
https://mjsaldanha.com/posts/music-normalization
|
How to Normalize Your Music Library
November 15th 2019
Aimed at those who love their music playlists like I do, this article gives some hints on how to normalize your music library so that songs have a uniform loudness. In this way, ideally you would be able to listen to the whole playlist without touching the volume button even once.
I had wanted to normalize my playlist for a long while, but two things inhibited me: first, most off-the-shelf solutions out there miss some feature I need; second, many of them did not make songs as uniform as I wanted. So let's first point out some requirements we will impose on potential solutions:
1. Songs must keep their artwork after being processed.
2. The numerical samples of a song must be modified in a simple, uniform way. By this we mean that each sample or each window of samples is transformed by the same function $f(x)$.
3. The normalization must be done in a relatively simple way, so that in a worst case scenario we can implement it ourselves.
Note that these requirements are actually personal preferences. However, even if you disagree with any of them, I'm sure you still can make a lot of use of what is discussed in the following.
Some Definitions
A digital music file is composed of sequences of numbers, often signed 16-bit integers. The file might contain more than 1 such sequence, called streams in the digital music field. For example, stereo audio has 2 sequences, one for each side of your headphones, and surround sound has 5 or more streams to play in each loudspeaker in your living room.
These numbers represent oscillations of the diaphragm (https://en.wikipedia.org/wiki/Loudspeaker#Diaphragm) of the loudspeaker or the microphone. When you speak or play some music, you cause perturbations in air pressure, which reach the microphone's diaphragm and cause it to shake, which in turn generates voltage that is discretely recorded as a sequence of numbers. For loudspeakers, the difference is that these numbers are received and converted to voltage, which causes the diaphragm to move and produce pressure changes in the air, generating sound.
Normalization Methods
The first normalization method that comes to mind is to multiply the song samples by a factor that makes the largest sample reach the maximum value the sample format can represent, that is, peak normalization.
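A minimal numpy sketch of that idea, peak normalization, looks like this (it assumes the samples are already decoded into a float array in [-1, 1]; decoding, artwork preservation, and re-encoding are separate steps not shown):

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_peak: float = 1.0) -> np.ndarray:
    """Scale all samples by one constant so the loudest sample reaches target_peak.

    This is exactly the uniform transform f(x) = g*x required by rule 2:
    every sample is multiplied by the same gain g, so the waveform's shape
    (and relative dynamics) is unchanged.
    """
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples  # silent track: nothing to scale
    return samples * (target_peak / peak)

# Toy signal whose peak is 0.25; after normalization the peak is 1.0.
x = 0.25 * np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
y = peak_normalize(x)
print(np.max(np.abs(x)), np.max(np.abs(y)))
```

Note that this equalizes only the largest sample of each song, not its perceived loudness.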
|
2021-10-24 23:56:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25138401985168457, "perplexity": 809.705620045746}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00630.warc.gz"}
|
http://rpg.stackexchange.com/questions/10222/a-story-focused-rules-lite-fantasy-friendly-classless-rpg
|
# A story-focused, rules-lite, fantasy-friendly, classless RPG? [closed]
I've been considering starting up tabletop RPGing again after a 13-year pause. Back when I played, I mainly used Advanced Dungeons & Dragons (second edition). In retrospect I think the AD&D system was far too clunky, with completely unnecessary rules and tables for all kinds of situations. I think it was a massive hindrance to storytelling, at least for me.
So this time round I'm looking for an RPG that is:
1) Story-focused. The adventurers should never be confined to a certain subset of allowed actions. They should be able to do whatever they wish and the rules need to be flexible enough to deal with that. The focus should be on a compelling story, not tactical combat, dungeon-crawling or levelling. However neither do I want a system that is pure storytelling - i.e. where absolutely everything is at the gamesmaster's discretion. I don't want to decide the outcome of every fight.
2) Suited to high fantasy. I'm planning on creating my own high fantasy setting, so it needs to be able to work with that (whether because it's completely flexible or because it's specifically focused on high fantasy). Guidance on how to deal with magic is a bonus.
3) Has rules which can be understood within an hour of reading the rulebook. I don't want to ever be scrambling through the rulebook to see what happens in such-and-such a situation, or trying to remember what special bonuses apply in a particular dice roll.
4) Isn't class-based. I want a system with complete flexibility about how the adventurers develop. Any system that requires they choose a class (e.g. mage, warrior, thief) is a no-no.
5) Ideally Free. Not because I'm in poverty, but because I think commercial systems have an inbuilt bias towards unnecessary rules, because it makes it easier to sell things (through expansions and new editions). This is the one criterion that isn't essential.
UPDATE: I'm going to go with FU, as it's got some gushing reviews, is ultra-lite and all the rules directly encourage narrative creativity and excitement. But keep the recommendations coming in as, if my first session goes well, I will investigate all other promising alternatives and consider how they might improve on the experience. Most of the systems recommended here are on my to-do list.
-
## closed as not constructive by Pat Ludwig Dec 22 '11 at 7:04
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. If this question can be reworded to fit the rules in the help center, please edit the question.
I would like to remind answerers of the quality guidelines for game-rec questions explained here: meta.rpg.stackexchange.com/questions/1070/… – mxyzplk Sep 29 '11 at 13:46
The link to "FU" is broken. Whatever FU is. – DCShannon Apr 20 at 18:05
I would suggest FATE as it is story-focused, rules-lite, and has no classes per se, as everything is skill-based. Adapting it to a fantasy setting should not take much time. And it is free as a bonus.
Personally, I think that it is one of the simplest systems there is, even if the SRD is more than 200 pages long. That just gives you lots of examples and clarifications.
-
FATE is, in no way, one of the simplest systems there is. The SotC SRD is 200+ pages long. – mxyzplk Sep 30 '11 at 3:44
I concur. I recently finished a relatively long DFRPG campaign and FATE is a great system. But I wouldn't classify it as light. I would say that it offers a great balance of crunch and story-driven mechanics. I love it. There's another FATE Fantasy Legends of Anglerre you might check if you decide to go with FATE. I won't make it an answer as I haven't played it and don't own it. – gomad Sep 30 '11 at 21:28
While the SotC SRD is 200+ pages long the entire game without examples and advice can fit on a book mark. I can say this with confidence as I was given a book mark that did just that. docs.google.com/… – shaneknysh Oct 4 '11 at 2:52
# Barbarians of Lemuria
This rules light system is focused on sword-and-sorcery. It's a traditional approach, simple to play, and well respected.
It's very straightforward. It's also inexpensive and very well regarded.
# John Wick's Houses of the Blooded and Blood and Honor
While recommending a 436 page tome may seem contraindicated, I'm going to recommend Houses of the Blooded, (and its much smaller sibling, Blood and Honor; only 186 pp.)
Both games have a very straightforward, story oriented set of mechanics. Both are very flexible, very streamlined. And very different from AD&D.
Both can support both social and dungeon adventuring.
The task mechanics can be summed up in a single large type page. You can find my cheat sheet for B&H at my website.
Oh, and the PDF is only $5 for either one...
-
I looked into Barbarians of Lemuria but decided against it in the end, since the setting it specifically caters for won't appeal to my intended first player. – lumpkin Oct 1 '11 at 19:40
# Dungeon World
Dungeon World is a conversion of one of the best, most exciting new RPGs to appear in the last few years, from a designer who constantly pushes the form forward. It's the fantasy version of Vincent Baker's Apocalypse World. Let's take your finalized criteria one at a time:
1) Story-focused: Apocalypse World (and thus, Dungeon World), like Baker's other games (Dogs in the Vineyard, Poison'd, etc.), is a monumentally story-focused game. The game is built around "moves" - mechanics with defined triggers and results. This may sound limiting, but because these are story-condition triggers and story-results, they're not. It's not as if a move says, "When within 5 feet of a secret door, you may roll to detect it" or something that focused. Let me give you an example from AW (because that's the one I have at hand; I played Dungeon World at GenCon, but they were sold out by the time I knew I deeply needed that game):
Going aggro means using violence or the threat of violence to control somebody else's behavior, without (or before) fighting. So whenever somebody uses threats or physical means to force someone to do (or not do) something, that's going aggro.
See how that's a story-condition trigger? Similarly, the rewards for success are defined as story-results like "they have to give you what you want or escalate the situation". These moves are instructions for constructing stories! They're story-legos that all snap together to build awesome stuff!
...I don't want to decide the outcome of every fight: The Apocalypse World system has an awesome and deadly combat system with plenty of room for storytelling and drama. In Dungeon World, even character death comes with collaborative storytelling. It can be a powerful moment, not just something that ticks off a player.
2) Suited to high fantasy: Dungeon World is designed for high fantasy. But it's not particularly setting-bound. It's meant, like AW, to be your story in your vision of that genre. There is already awesome, dangerous magic in the world. I wasn't a mage when I played, so I didn't get to see exactly how it worked, but it was cool.
3) ...can be understood within an hour of reading the rulebook: There is a small collection of basic moves that everyone has, and each character sheet has the player's particular moves printed right on it. Your reference-space is very small.
4) Isn't class-based: OK. So...Dungeon World fails here. But if you dislike classes because of D&D, at least go check Dungeon World out before you say no. Playbooks in Dungeon World aren't really about shunting character development into a few, well-defined channels. They're about enabling players to have well-defined roles in the story, and to be differentiated in how they are awesome. Freedom of progression is part of Apocalypse World - you can get moves from other playbooks as you develop. But I'm not sure about DW because I only played a 1-shot. Seriously. Don't eliminate DW over this issue without checking it out.
5) Ideally Free...commercial systems have an inbuilt bias towards unnecessary rules...to sell things: This is an absolutely unfounded fear in this case. Part of the genius of Apocalypse World is how it stripped RPGs down to the bone, with a brilliant take on what was necessary. Dungeon World follows in this vein.
Additionally, Dungeon World began as a free supplement to Apocalypse World, which fully supports hacking the system, including hosting hacks on the forum site. So weighing you down with rules so they can sell you clarifications is not what this game and its ecosystem are about.
-
Just a word on the setting. In DW, you and the DM should be ready to let the other players define the setting as much as you. "Hey Master, are there Goblins in this vale?" "I don't know. Are they there?" is a common mechanism of this game. Don't define the world too much; leave space for surprises coming from your fellow players. – Zachiel Nov 22 '12 at 12:54
One game that I have played recently and really liked was Rêve de Dragon. The original is French. There is one core mechanic to resolve most actions. There's a bunch of attributes and a bunch of skills, no classes. The whole setting is very much geared towards narrative episodic-style playing. I really enjoyed the one session that I played and admit I've been looking for a similar, perhaps more recent, English-based game like that.
-
+1 Wow, I am not the only one to have heard of that awesome game... – Sardathrion Oct 12 '11 at 12:29
I don't know about the English version, but the French one is not famous for having lightweight rules... – Guillaume Jun 11 '12 at 12:55
There is Awesome Adventures which I already recommended. It's semi-classic and rather simple with nice character-to-story integration. Since you did not specify that much, there are two very generic games you might want to look at:
• The Pool, which is free, super-fast, super-flexible and uses story-motives as ways to resolve situations and determine characters. It also gives the players a great deal of control. You might want to check out the variations of The Pool which move the game in different directions.
• Risus, which is the "Bier & Brezel" relative of The Pool, using rather satirical tropes to describe a character and have him do stuff.
-
I would highly suggest Mouse Guard. Though it is designed to allow people to play out characters in the Mouse Guard world, it can easily be used in any setting. It is super simple, quite elegant and shares the responsibility of storytelling across the players and the narrator. You can find it here on Amazon: http://www.amazon.com/Mouse-Guard-Roleplaying-Game-Crane/dp/1932386882/ref=sr_1_4?ie=UTF8&qid=1317272728&sr=8-4
Something to add: I know this game isn't free, but it just has the one book. The book is beautiful too. I noted your main problem was book proliferation. Mouse Guard doesn't do that.
-
One word of caution on MG - it is very precisely worded. Run it straight for a while before modding; some of the interactions are inobvious. It's an excellent game, tho'. The only reason I didn't include it in my answer is that its complexity is hidden. Oh, and don't think of it as BW lite - it's more different than it would first appear. – aramis Sep 29 '11 at 5:36
If MG is anything like BW, then the whole "lite" thing is out the window. – gomad Sep 30 '11 at 20:31
@gomad By comparison to FATE (SOTC, Diaspora, Legends of Anglierre, Starblazer, Dresden Files), BW, BE, and even all versions of D&D, MG plays as a light game, despite 300 pages in the rulebook. And while it is based upon the same core test mechanic as BW, and the same skill ranges, it's truly a MUCH simpler game to learn and to run. Print's larger, too. – aramis Oct 2 '11 at 9:57
@aramis - huh. I thought MG was supposed to be just as hard as BW. You may have just sold a MG boxed set... – gomad Oct 2 '11 at 21:13
There's also a LotR hack for MG too. – Pureferret Dec 20 '11 at 1:04
I've also gotten disgruntled with the huge rules content of many games nowadays, so have some recommendations for you.
The Microlite family of games can be a good bet, because they are free, super-stripped-down versions of games you may already know. Microlite20 was the first, providing a 2-page super-slimmed-down version of the D&D-derived d20 ruleset, but a lot of other folks caught the bug, resulting in a lot of different more or less complex d20 variants (Microlite Purest Essence is 17 pages but contains about everything d20 does in slimmed-down form), Microlite74 a 0e clone, Microlite Storyteller, Microlite Star Trek... It combines familiarity (and maybe using other stuff you might own with it) with ripping out all the cruft. I really like Microlite20 for "let's play D&D and not have to argue about rules ever," without some of the old "removed for a good reason" game artifacts the retroclones like to reinsert. But if you have a favorite trad system, there may be a Microlite version that'll let you do it with less rules and more story.
Bare bones games - Risus is an oldie but goodie. The Window was another attempt. There were a lot of ones like this; if you want, browse some of the major free RPG collections like John Kim's old but comprehensive one, or the Free RPG Blog, which promotes various free RPGs, often written as part of online contests. I played Risus back in the day on car trips and occasionally experiment with one of the 1KM1MT games, but I'll be honest, they never hold my attention for more than a one-shot.
For a more story-oriented short game, there is stuff like PDQ Sharp, which I've played as the system for a Swashbucklers of the Seven Skies game. At 26 pages that's starting to get on the edge of "one hour" (and stuff like FATE may be over that line, at more than 200 pages) but these systems try to have enough story-focused rules content to sustain some interestingness without needing rules lookups in play. PDQ Sharp is fun but suffers, ironically, from some of the gamism that lets people try to stack their aspects/descriptors/tags/whatever the game system calls them in every action.
-
# Savage Worlds
If you're looking to get into a rules-light RPG, I would recommend you at least take a look at Savage Worlds. I used it to run a Firefly game after I got tired of the cracks and sharp edges in the official game, and I think it worked out well. It is well supported, even on this site, fast, and flexible. It used to be cheap - the Explorer's Edition was $10. I'm not sure the state of that anymore, with the new Deluxe Edition coming out so soon.
-
Be wary of the step-die system. While Savage Worlds does mollify some of the complaints of a step die system (by open ending all the dice, and reading best of 2 for PCs), step die systems tend to be "Love it!" or "HATE IT!!!"... – aramis Oct 2 '11 at 9:59
@aramis - Yup! And all of those systems could be smoothed out by electronic RNGs, or a collection of Zocchi dice to extend to d14, d16, etc.. But that's why I said, "take a look at" because it's got a lot of good features for the questioner's needs. – gomad Oct 2 '11 at 21:12
## Dresden Files
Dresden files RPG is a very good example of the, FATE system in a high fantasy setting. In my opinion, it has the best approach to magic/supernatural powers among all the games I've seen, and the two books are good source of inspiration.
Although DF isn't free, FATE is.
-
Thanks for the info. I'm interested in how introduce a decent magic system at some point, so I will look into Dresden Files based on your recommendation. – lumpkin Oct 1 '11 at 19:30
Hmmm ... from the looks of it, this doesn't match the definition of high fantasy in my book. – Martin Jun 30 '12 at 12:37
# Minimus
The best minimalistic game I know of is Minimus. It's simple, elegant, and free, offering full rules and GM advice in 4 whole pages. I've run it a number of times and have been generally happy with the results.
It requires that GMs construct their own setting and genre elements, of course, but it offers an excellent framework for player-driven narrative through its "relationships, goals, and secrets" system. Players can ask "clarifying questions" as part of their skill use, introducing elements of interest, use, and narrative causality into a scene to support the tropes of their character.
-
You might want to check out XD 20. Very light on rules, very simple character generation. Not class-based, and not really even very experience-based, as characters don't really level up. I think there were all of 5 numbers total to write down on the character sheet. This causes things to get very centered on story and role playing. Basically, you roll a d20. The GM adds modifiers to the roll based on difficulty and how often you do that sort of thing. You want to roll higher than the stat involved on your character sheet (lower scores are better). And then you roll a d20 again to determine how well you succeeded (or how badly you failed). That's pretty much the entire system.
-
|
2016-07-23 21:23:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28737571835517883, "perplexity": 2012.5387289486505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823670.44/warc/CC-MAIN-20160723071023-00039-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/prove-if-a-b-there-is-an-irrational-inbetween-them.688858/
|
# Prove if a < b, there is an irrational inbetween them
1. Apr 30, 2013
### Zondrina
1. The problem statement, all variables and given/known data
No giving up !
The question : http://gyazo.com/08a3726f30e4fb34901dece9755216f3
2. Relevant equations
A lemma and a theorem :
http://gyazo.com/f3b61a9368cca5a7ed78a928a162427f
http://gyazo.com/ca912b6fa01ea6c163c951e03571cecf
The fact $\sqrt{2}$ and $2^{-1/2}$ are irrational.
3. The attempt at a solution
Suppose 0 < b - a. We must show that $\exists x \in ℝ - \mathbb{Q} \space | \space a < x < b$
Since 0 < b - a and $\sqrt{2} > 0$, we have $\frac{a}{\sqrt{2}} < \frac{b}{\sqrt{2}}$, so we can apply theorem 1.3 to find $r \in \mathbb{Q} \space | \space \frac{a}{\sqrt{2}} < r < \frac{b}{\sqrt{2}}$ because of the denseness of $\mathbb{Q}$. (We may also choose $r \neq 0$, since the interval contains infinitely many rationals.)
We know : $ℝ \setminus \mathbb{Q}$ is the set of irrationals.
Also, since $r \in \mathbb{Q}$, we can take $r = \frac{p}{q}$ for some $p, q \in \mathbb{Z}$ with $q \neq 0$ (and $p \neq 0$, since $r \neq 0$).
This yields $a < \frac{p \sqrt{2}}{q} < b$.
Therefore, because $\frac{p \sqrt{2}}{q} \in ℝ \setminus \mathbb{Q}$ (a nonzero rational times $\sqrt{2}$ is irrational, else $\sqrt{2}$ itself would be rational), the claim is proven true.
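For reference, here is a compact LaTeX restatement of the argument, with the nonzero-$r$ caveat made explicit (theorem numbering as in the attachment above):

```latex
\begin{proof}
Let $a < b$. Since $\sqrt{2} > 0$, we have $a/\sqrt{2} < b/\sqrt{2}$, so by the
density of $\mathbb{Q}$ (Theorem 1.3) there is a rational $r$ with
$a/\sqrt{2} < r < b/\sqrt{2}$; since the interval contains infinitely many
rationals, we may take $r \neq 0$. Multiplying through by $\sqrt{2}$ gives
$a < r\sqrt{2} < b$. If $r\sqrt{2}$ were rational, then
$\sqrt{2} = (r\sqrt{2})/r$ would be rational, a contradiction. Hence
$x = r\sqrt{2}$ is irrational and $a < x < b$.
\end{proof}
```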
EDIT : Fixed a small error.
Last edited: Apr 30, 2013
2. Apr 30, 2013
### micromass
Staff Emeritus
The proof is ok.
But, I really don't get this line:
3. Apr 30, 2013
### Zondrina
I was just trying to highlight that it was the set of irrationals.
Do you know how to write backslashes in latex? I was trying to write R\Q, but it treats '\' like an escape character.
4. Apr 30, 2013
### micromass
Staff Emeritus
OK, but the right-hand side you wrote down really isn't equal to the set of irrationals...
\setminus
5. Apr 30, 2013
### LeonhardEuler
Don't forget that numbers like $\pi$ and e are also irrational (among many others).
6. Apr 30, 2013
### Zondrina
$ℝ \setminus \mathbb{Q}$ Yay :)!
Hmm would general elements look like $c + d \sqrt{e}$ instead then? Trying to get a grasp on what the set elements look like.
7. Apr 30, 2013
### micromass
Staff Emeritus
There is no real good description of general elements of the irrational numbers. Elements like $c+d\sqrt{e}$ are still very special.
The reality is that the set of irrationals is huge. Most elements of the irrational numbers can't be explicitely described.
8. Apr 30, 2013
### VantagePoint72
There is no general way of writing the elements of the irrationals in the way you are trying to do. The transcendental numbers are a subset of the irrationals and they cannot, by definition, be expressed as the roots of polynomials with rational coefficients. We only know the general form of a few families of transcendental numbers. However, we know that almost all real (and complex) numbers are transcendental, so you've only captured a very tiny subset with your definition.
|
2017-08-23 02:54:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8064668774604797, "perplexity": 898.2985933758333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.82/warc/CC-MAIN-20170823020201-20170823040201-00300.warc.gz"}
|
https://repositorio.inesctec.pt/items/68428e7d-c5d6-469b-94e6-3cc71683aad5/full
|
Long Term Evaluation of Operating Reserve with High Penetration of Renewable Energy Sources
Authors: Armando Leite da Silva; Mauro Rosa; Manuel Matos
Date issued: 2011 (record available 2017-11-16)
Abstract: Due to the high penetration of renewable energy into the energy matrix of today's power networks, the design of generating systems based only on static reserve assessment does not seem to be enough to guarantee the security of power system operation. From the wind power integration perspective, this energy source imposes additional requirements, mainly due to the inherently unpredictable characteristic of the wind. Besides the uncertainties in load and generating unit availabilities, the operating reserve also needs to deal with the fluctuating characteristic of the wind power. Therefore, more flexibility of the conventional generators (hydro and thermal) is required to provide system support services. This paper discusses a new methodology based on chronological Monte Carlo simulation to evaluate the operating reserve requirements of generating systems with large amounts of renewable energy sources, in particular, wind power.
Identifier: http://repositorio.inesctec.pt/handle/123456789/2328
Language: eng | Relations: 214, 4660 | Rights: info:eu-repo/semantics/openAccess | Type: conferenceObject (Publication)
|
2022-08-08 22:09:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8159358501434326, "perplexity": 3074.5994553506807}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00701.warc.gz"}
|
https://xml.jips-k.org/full-text/view?doi=10.3745/JIPS.01.0030
|
# Speaker Verification with the Constraint of Limited Data
Abstract: Speaker verification system performance depends on the utterances of each speaker. To verify a speaker, important information has to be captured from the utterance. Under the constraint of limited data, speaker verification has become a challenging task: the testing and training data amount to only a few seconds each. The feature vectors extracted by single frame size and rate (SFSR) analysis are not sufficient for training and testing speakers in speaker verification. This leads to poor speaker modeling during training and may not provide good decisions during testing. The problem can be addressed by increasing the number of feature vectors extracted from the same duration of training and testing data. For that we use multiple frame size (MFS), multiple frame rate (MFR), and multiple frame size and rate (MFSR) analysis techniques for speaker verification under the limited data condition. These analysis techniques extract relatively more feature vectors during training and testing and so yield improved modeling and testing for limited data. To demonstrate this we use mel-frequency cepstral coefficients (MFCC) and linear prediction cepstral coefficients (LPCC) as features. A Gaussian mixture model (GMM) and GMM-universal background model (GMM-UBM) are used for modeling the speakers. The database used is NIST-2003. The experimental results indicate that MFS, MFR, and MFSR analysis perform radically better than SFSR analysis, and that LPCC-based MFSR analysis performs best among the analysis and feature extraction techniques considered.
Keywords: Gaussian Mixture Model (GMM) , GMM-UBM , Multiple Frame Rate (MFR) , Multiple Frame Size (MFS) , MFSR , SFSR
## 1. Introduction
Speech is one of the communication media between people [1] and can serve as a biometric to authenticate a person [2]. Recognizing a person from his or her vocal characteristics is called speaker recognition [3]. Speaker recognition comprises speaker verification and speaker identification [4]. Accepting or rejecting the identity claim of a speaker is called speaker verification [5]. Speaker verification is one of today's exciting technologies with very high potential [6]. Speaker verification under limited data conditions means verifying speakers with a small amount of training and testing data. Current speaker verification systems work very well with sufficient data. Sufficient data refers to speech data of a few minutes (greater than 1 min), while limited data means speech data of a few seconds (less than 15 sec). Speech data can be analyzed with different techniques. The analysis techniques involve selecting a suitable frame size and frame shift to extract speaker-specific information [7]. The speaker-specific information comprises vocal tract and excitation source features, as well as suprasegmental features like intonation, duration, and accent [8]. State-of-the-art speaker verification systems use segmental, sub-segmental, and suprasegmental analysis techniques [7][9], as shown in Fig. 1.
Fig. 1. Speech analysis techniques.
In segmental analysis, speech is analyzed using a frame size (FS) and frame rate (FR) in the range of 10-30 ms to extract vocal tract information; this is known as single frame size and single frame rate (SFSR) analysis [9][10]. In sub-segmental analysis, an FS and FR in the range of 3-5 ms is preferred, because excitation source information varies more quickly than vocal tract information [7][9][11]. In suprasegmental analysis, speech is analyzed using an FS and FR in the range of 100-300 ms to capture behavioral aspects of the speaker [11][12][13]. Since the behavioral characteristics vary more slowly than the vocal tract information, a large frame size and shift can be used to capture this speaker-specific information [14].
Conventional speaker verification systems use features extracted with SFSR analysis. In SFSR analysis, the speech signal is windowed into an FS of 20-30 ms with an FR of 10-20 ms [15]. This is because speech signals are non-stationary and show quasi-stationary behavior only over short durations. One problem with SFSR is that the test speaker's speaking rate or pitch frequency sometimes does not match the speaker's data used during training [15][16]. Another problem is that a single frame size may not capture sudden changes in spectral information [16]. Finally, SFSR analysis may not provide sufficient feature vectors for training and testing the speakers under limited data [15].
In existing speaker verification under the limited data condition, both FS and FR are fixed throughout the experiment, and the number of extracted feature vectors is small. To overcome this problem, we need to synthetically increase the number of feature vectors by varying FS and FR. In variable frame size and rate (VFSR) analysis [17][18][19][20], the spectral information changes with time due to the change of FR; the disadvantage is that it needs additional computation time. To overcome the problems of computational complexity and of capturing sudden changes in the spectral information in time, we use the multiple frame size and rate (MFSR) analysis technique [15].
The remainder of the paper is structured as follows: the MFSR analysis technique for speaker verification is described in Section 2. Section 3 presents the speaker verification studies. Experimental results and discussion are presented in Section 4. The summary and conclusion, together with possible future directions for the present work, are given in Section 5.
## 2. Speaker Verification using MFSR Analysis
In case of SFSR, speech is analyzed with FS of 20 ms and with FR of 10 ms is considered. But in case of MFS speech is analyzed by varying FS and maintaining FR constant, in case of MFR, maintaining FS constant and FR is varied and in case of MFSR both FS and FR are varied.
##### 2.1 MFS Analysis
In MFS analysis, speech data is analyzed by varying the FS while keeping the FR constant. It is also called the multi-resolution analysis technique [15]. The feature vectors extracted from speech data with different FS, and their magnitude spectra, are noticeably dissimilar because of the different frequency resolutions [15]. The reason is that the spectral-domain information is obtained by convolving the spectral-domain window with the true spectrum of speech [10]. In addition, there will be a little variation in the speech samples covered by each FS. These two factors place the speaker information in different feature vectors. From this it is clear that, by varying FS, the speaker-specific information obtained in the feature vectors differs, and the spectral information also varies. The actual number of feature vectors (Nf) varies from speaker to speaker for the same quantity of speech data (DS) and is given by [15]
##### (1)
$$N_f = \left( \frac{DS - FS}{FR} \right) + 1 - N_{VAD}$$
The number of frames removed by the energy-based voice activity detection (VAD) technique is represented by $N_{VAD}$. In MFS feature extraction, the features are extracted with 4 different frame sizes FS = {5 ms, 10 ms, 15 ms, 20 ms}, keeping FR constant at 10 ms. The number of features extracted for the different frame sizes of a speaker can be represented by
##### (2)
$$N_f = \sum_{i=1}^{4} \left[ \left( \frac{DS - FS_i}{FR} \right) + 1 - N_{VAD} \right]$$
##### 2.2 MFR Analysis
In MFR analysis, speech data is analyzed by maintaining constant FS and varying different FR. We know that speaking rate and pitch varies for different speakers. Speaking rate depends on behavioral aspect of speaker and pitch is gifted characteristic of the excitation source. Pitch rate and speaking rate are different for different speakers and their spectral information is also different [21]. The single FR may not be possible to managing these variations. To overcome this problem, we need to analyze speech data for different FR and feature vectors extracted from vocal tract information will be different. Hence MFR analysis is also called multi-shifting analysis technique [15]. By varying FR spectral resolution remains same and there will be new set of feature vectors for each rate.
In MFR feature extraction, features are extracted with four different frame rates, FR = {2.5 ms, 5 ms, 7.5 ms, 10 ms}, keeping FS constant at 20 ms. The number of features extracted across the different frame rates is
##### (3)
$$N_f = \sum_{j=1}^{4} \left( \frac{DS - FS}{FR_j} + 1 - N_{VAD} \right)$$
##### 2.3 MFSR Analysis
In MFSR analysis, the speech data is analyzed using multiple FS and multiple FR, combining the MFS and MFR techniques and thus performing multi-resolution and multi-shifting analysis simultaneously [22]. The magnitude spectra differ between MFS and MFR, so the resulting feature vectors also differ from each other [15], and MFSR yields more feature vectors than SFSR, MFS, or MFR alone. Because the feature vectors come from both MFS and MFR analysis, they carry more speaker-specific information. In the present work, feature vectors are extracted for every combination of the four frame sizes {5, 10, 15, 20} ms with the four frame rates {2.5, 5, 7.5, 10} ms. The number of extracted features is given by
##### (4)
$$N_f = \sum_{i=1}^{4} \sum_{j=1}^{4} \left( \frac{DS - FS_i}{FR_j} + 1 - N_{VAD} \right)$$
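Continuing the sketch above, the MFSR feature count of Eq. (4) is simply the pooled total over every (FS, FR) combination:

```python
# MFSR: pool the frames from every (FS, FR) pair, as in Eq. (4)
frame_sizes = (5, 10, 15, 20)    # ms
frame_rates = (2.5, 5, 7.5, 10)  # ms
total = sum(len(frame_signal(signal, fs, fr))
            for fs in frame_sizes for fr in frame_rates)
print('pooled MFSR frames (before subtracting VAD-dropped frames):', total)
```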
To study the MFS, MFR, and MFSR analysis techniques, we use the mel-frequency cepstral coefficient (MFCC) and linear prediction cepstral coefficient (LPCC) feature extraction methods, with 13- and 39-dimensional feature vectors.
## 3. Speaker Verification Studies
##### 3.1 Speech Database for Current Work
In the present study, the NIST-2003 database is used to evaluate the performance of the speaker verification system [23]. The database contains 356 train and 2,556 test speakers; the train data comprise 149 male and 207 female speakers. It also provides universal background model (UBM) speech from 502 speakers, 251 male and 251 female. The duration of the speech data varies from seconds to a few minutes. Since this work addresses the limited data condition, we created train and test segments of three, four, five, six, nine, and twelve seconds for the present study. This derived database is used to conduct the experiments for the SFSR, MFR, MFS, and MFSR analysis techniques.
##### 3.2 Feature Extraction using MFCC and LPCC
State-of-the-art speaker verification systems widely use either MFCC or LPCC features [24]. Feature extraction captures speaker-specific information in the form of feature vectors [25], which are used extensively to verify the speaker [3]. The MFCC and LPCC techniques are used for feature extraction in the present work. In MFCC extraction, spectral distortion is minimized by applying a Hamming window, the Fourier transform of each windowed frame gives the magnitude frequency response, and the discrete cosine transform (DCT) of the mel filter outputs yields the cepstral coefficients.
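As a rough illustration of that pipeline (not the paper's exact implementation; the 24-filter mel bank, 256-point FFT, and 8 kHz rate are assumed values), a minimal MFCC front end could look like this:

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sample_rate):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc(frames, sample_rate=8000, n_filters=24, n_ceps=13, n_fft=256):
    windowed = frames * np.hamming(frames.shape[1])    # reduce spectral distortion
    power = np.abs(np.fft.rfft(windowed, n_fft)) ** 2  # magnitude-squared spectrum
    energies = power @ mel_filterbank(n_filters, n_fft, sample_rate).T
    log_e = np.log(np.maximum(energies, 1e-10))
    return dct(log_e, type=2, axis=1, norm='ortho')[:, :n_ceps]  # keep 13 coefficients
```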
LPC coefficients can be calculated directly from the windowed portion of speech by either the autocorrelation or the covariance method [26]. The LPCC can then be obtained by Durbin's recursive procedure without computing the discrete Fourier transform (DFT) and inverse DFT, which are computationally complex and time-consuming [27]. In both cases, features are extracted using the SFSR, MFR, MFS, and MFSR methods.
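A hedged sketch of that route follows: Durbin's recursion turns the frame autocorrelation into LPC coefficients, and a standard recursion (one common sign convention; the order 12 in the demo comment is an assumption, not a value reported here) converts them to cepstra:

```python
import numpy as np

def levinson_durbin(r, order):
    # Durbin's recursion: autocorrelation r[0..order] -> LPC coefficients a_1..a_order
    a, err = np.zeros(order + 1), r[0]
    for i in range(1, order + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / err
        a_new = a.copy()
        a_new[i] = k
        a_new[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a, err = a_new, err * (1.0 - k * k)
    return a[1:]

def lpc_to_cepstrum(a, n_ceps):
    # Recursive LPC-to-cepstrum conversion; no DFT/IDFT is needed
    p, c = len(a), np.zeros(n_ceps)
    for m in range(1, n_ceps + 1):
        acc = a[m - 1] if m <= p else 0.0
        for k in range(1, m):
            if m - k <= p:
                acc += (k / m) * c[k - 1] * a[m - k - 1]
        c[m - 1] = acc
    return c

# e.g. per frame: r = np.correlate(frame, frame, 'full')[len(frame) - 1:]
# lpcc = lpc_to_cepstrum(levinson_durbin(r[:13], 12), 13)
```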
The LPCC or MFCC feature set contains only the static properties of a given frame of speech. The dynamic characteristics also carry speaker-specific information and are therefore useful for speaker recognition [28]. Two kinds of dynamic features are used in speech processing [28]:
Δ features: the average first-order temporal derivative, which captures the velocity of the features.
ΔΔ features: the average second-order temporal derivative, which captures the acceleration of the features.
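A small sketch of the usual regression-based computation (the window width N = 2 is a typical choice assumed here, not stated in the paper) that turns 13 static coefficients into a 39-dimensional static + Δ + ΔΔ vector:

```python
import numpy as np

def deltas(feats, N=2):
    # First-order regression deltas over a +/-N frame window
    T = len(feats)
    padded = np.pad(feats, ((N, N), (0, 0)), mode='edge')
    denom = 2 * sum(n * n for n in range(1, N + 1))
    return sum(n * (padded[N + n:T + N + n] - padded[N - n:T + N - n])
               for n in range(1, N + 1)) / denom

def add_dynamics(static):
    d = deltas(static)                 # velocity
    dd = deltas(d)                     # acceleration
    return np.hstack([static, d, dd])  # 13 -> 39 dimensions
```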
A survey of the literature reveals that the significance of MFCC and LPCC features for speaker verification under the limited data constraint has not been studied. Therefore, this work examines the effectiveness of both feature extraction techniques.
##### 3.3 Speaker Modeling and Testing
Different techniques are available for speaker modeling, including vector quantization (VQ), hidden Markov models (HMM), Gaussian mixture models (GMM), and GMM-UBM. The present work uses the GMM-UBM modeling method, which is normally preferred when both training and testing data are small in size [29].
The final stage in a speaker verification system is testing, in which the test feature vectors are compared with the reference models [5]. The log-likelihood ratio test [30] is adopted in this work.
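The sketch below outlines this chain with scikit-learn (an illustrative reconstruction, not the paper's code: the mixture count, mean-only MAP adaptation, and relevance factor of 16 are common choices from the GMM-UBM literature, not values stated above):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_ubm(pooled_background_feats, n_mix=64):
    ubm = GaussianMixture(n_components=n_mix, covariance_type='diag', max_iter=100)
    ubm.fit(pooled_background_feats)  # EM over the pooled UBM speakers' features
    return ubm

def map_adapt_means(ubm, train_feats, relevance=16.0):
    # Mean-only MAP adaptation of the UBM towards one speaker's training data
    gamma = ubm.predict_proba(train_feats)            # (T, K) responsibilities
    n_k = gamma.sum(axis=0)                           # soft frame counts per mixture
    e_k = gamma.T @ train_feats / np.maximum(n_k, 1e-10)[:, None]
    alpha = (n_k / (n_k + relevance))[:, None]
    spk = GaussianMixture(n_components=ubm.n_components, covariance_type='diag')
    spk.weights_, spk.covariances_ = ubm.weights_, ubm.covariances_
    spk.precisions_cholesky_ = ubm.precisions_cholesky_
    spk.means_ = alpha * e_k + (1.0 - alpha) * ubm.means_
    return spk

def llr_score(spk, ubm, test_feats):
    # Average per-frame log-likelihood ratio; thresholded to accept or reject
    return np.mean(spk.score_samples(test_feats) - ubm.score_samples(test_feats))
```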
## 4. Experimental Results and Discussions
Features of dimension 13 and 39 are extracted using the MFCC and LPCC techniques, and the speakers are modeled using the GMM and GMM-UBM techniques. An ideal speaker verification system accepts all true speakers and rejects all false speakers [4]. Performance is measured in terms of the equal error rate (EER), the operating point at which the false rejection rate (FRR) equals the false acceptance rate (FAR) [31]. The NIST-2003 database is used to test the trained models.
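For reference, the EER can be read off the FAR/FRR curves with a few lines; this is a generic sketch, not the scoring tool used in the paper:

```python
import numpy as np

def equal_error_rate(target_scores, impostor_scores):
    # Sweep thresholds; the EER is where false rejection meets false acceptance
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    frr = np.array([(target_scores < t).mean() for t in thresholds])
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))
    return 100.0 * (frr[i] + far[i]) / 2.0  # EER in percent
```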
In the first experiment, 13-dimensional features are extracted using the MFCC and LPCC techniques and modeled using GMM, as shown in Fig. 2(a) and (b), respectively. The analysis technique used is SFSR, and the experiment is conducted for 3, 4, 5, 6, 9, and 12 seconds of data with different numbers of Gaussian mixtures. The minimum EER is 45.17%, 44.21%, 42.36%, 41.59%, 38.25%, and 36.68% for 3, 4, 5, 6, 9, and 12 seconds, respectively; the lowest value, 36.68%, is obtained for 12 seconds of data with 16 Gaussian mixtures. The average EER is calculated from the minimum EER obtained for each data size over the different Gaussian mixtures; for MFCC-SFSR it is 41.39%.
For LPCC-SFSR, the minimum EER for 3, 4, 5, 6, 9, and 12 seconds of data is 43.08%, 41.41%, 39.97%, 38.7%, 31.34%, and 28.18%, respectively, over the different Gaussian mixtures; among these, the lowest value is again obtained for 12 seconds of data. The average EER of LPCC-SFSR is 37.21%, which is 4.18% lower than that of MFCC-SFSR. For both MFCC-SFSR and LPCC-SFSR, the minimum EER over all data sizes is obtained for 12 seconds of data. Under the limited data condition, both train and test data are restricted to at most 15 seconds, so the remaining experimental results are analyzed only for 12 seconds of data. In SFSR analysis, both FS and FR are fixed and the available train/test data are limited, so few feature vectors are extracted; this prevents good speaker modeling and accurate speaker testing.
To overcome this problem, the number of feature vectors must be increased, which can be done using the MFR, MFS, and MFSR analysis techniques.
Fig. 2.
Performance of SFSR using (a) MFCC and (b) LPCC features and GMM modeling.
Fig. 3.
Performance comparison of speaker verification system using (a) MFCC-MFR (b) MFCC-MFS, and (c) MFCC-MFSR and for GMM modeling.
In the second experiment, the MFR, MFS, and MFSR analysis techniques are evaluated with 13-dimensional MFCC features, and the results are plotted in Fig. 3(a)–(c), respectively.
The modeling is done using GMM. For MFCC-MFR, the minimum EER of 39.7% is obtained for 12 seconds of data with 64 Gaussian mixtures. The average EER of MFCC-MFR is 42.83%, which is 1.44% higher than that of MFCC-SFSR, so this technique brings no improvement in performance.
For MFCC-MFS, the minimum EER is 36.04%, obtained with 32 Gaussian mixtures for 12 seconds of data. The average EER is 40.79%. MFCC-MFS performs better than MFCC-MFR for all data sizes, its average EER being 2.04% lower. This is because the magnitude spectra and feature vectors extracted for different FS differ due to the different frequency resolutions [10].
MFCC-MFSR gives a minimum EER of 35.9%, obtained for 12 seconds of data with 32 Gaussian mixtures. The average EER is 40.38%, which is 2.45% and 0.41% lower than that of MFCC-MFR and MFCC-MFS, respectively. This is because MFSR combines MFS and MFR and therefore produces more feature vectors than either alone.
In the third experiment, 13-dimensional LPCC features are extracted using the MFR, MFS, and MFSR analysis techniques, GMM modeling is used to obtain the speaker models, and the results are shown in Fig. 4(a)–(c), respectively. The minimum EER of LPCC-MFR is 27.95%, obtained for 12 seconds of data with 16 Gaussian mixtures. The average EER is 37.14%, which is 0.07% lower than that of LPCC-SFSR.
In LPCC-MFS analysis, the minimum EER of 27.64% is obtained for 12 seconds of train/test data with 16 Gaussian mixtures. The average EER is 36.33%, which is 0.81% lower than that of LPCC-MFR; for all data sizes, LPCC-MFS has a lower EER than LPCC-MFR.
In LPCC-MFSR analysis, there is a considerable improvement in EER compared to LPCC-MFS and LPCC-MFR. The lowest EER of LPCC-MFSR is 27.5%, obtained with 16 Gaussian mixtures for the 12-second data size. The average EER of LPCC-MFSR is 35.95%, which is 1.19% and 0.38% lower than that of LPCC-MFR and LPCC-MFS, respectively. Besides the average EER, the individual EER of LPCC-MFSR for each data size is also substantially lower than that of LPCC-MFR and LPCC-MFS.
Fig. 4.
Performance comparison of speaker verification system using (a) LPCC-MFR, (b) LPCC-MFS, and (c) LPCC-MFSR for GMM modeling.
These experiments show that when both train and test data are small, the MFSR analysis technique improves verification performance over SFSR for both feature extraction methods. Furthermore, LPCC-MFSR yields an average EER 4.43% lower than that of MFCC-MFSR.
To study the significance of GMM-UBM modeling, we conducted the same experiments using GMM-UBM. In GMM-UBM, the UBM is constructed from a large number of speakers' data and trained using the EM algorithm; the speaker-dependent model is then created by MAP adaptation [25]. The UBM should contain equal numbers of male and female speakers; here the total duration of male and female speech is 1,506 seconds each, taken from the NIST-2003 database. In this experiment, too, features are extracted using LPCC and MFCC with the different speech analysis techniques, including MFR, MFS, and MFSR.
Fig. 5(a) and (b) show the experimental results of SFSR analysis for MFCC and LPCC, respectively, with GMM-UBM modeling. The minimum EER of MFCC-SFSR is 27.28%, obtained for 12 seconds of data with 128 Gaussian mixtures. The average EER of MFCC-SFSR is 35.12%.
For LPCC-SFSR, the lowest EER of 26.91% is obtained for 12 seconds of data with 32 Gaussian mixtures. The average EER of LPCC-SFSR is 34.19%, which is 0.93% lower than that of MFCC-SFSR.
Fig. 5.
Performance of SFSR using (a) MFCC and (b) LPCC features and GMM-UBM modeling.
Fig. 6.
Performance comparison of speaker verification system using (a) MFCC-MFR, (b) MFCC-MFS, and (c) MFCC-MFSR for GMM-UBM modeling.
The results of the MFR, MFS, and MFSR analysis techniques are shown in Fig. 6(a)–(c), respectively, using MFCC features and GMM-UBM modeling. For MFCC-MFR, the lowest EER of 26.1% is obtained for 12 seconds of data with 128 Gaussian mixtures. The average EER of MFCC-MFR is 34.29%, which is 0.83% lower than that of MFCC-SFSR.
For MFCC-MFS, the minimum EER is 25.38%, obtained with 128 Gaussian mixtures for 12 seconds of data. The average EER is 33.58%. MFCC-MFS performs better than MFCC-MFR for all data sizes, its average EER being 0.71% lower.
MFCC-MFSR gives a minimum EER of 25.57%, obtained for 12 seconds of data with 256 Gaussian mixtures. The average EER is 33.42%, which is 0.87% and 0.16% lower than that of MFCC-MFR and MFCC-MFS, respectively.
The LPCC features are extracted using the MFR, MFS, and MFSR analysis techniques, with GMM-UBM modeling, and the results are shown in Fig. 7(a)–(c), respectively. The lowest EER of LPCC-MFR is 26.12%, obtained for 12 seconds of data with 64 Gaussian mixtures. The average EER is 34.02%, which is 0.17% lower than that of LPCC-SFSR.
In LPCC-MFS analysis, the lowest EER of 26.64% is obtained for 12 seconds of training and testing data with 64 Gaussian mixtures, and the average EER is 33.48%, lower than that of LPCC-MFR; for all data sizes, LPCC-MFS has a lower EER than LPCC-MFR.
In LPCC-MFSR analysis, there is a considerable improvement in EER compared to LPCC-MFS and LPCC-MFR. The lowest EER of LPCC-MFSR is 26.0%, obtained with 128 Gaussian mixtures for the 12-second data size. The average EER of LPCC-MFSR is 33.39%, which is 0.63% and 0.09% lower than that of LPCC-MFR and LPCC-MFS, respectively. Besides the average EER, the individual EER of LPCC-MFSR for each data size is also substantially lower than that of LPCC-MFR and LPCC-MFS.
Fig. 7.
Performance comparison of speaker verification system using (a) LPCC-MFR, (b) LPCC-MFS, and (c) LPCC-MFSR for GMM-UBM modeling.
Table 1 presents the minimum and average EER of the MFCC- and LPCC-based analysis techniques. With GMM modeling, the LPCC-based analyses give better performance, with lower minimum and average EER than the corresponding MFCC-based analyses: the minimum EER of LPCC-SFSR, LPCC-MFR, LPCC-MFS, and LPCC-MFSR is 8.5%, 11.75%, 8.4%, and 8.4% lower than that of MFCC-SFSR, MFCC-MFR, MFCC-MFS, and MFCC-MFSR, respectively, and the average EER is 4.18%, 5.62%, 4.46%, and 4.43% lower, respectively.
Table 1.
Comparison of minimum and average EER (%) of GMM and GMM-UBM modeling for 13-dimensional features using the SFSR, MFR, MFS, and MFSR analysis techniques
| Speech analysis | GMM Min. EER (%) | GMM Avg. EER (%) | GMM-UBM Min. EER (%) | GMM-UBM Avg. EER (%) |
|---|---|---|---|---|
| MFCC-SFSR | 36.68 | 41.39 | 27.28 | 35.12 |
| LPCC-SFSR | 28.18 | 37.21 | 26.10 | 34.19 |
| MFCC-MFR | 39.70 | 42.83 | 26.12 | 34.29 |
| LPCC-MFR | 27.95 | 37.14 | 26.10 | 34.02 |
| MFCC-MFS | 36.04 | 40.79 | 25.38 | 33.58 |
| LPCC-MFS | 27.64 | 36.33 | 26.64 | 33.48 |
| MFCC-MFSR | 35.90 | 40.38 | 25.57 | 33.42 |
| LPCC-MFSR | 27.5 | 35.95 | 26.01 | 33.39 |
Another interesting observation from Figs. 6 and 7 for GMM-UBM modeling is that the LPCC-based MFR, MFS, and MFSR have lower EER than the MFCC-based MFR, MFS, and MFSR for 3, 4, 5, and 6 seconds of data, whereas when the train/test data are increased to 9 and 12 seconds, the MFCC-based MFR, MFS, and MFSR achieve the lower EER. Thus, when both training and testing data are very limited (3–6 seconds), LPCC performs better than MFCC, because LPCC is able to capture more information from the speech data that distinguishes different speakers [32]; when the training and testing data are increased beyond 6 seconds, the MFCC-based analyses improve on the LPCC-based ones. The minimum EER of LPCC-SFSR and LPCC-MFR is 1.18% and 0.02% lower than that of MFCC-SFSR and MFCC-MFR, respectively, while MFCC-MFS and MFCC-MFSR have minimum EER 1.26% and 0.44% lower than LPCC-MFS and LPCC-MFSR, respectively. The average EER of LPCC-SFSR, LPCC-MFR, LPCC-MFS, and LPCC-MFSR is 0.93%, 0.27%, 0.10%, and 0.03% lower than that of MFCC-SFSR, MFCC-MFR, MFCC-MFS, and MFCC-MFSR, respectively.
To justify the above observation, 39-dimensional MFCC and LPCC features are extracted for the different analysis techniques and modeled using GMM and GMM-UBM.
These feature vectors contain both the static and transitional characteristics of the speaker-specific information [28]; the Δ and ΔΔ coefficients are calculated to capture the transitional characteristics.
Fig. 8(a) and (b) show the experimental results of SFSR analysis using the 39-dimensional MFCC and LPCC features, respectively, with GMM modeling. The minimum EER of MFCC-SFSR is 30.35%, obtained for 12 seconds of data with 16 Gaussian mixtures. The average EER of MFCC-SFSR is 39.52%.
For LPCC-SFSR, the lowest EER of 29.72% is obtained for 12 seconds of data with 32 Gaussian mixtures. The average EER of LPCC-SFSR is 37.95%, which is 1.57% lower than that of MFCC-SFSR under GMM modeling.
Fig. 8.
Performance of SFSR using (a) ΔΔMFCC and (b) ΔΔLPCC features and GMM modeling.
The results of the MFR, MFS, and MFSR analysis techniques are shown in Fig. 9(a)–(c), respectively, using MFCC features and GMM modeling. For MFCC-MFR, the lowest EER of 30.30% is obtained for 12 seconds of data with 16 Gaussian mixtures. The average EER of MFCC-MFR is 39.39%, which is 0.13% lower than that of MFCC-SFSR.
Fig. 9.
Performance comparison of speaker verification system for ΔΔ using (a) MFCC-MFR, (b) MFCC-MFS, and (c) MFCC-MFSR for GMM modeling.
For MFCC-MFS, the minimum EER is 30.26%, obtained with 16 Gaussian mixtures for 12 seconds of data. The average EER is 38.25%. MFCC-MFS performs better than MFCC-MFR for all data sizes, its average EER being 1.14% lower.
MFCC-MFSR gives a minimum EER of 30.26%, obtained for 12 seconds of data with 32 Gaussian mixtures. The average EER is 37.89%, which is 1.5% and 0.36% lower than that of MFCC-MFR and MFCC-MFS, respectively.
The LPCC features are extracted using the MFR, MFS, and MFSR analysis techniques, with GMM modeling, and the results are shown in Fig. 10(a)–(c), respectively. The lowest EER of LPCC-MFR is 30.08%, obtained for 12 seconds of data with 32 Gaussian mixtures. The average EER is 37.79%, which is 0.16% lower than that of LPCC-SFSR.
In LPCC-MFS analysis, the lowest EER of 29.31% is obtained for 12 seconds of training and testing data with 16 Gaussian mixtures. The average EER is 37.49%, which is 0.30% lower than that of LPCC-MFR; for all data sizes, LPCC-MFS has a lower EER than LPCC-MFR.
In LPCC-MFSR analysis, there is a considerable improvement in EER compared to LPCC-MFS and LPCC-MFR. The lowest EER of LPCC-MFSR is 29.44%, obtained with 64 Gaussian mixtures for the 12-second data size. The average EER of LPCC-MFSR is 36.81%, which is 0.98% and 0.68% lower than that of LPCC-MFR and LPCC-MFS, respectively. Besides the average reduction, the individual EER of LPCC-MFSR for each data size is also considerably lower than that of LPCC-MFR and LPCC-MFS.
Fig. 10.
Performance comparison of speaker verification system for ΔΔ using (a) LPCC-MFR, (b) LPCC-MFS, and (c) LPCC-MFSR for GMM modeling.
Fig. 11(a) and (b) show the experimental results of SFSR analysis for MFCC and LPCC, respectively, with GMM-UBM modeling. The minimum EER of MFCC-SFSR is 24.48%, obtained for 12 seconds of data with 128 Gaussian mixtures. The average EER of MFCC-SFSR is 33.58%.
For LPCC-SFSR, the lowest EER of 23.71% is obtained for 12 seconds of data with 64 Gaussian mixtures. The average EER of LPCC-SFSR is 32.72%, which is 0.86% lower than that of MFCC-SFSR.
Fig. 11.
Performance of SFSR using (a) ΔΔMFCC and (b) ΔΔLPCC features and GMM-UBM modeling.
The results of the MFR, MFS, and MFSR analysis techniques are shown in Fig. 12(a)–(c), respectively, using MFCC features and GMM-UBM modeling. For MFCC-MFR, the lowest EER of 22.4% is obtained for 12 seconds of data with 128 Gaussian mixtures. The average EER of MFCC-MFR is 32.66%, which is 0.92% lower than that of MFCC-SFSR.
For MFCC-MFS, the minimum EER is 23.25%, obtained with 128 Gaussian mixtures for 12 seconds of data. The average EER is 32.13%. MFCC-MFS performs better than MFCC-MFR for all data sizes, its average EER being 0.53% lower.
MFCC-MFSR gives a minimum EER of 22%, obtained for 12 seconds of data with 128 Gaussian mixtures. The average EER is 31.83%, which is 0.83% and 0.3% lower than that of MFCC-MFR and MFCC-MFS, respectively.
Fig. 12.
Performance comparison of speaker verification system for ΔΔ using (a) MFCC-MFR, (b) MFCC-MFS, and (c) MFCC-MFSR for GMM-UBM modeling.
Fig. 13(a)–(c) show the performance of the MFR, MFS, and MFSR analyses, respectively, using LPCC features and GMM-UBM modeling. The lowest EER of LPCC-MFR is 23.69%, obtained for 12 seconds of data with 64 Gaussian mixtures. The average EER is 31.91%, which is 0.81% lower than that of LPCC-SFSR.
In LPCC-MFS analysis, the lowest EER of 23.7% is obtained for 12 seconds of training and testing data with 64 Gaussian mixtures. The average EER is 31.7%, which is 0.21% lower than that of LPCC-MFR; for all data sizes, LPCC-MFS has a lower EER than LPCC-MFR.
Fig. 13.
Performance comparison of speaker verification system for ΔΔ using (a) LPCC-MFR, (b) LPCC-MFS, and (c) LPCC-MFSR for GMM-UBM modeling.
In LPCC-MFSR analysis, there is a considerable improvement in EER compared to LPCC-MFS and LPCC-MFR. The lowest EER of LPCC-MFSR is 23.33%, obtained with 128 Gaussian mixtures for the 12-second data size. The average EER of LPCC-MFSR is 31.36%, which is 0.55% and 0.34% lower than that of LPCC-MFR and LPCC-MFS, respectively. Besides the average EER, the individual EER of LPCC-MFSR for each data size is also considerably lower than that of LPCC-MFR and LPCC-MFS.
In this experiment, too, the ΔΔLPCC-based MFR, MFS, and MFSR have lower EER than the MFCC-based MFR, MFS, and MFSR for 3, 4, 5, and 6 seconds of data, while for 9 and 12 seconds of train/test data the MFCC-based techniques achieve the lower EER. Again, when both train and test data are limited (3–6 seconds), LPCC performs better than MFCC.
As Table 2 shows, for 39-dimensional features the LPCC-based analysis techniques again have lower average EER than the MFCC-based techniques with GMM modeling: the minimum EER of LPCC-SFSR, LPCC-MFR, LPCC-MFS, and LPCC-MFSR is 0.63%, 0.22%, 0.95%, and 0.82% lower than that of MFCC-SFSR, MFCC-MFR, MFCC-MFS, and MFCC-MFSR, respectively, and the average EER is 1.57%, 1.6%, 0.76%, and 1.08% lower, respectively. With GMM-UBM modeling, the minimum EER of LPCC-SFSR is 0.77% lower than that of MFCC-SFSR, but in the other cases MFCC-MFR, MFCC-MFS, and MFCC-MFSR have minimum EER 1.29%, 0.45%, and 1.33% lower than LPCC-MFR, LPCC-MFS, and LPCC-MFSR, respectively. The average EER of LPCC-SFSR, LPCC-MFR, LPCC-MFS, and LPCC-MFSR nevertheless remains 0.86%, 0.75%, 0.43%, and 0.47% lower than that of MFCC-SFSR, MFCC-MFR, MFCC-MFS, and MFCC-MFSR, respectively.
Table 2.
Comparison of minimum and average EER (%) of GMM and GMM-UBM modeling for 39-dimensional features using the SFSR, MFR, MFS, and MFSR analysis techniques
| Speech analysis | GMM Min. EER (%) | GMM Avg. EER (%) | GMM-UBM Min. EER (%) | GMM-UBM Avg. EER (%) |
|---|---|---|---|---|
| MFCC-SFSR | 30.35 | 39.52 | 24.48 | 33.58 |
| LPCC-SFSR | 29.72 | 37.95 | 23.71 | 32.72 |
| MFCC-MFR | 30.30 | 39.39 | 22.40 | 32.66 |
| LPCC-MFR | 30.08 | 37.79 | 23.69 | 31.91 |
| MFCC-MFS | 30.26 | 38.25 | 23.25 | 32.13 |
| LPCC-MFS | 29.31 | 37.49 | 23.70 | 31.70 |
| MFCC-MFSR | 30.26 | 37.89 | 22.00 | 31.83 |
| LPCC-MFSR | 29.44 | 36.81 | 23.33 | 31.36 |
## 5. Conclusions
In this paper, we verified the significance of the MFR, MFS, and MFSR techniques for speaker verification under the limited data condition. We first analyzed the role of the feature vectors in the SFSR, MFR, MFS, and MFSR methods, and then experimentally evaluated the performance under different conditions. The experimental results show that, for both feature extraction methods, SFSR is unable to capture enough speaker-specific information, while the MFR, MFS, and MFSR methods extract more speaker-specific features. The results indicate that the EER of speaker verification can be moderately improved by adopting an appropriate analysis technique; in particular, MFSR gives better verification performance than SFSR and the other analysis techniques. With GMM modeling, LPCC-MFSR improves the EER over MFCC-MFSR for all data sizes.
We also observed that the LPCC-based MFR, MFS, and MFSR analyses yield lower EER than the MFCC-based analysis techniques under GMM-UBM modeling, and that when the training and testing data are increased, the MFCC-based analysis techniques improve over the LPCC-based ones. To further verify the significance of the various analysis techniques, different feature extraction and modeling techniques need to be explored.
## Biography
##### Thyamagondlu Renukamurthy Jayanthi Kumari
https://orcid.org/0000-0003-0020-4655
She received the B.E. degree from Bangalore University in 1997 and the M.Tech. degree from Visvesvaraya Technological University in 2006. She is currently pursuing the Ph.D. degree at Visvesvaraya Technological University, Karnataka, India. She has been working as a researcher in the Department of Electronics and Communication Engineering, Siddaganga Institute of Technology, Karnataka, India. Her research interests include speech processing and speaker verification under limited data.
## Biography
https://orcid.org/0000-0002-4342-9339
He received the B.E. and M.E. degrees from Bangalore University in 1992 and 1995, respectively, and the Ph.D. from the Indian Institute of Technology, Guwahati, India, in 2009. He has published a number of papers in various national and international journals and conferences, apart from guiding a number of UG, PG, and research scholars. Currently, he is working as a Professor in the Department of Information Science and Engineering, Siddaganga Institute of Technology, Karnataka, India. His research interests are in the areas of speech processing, limited data speaker recognition, image processing, computer networks, and computer architecture.
## References
• 1 A. K. Jain, A. Ross, S. Prabhakar, "An introduction to biometric recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4-20, 2004. doi:[[[10.1109/TCSVT.2003.818349]]]
• 2 S. Dey, S. Barman, R. K. Bhukya, R. K. Das, B. C. Haris, S. R. M. Prasanna, R. Sinha, "Speech biometric based attendance system," in Proceedings of 2014 Twentieth National Conference on Communications (NCC), Kanpur, India, 2014;pp. 1-6. doi:[[[10.1109/NCC.2014.6811345]]]
• 3 T. Kinnunen, H. Li, "An overview of text-independent speaker recognition: from features to supervectors," Speech Communication, 2010, vol. 52, no. 1, pp. 12-40, 2010.doi:[[[10.1016/j.specom.2009.08.009]]]
• 4 G. Pradhan, S. M. Prasanna, "Speaker verification under degraded condition: a perceptual study," International Journal of Speech Technology, vol. 14, no. 4, pp. 405-417, 2011.doi:[[[10.1007/s10772-011-9120-6]]]
• 5 A. E. Rosenberg, "Automatic speaker verification: a review," Proceedings of the IEEE, vol. 64, no. 4, pp. 475-487, 1976. custom:[[[-]]]
• 6 A. Neustein, H. A. Patil, Forensic Speaker Recognition. Heidelberg: Springer, 2012.custom:[[[-]]]
• 7 H. S. Jayanna, S. M. Prasanna, "Analysis, feature extraction, modeling and testing techniques for speaker recognition," IETE Technical Review, vol. 26, no. 3, pp. 181-190, 2009.doi:[[[10.4103/0256-4602.50702]]]
• 8 H. S. Jayanna, "Limited data speaker recognition," Ph.D. dissertation, Indian Institute of Technology Guwahati, India, 2009. custom:[[[http://gyan.iitg.ernet.in/handle/123456789/246]]]
• 9 D. Pati, S. M. Prasanna, "Subsegmental, segmental and suprasegmental processing of linear prediction residual for speaker information," International Journal of Speech Technology, vol. 14, no. 1, pp. 49-64, 2011.doi:[[[10.1007/s10772-010-9087-8]]]
• 10 L. R. Rabiner, B. H. Juang, Fundamentals of Speech Recognition. Englewood Cliffs, NJ: Prentice Hall, 1993.custom:[[[-]]]
• 11 S. M. Prasanna, C. G. Gupta, B. Yegnanarayana, "Extraction of speaker-specific excitation information from linear prediction residual of speech," Speech Communication, vol. 48, no. 10, pp. 1243-1261, 2006. doi:[[[10.1016/j.specom.2006.06.002]]]
• 12 B. Yegnanarayana, S. M. Prasanna, J. M. Zachariah, C. S. Gupta, "Combining evidence from source, suprasegmental and spectral features for a fixed-text speaker verification system," IEEE Transactions on Speech and Audio Processing, vol. 13, no. 4, pp. 575-582, 2005.doi:[[[10.1109/TSA.2005.848892]]]
• 13 F. Farahani, P. G. Georgiou, S. S. Narayanan, "Speaker identification using supra-segmental pitch pattern dynamics," in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, Canada, 2004, pp. 89-92. doi:[[[10.1109/ICASSP.2004.1325929]]]
• 14 A. V. Jadhav, R. V. Pawar, "Review of various approaches towards speech recognition," in Proceedings of 2012 International Conference on Biomedical Engineering (ICoBE), Penang, Malaysia, 2012;pp. 99-103. custom:[[[-]]]
• 15 H. S. Jayanna, S. M. Prasanna, "Multiple frame size and rate analysis for speaker recognition under limited data condition," IET Signal Processing, vol. 3, no. 3, pp. 189-204, 2009.doi:[[[10.1049/iet-spr.2008.0211]]]
• 16 G. L. Sarada, T. Nagarajan, H. A. Murthy, "Multiple frame size and multiple frame rate feature extraction for speech recognition," in Proceedings of 2004 International Conference on Signal Processing and Communications, Bangalore, India, 2004;pp. 592-595. doi:[[[10.1109/SPCOM.2004.1458529]]]
• 17 K. Samudravijaya, "Variable frame size analysis for speech recognition," in Proceedings of the International Conference on Natural Language Processing, Hyderabad, India, 2004;custom:[[[http://www.iitg.ac.in/samudravijaya/publ/04iconVframeSize.pdf]]]
• 18 Q. Zhu, A. Alwan, "On the use of variable frame rate analysis in speech recognition," in Proceedings of 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, Istanbul, Turkey, 2000;pp. 1783-1786. doi:[[[10.1109/ICASSP.2000.862099]]]
• 19 P. Le Cerf, D. Van Compernolle, "A new variable frame analysis method for speech recognition," IEEE Signal Processing Letters, vol. 1, no. 12, pp. 185-187, 1994.doi:[[[10.1109/97.338746]]]
• 20 R. Pawar, H. Kulkarni, "Analysis of FFSR, VFSR, MFSR techniques for feature extraction in speaker recognition: a review," International Journal of Computer Science, vol. 7, no. 4, pp. 26-31, 2010.custom:[[[https://www.researchgate.net/publication/46093625_Analysis_of_FFSR_VFSR_MFSR_Techniques_for_Feature_Extraction_in_Speaker_Recognition_A_Review]]]
• 21 T. Nagarajan, "Implicit systems for spoken language identification," Ph.D. dissertation, Indian Institute of Technology Madras, India, 2004. custom:[[[https://scholar.google.co.kr/scholar?cluster=2837095863581999535&hl=ko&oi=scholarr]]]
• 22 G. S. Ghadiyaram, N. H. Nagarajan, T. N. Thangavelu, H. A. Murthy, "Automatic transcription of continuous speech using unsupervised and incremental training," in Proceedings of the 8th International Conference on Spoken Language Processing, Jeju Island, Korea, 2004;custom:[[[https://www.isca-speech.org/archive/archive_papers/interspeech_2004/i04_0405.pdf]]]
• 23 National Institute of Standards and Technology, 2013 (Online). Available:, https://www.nist.gov/sites/default/files/documents/2017/09/26/2003-spkrec-evalplan-v2.2.pdf
• 24 S. Nakagawa, L. Wang, S. Ohtsuka, "Speaker identification and verification by combining MFCC and phase information," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 4, pp. 1085-1095, 2012. doi:[[[10.1109/TASL.2011.2172422]]]
• 25 A. Salman, E. Muhammad, K. Khurshid, "Speaker verification using boosted cepstral features with Gaussian distributions," in Proceedings of IEEE International Multitopic Conference, Lahore, Pakistan, 2007;pp. 1-5. doi:[[[10.1109/INMIC.2007.4557681]]]
• 26 D. Pati, S. M. Prasanna, "Processing of linear prediction residual in spectral and cepstral domains for speaker information," International Journal of Speech Technology, vol. 18, no. 3, pp. 333-350, 2015. doi:[[[10.1007/s10772-015-9273-9]]]
• 27 W. C. Hsu, W. H. Lai, W. P. Hong, "Usefulness of residual-based features in speaker verification and their combination way with linear prediction coefficients," in Proceedings of the 9th IEEE International Symposium on Multimedia Workshops, Beijing, China, 2007;pp. 246-251. doi:[[[10.1109/ISM.Workshops.2007.49]]]
• 28 S. Furui, "Comparison of speaker recognition methods using statistical features and dynamic features," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, no. 3, pp. 342-350, 1981. doi:[[[10.1109/tassp.1981.1163605]]]
• 29 V. Prakash, J. H. L. Hansen, "In-set/out-of-set speaker recognition under sparse enrollment," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 7, pp. 2044-2052, 2007. doi:[[[10.1109/TASL.2007.902058]]]
• 30 T. Hasan, J. H. Hansen, "A study on universal background model training in speaker verification," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 1890-1899, 2011. doi:[[[10.1109/TASL.2010.2102753]]]
• 31 N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, P. Ouellet, "Front-end factor analysis for speaker verification," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788-798, 2011. doi:[[[10.1109/TASL.2010.2064307]]]
• 32 E. Wong, S. Sridharan, "Comparison of linear prediction cepstrum coefficients and mel-frequency cepstrum coefficients for language identification," in Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, China, 2001;pp. 95-98. doi:[[[10.1109/ISIMP.2001.925340]]]
https://cameramath.com/expert-q&a/Algebra/Use-the-graph-to-determine-a-open-intervals-on-which-the-function_6
a. Select the correct choice below and, if necessary, fill in the answer box to complete your choice. A. The function is increasing on the interval(s) $$( - 2 , \infty )$$. (Type your answer in interval notation. Use a comma to separate answers as needed.) b. Select the correct choice below and, if necessary, fill in the answer box to complete your choice. A. The function is decreasing on the interval(s) $$( - \infty , - 2 )$$. (Type your answer in interval notation. Use a comma to separate answers as needed.)
http://www.tug.org/pipermail/metapost/2011-July/002365.html
# [metapost] Metafun + plainLuaTeX/LuaLaTeX
Arno Trautmann Arno.Trautmann at gmx.de
Sun Jul 31 21:48:10 CEST 2011
Hi Hans,
Hans Hagen wrote:
> On 31-7-2011 12:03, Arno Trautmann wrote:
>> Hi all,
>>
>> maybe this question is a really stupid one, but I hope someone of you
>> can help me: I'd like to learn MetaFun; however, I don't want to use it
>> with ConTeXt, but with plainLuaTeX or LuaLaTeX, using LuaTeXs mplib. As
>> far as I understood, MetaFun is a format for MetaPost and therefore I
>> thought it should be independent of the TeX format used – is this true?
>> Or does ConTeXt offer additional stuff on format level that are
>> necessary for using MetaFun?
>> Finally, if one can use MetaFun with plainLuaTeX – how to do so on a
>> (Linux x86_64) TeX live 2011? In TeX live 2010, I find the file
>> metafun.mp from which I guess it is the format for MetaPost.
>>
>> As you see, I have basically no idea what I need in order to get MetaFun
>> working – I'll be very thankful for any hints where to start …
>
> Quite some of the metafun macros are generic but there are also a couple
> of extensions that work only with ConTeXt and some even only with
> ConTeXt mkiv (the luatex version). Of course I could make most generic
> but it's not worth the trouble and it would cripple further development.
May I ask why MetaFun depends on ConTeXt? In my simple world, MetaFun is
just a collection of macros for MetaPost, so I don't see where the TeX
format comes in.
> However, you can make independent metafun graphics as follows. Make a
> file, say test.tex:
>
> \startTEXpage
> metafun code
> \stopTEXpage
>
> Then run
>
> context test.tex
>
> and you will get a graphic that you can include in whatever tex macro
> package. So, you only need to know two context commands.
That would be applicable in, say, a private document. However, I'd like
to use it in a package, so in the end the user should only need to say a
TeX command and it should result in a MetaPost graphic. And this should
be independent of the TeX format and not require an extra context run
(as this would need shell-escape).
So from your answer I conclude that an extra context run is the best
way; however, is there another way for me, even if some macros are
missing then?
cheers
Arno
https://encyclopediaofmath.org/index.php?title=Local_dimension&printable=yes
Local dimension
2010 Mathematics Subject Classification: Primary: 54F45 [MSN][ZBL]
of a normal topological space $X$
The topological invariant $\mathrm{locdim}(X)$, defined as follows. One says that $\mathrm{locdim}(X) \le n$, $n = -1,0,1,\ldots$ if for any point $x \in X$ there is a neighbourhood $O_x$ for which the Lebesgue dimension of its closure satisfies the relation $\dim \bar O_x \le n$. If $\mathrm{locdim}(X) \le n$ for some $n$, then the local dimension of $X$ is finite, so one writes $\mathrm{locdim}(X) < \infty$ and puts $$\mathrm{locdim}(X) = \min\{ n : \mathrm{locdim}(X) \le n \}$$
Always $\mathrm{locdim}(X) \le \dim(X)$; there are normal spaces $X$ with $\mathrm{locdim}(X) < \dim(X)$; in the class of paracompact spaces always $\mathrm{locdim}(X) = \dim(X)$. If in the definition of local dimension the Lebesgue dimension $\dim \bar O_x$ is replaced by the large inductive dimension $\mathrm{Ind} \bar O_x$, then one obtains the definition of the local large inductive dimension $\mathrm{locInd}(X)$.
See [a1] for a construction of a space with $\mathrm{locdim}(X) < \dim(X)$ and — as an application — a hereditarily normal space $Y$ with $\dim Y = 0$ yet $Y$ contains subspaces of arbitrary high dimension.
For the notions of the local dimension at a point of an analytic space, algebraic variety or scheme cf. Analytic space; Dimension of an associative ring; Analytic set, and Spectrum of a ring.
References
[a1] E. Pol, R. Pol, "A hereditarily normal strongly zero-dimensional space containing subspaces of arbitrarily large dimension" Fund. Math. , 102 (1979) pp. 137–142 [a2] R. Engelking, "Dimension theory" , North-Holland & PWN (1978) pp. 19; 50
How to Cite This Entry:
Local dimension. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Local_dimension&oldid=35532
http://iqtestpreparation.com/daily-test/1406
# IQ Contest, Daily, Weekly & Monthly IQ & Mathematics Competitions
#### Question No 1
Choose or find odd word
V.V. Giri, General Zia, General Ershad, Lal Bahadur Shastri
Solution!
All except Lal Bahadur Shastri were Presidents of some country, while Lal Bahadur Shastri was the Prime Minister of India.
#### Question No 2
The present ages of 3 persons are in proportion 3 : 4 : 5. After 5 years, the sum of their ages will be 75. Find their present ages in years.
Solution!
Let their present ages be 3x, 4x and 5x years respectively.
Then, (3x + 5) + (4x + 5) + (5x + 5) = 75
12x = 60
x = 5
So, the present ages are 15 years, 20 years and 25 years respectively.
#### Question No 3
Find the missing?
0.25/5, 0.16/4, 0.64/8, ___, 0.144/12
Solution!
The numerator is the square of the denominator, written after the decimal point.
#### Question No 4
In a family, there are six members A, B, C, D, E and F. A and B are a married couple, A being the male member. D is the only son of C, who is the brother of A. E is the sister of D. B is the daughter-in-law of F, whose husband has died. How is F related to C?
Solution!
Option (C): A is a male married to B, so A is the husband and B is the wife. C is the brother of A, D is the son of C, and E, the sister of D, is the daughter of C. B being the daughter-in-law of F, whose husband has died, means F is the mother of A. Since C is the brother of A, F is the mother of C.
#### Question No 5
2/4 + 1/8 = ?
Solution!
No explanation available for this question..
#### Question No 6
Find the same relationship as Sand:Mud
Solution!
No explanation available for this question..
#### Question No 7
In following questions, one term in number series is incorrect.
Find out the incorrect number
11, 5, 20, 12, 40, 26, 74, 54
Solution!
The given sequence is a combination of two series:
I. 11, 20, 40, 74 and
II. 5, 12, 26, 54
The pattern in I becomes + 9, + 18, + 36, ... if 40 is replaced by 38.
So, 40 is wrong..
#### Question No 8
What is the unit digit in 7^105?
Solution!
Unit digit in 7^105 = unit digit in [(7^4)^26 x 7]
But, unit digit in (7^4)^26 = 1 (since 7^4 = 2401)
So the unit digit in 7^105 = (1 x 7) = 7
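The cycle argument can be double-checked with one line of modular arithmetic (a quick Python verification, not part of the original solution):

```python
print(pow(7, 105, 10))  # -> 7, the unit digit of 7^105
```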
#### Question No 9
What comes next?
1,1/2,3,3/4,5,5/8,7,...?
Solution!
The numbers at odd positions are the odd numbers 1, 3, 5, 7, and the fractions at even positions keep the preceding numerator while their denominators follow the series 2, 4, 8, 16; so the next term is 7/16.
#### Question No 10
Find the word which is different from the rest
Solution!
No explanation available for this question..
#### Question No 11
If A+ B means A is the sister of B.
A - B means A is the brother of B.
A x B means A is the daughter of B.
Which of the following shows the relation that E is the maternal uncle of D?
Solution!
Clearly, E being the maternal uncle of D means D is the daughter of a sister (say F) of E, i.e., D x F + E.
#### Question No 12
Which word means the same as proximity?
Solution!
Proximity means the same as nearness..
#### Question No 13
What is the angle between minute hand and hour hand of clock at 10'O clock?
Solution!
No explanation available for this question..
#### Question No 14
Jane and Naura are age fellows. The total age of Miley and Naura is 7 years more than the total age of Jane and Bell.
Bell is how many years younger than Miley?
Solution!
Let Naura and Jane have the same age, x.
Then Miley + x = 7 + x + Bell
=> Miley - Bell = 7 + x - x = 7
=> Bell is younger than Miley by 7 years.
#### Question No 15
Find the same relationship as Mason:Wall
Solution!
No explanation available for this question..
#### Question No 16
The temperature in °F on 20 days during the month of June was as follows:
70, 76, 76, 74, 70, 70, 72, 74, 80, 74, 74, 78, 76, 78, 76, 74, 78, 80, 76
What is the mode of the temperature for the month of June?
Solution!
76 and 78 have the highest occurrence in the series.
#### Question No 17
There are 6 persons A, B, C, D, E and F. C is the sister of F. B is the brother of E's husband. D is the father of A and grandfather of F. There are two fathers, three brothers and a mother in the group.
How is F related to E?
Solution!
D is the father of A and grandfather of F, so A is the father of F; thus D and A are the two fathers. C is the sister of F, so C is the daughter of A. Since there is only one mother, E is the wife of A and hence the mother of C and F. So B is the brother of A. There are three brothers, so F is the brother of C. Clearly, F is the son of E.
#### Question No 18
Kilogram : Mass :: _____ : Current
Solution!
No explanation available for this question..
#### Question No 19
Jame had $21 left after spending 30% of the money he took for shopping. How much money did he take along with him?
Solution!
Let the money he took for shopping be m.
Money he spent = 30% of m = 30/100 x m = 3m/10
Money left with him = m - 3m/10 = (10m - 3m)/10 = 7m/10
But money left with him = $21
Therefore 7m/10 = $21, so m = $21 x 10/7 = $30.
Therefore, the money he took for shopping is $30.
#### Question No 20
Fifteen years ago, Conroy's age was half of his present age. What will his age be after 5 years?
Solution!
No explanation available for this question..
#### Question No 21
In alphabet series, some alphabets are missing which are given in that order as one of the alternatives below it. Choose the correct alternative.
_ cb _ ca _ bacb _ ca _ bac _ d
Solution!
The series is acbd / cadb / acbd / cadb / acbd. Thus, the pattern acbd / cadb is repeated.
#### Question No 22
How many letters are in "ABC"?
Solution!
No explanation available for this question..
#### Question No 23
Find the same relationship as Ashes:Fire
Solution!
No explanation available for this question..
#### Question No 24
Choose or find odd number pair
(12 - 144) , (13 - 156) , (15 - 180) , (16 -176)
Solution!
In all other pairs, the second number is obtained by multiplying the first number by 12.
#### Question No 25
What comes next? 4 , 9 , 17 , 7 , 11 , 18 , 10 , 13 , 19 , ?
Solution!
No explanation available for this question..
#### Question No 26
Even is to odd as Prime is to
Solution!
No explanation available for this question..
#### Question No 27
In 1992, Jahan was thrice as old as Bell, but in 1996 Jahan was only twice as old as Bell. How old was Bell in 2000?
Solution!
Let, in 1992,
Jahan's age = x and Bell's age = y.
Jahan was thrice as old as Bell:
x = 3y ....... (1)
After 4 years Jahan was only twice as old:
x + 4 = 2(y + 4) ....... (2)
Substituting (1) into (2) gives 3y + 4 = 2y + 8, so y = 4.
Eight years after 1992, in 2000, Bell was 12.
#### Question No 28
A and B are the parents of C and D. E and D are the parents of F. How is E related to B?
Solution!
No explanation available for this question..
#### Question No 29
The average temperature on Wednesday, Thursday and Friday was 25. The average temperature on Thursday, Friday and Saturday was 24. If the temperature on Wednesday was 30, what was the temperature on Saturday?
Solution!
Total temperature on Wednesday, Thursday and Friday = 25 x 3 = 75
Total temperature on Thursday, Friday and Saturday = 24 x 3 = 72
So, the difference between the temperatures on Wednesday and Saturday = 3
If Wednesday's temperature = 30, then Saturday's temperature = 30 - 3 = 27
#### Question No 30
54, 54, 63, 63, 72, 72, 81....
What's next?
https://dml.cz/handle/10338.dmlcz/140502
# Article
Full entry | PDF (0.1 MB)
Keywords:
orbit projection; proper Lie groupoid; fibration
Summary:
Let $\mathcal{G} \rightrightarrows M$ be a source locally trivial proper Lie groupoid such that each orbit is of finite type. The orbit projection $M \to M/\mathcal{G}$ is a fibration if and only if $\mathcal{G}\rightrightarrows M$ is regular.
References:
[1] Dugundji, J.: Topology. Allyn and Bacon, Inc., Boston (1966). MR 0193606 | Zbl 0144.21501
[2] Palais, R. S.: On the existence of slices for actions of non-compact Lie groups. Ann. of Math. 73 (1961), 295-323. DOI 10.2307/1970335 | MR 0126506 | Zbl 0103.01802
[3] Rainer, A.: Orbit projections as fibrations. (to appear) in Czech. Math. J., arXiv: math.DG/0610513. MR 2532388
[4] Weinstein, A.: Linearization of regular proper groupoids. J. Inst. Math. Jussieu 3 (2002), 493-511. MR 1956059 | Zbl 1043.58009
[5] Zung, N. T.: Proper groupoids and momentum maps: linearization, affinity, and convexity. Ann. Sci. Éc. Norm. Supér. 39 (2006), 841-869. MR 2292634
http://www.erisian.com.au/wordpress/category/ecash
|
## Archive for the ‘ecash’ Category
### Bitcoin Fees vs Supply and Demand
Continuing from my previous post on historical Bitcoin fees… Obviously history is fun and all, but it’s safe to say that working out what’s going on now is usually far more interesting and useful. But what’s going on now is… complicated. First, as was established in the previous post, most transactions are still paying 0.1 […]
### Bitcoin Fees in History
Prior to Christmas, Rusty did an interesting post on bitcoin fees which I thought warranted more investigation. My first go involved some python parsing of bitcoin-cli results; which was slow, and as it turned out inaccurate — bitcoin-cli returns figures denominated in bitcoin with 8 digits after the decimal point, and python happily rounds that […]
### The Root of all Evil
The Gnu Hunter writes: Yahoo and Microsoft are looking at ways of imposing a postage fee for emails as a way of reducing the ever increasing number of junk emails or spam. No, Yahoo and Microsoft are looking at ways of making more money by charging for something that was previously “free” and are using […]
### Open Source Betting Market
Some more thoughts on this topic. Ecash is actually something of a distraction in the description; there’s no particular need for people to be able to do anonymous transactions, or to transact without talking to a central market — so you can do this just as effectively with market accounts. In that case, it makes […]
### Open Source versus Capitalism
Martin notes that: One fairly silly argument sometimes advanced against Linux is that by reducing towards zero the cost of getting a good operating system, it is somehow communist or anti-capitalist. He’s right: people do make that argument, and it’s silly. It’s especially silly because people already do it for a profit, and even sillier […]
### Spam and Ecash Risks
One risk of solving spam (by doing cool ecash stuff, or by any other means) is that you might be attacked by people who don't want it solved. I underestimated both the enemy's level of sophistication, and also the enemy's level of brute malevolence. I always knew that spammers had no principles and no ethics, […]
### Measures for Bank Efficiency
Each currency has to have a single dedicated bank to handle transactions. It requires a database containing an entry for every transaction that’s taken place. Each transaction needs to do a single element lookup from that database, and there shouldn’t be any locality of reference. Your storage requirements are O(T), and your transaction overhead is […]
http://mathhelpforum.com/geometry/66969-geometry-problem.html
1. ## geometry problem
A picture frame has one side of length 17.2 cm, a side measuring 12.9 cm and a diagonal of length 21.5 cm.
Is the frame rectangular?
4 marks
Help!! What topic do I use?
2. Originally Posted by scoobydoo
A picture frame has one side of length 17.2 cm, a side measuring 12.9 cm and a diagonal of length 21.5 cm.
Is the frame rectangular?
4 marks
Help!! What topic do I use?
Pythagoras' theorem:
$a^2+b^2=c^2$ where $a,b$ are the sides and $c$ is the diagonal.
3. Originally Posted by scoobydoo
A picture frame has one side of length 17.2 cm, a side measuring 12.9 cm and a diagonal of length 21.5 cm.
Is the frame rectangular?
4 marks
Help!! What topic do I use?
consider
$a^2+b^2=c^2$
$17.2^2+12.9^2=21.5^2$
$295.84+166.41=462.25$
$462.25=462.25$
what does that tell you?
edit: Air beat me to it!
4. thanks for the help!!
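For completeness, the same check in a few lines of Python (a sketch; the variable names and the use of math.isclose are ours):

```python
import math

a, b, c = 17.2, 12.9, 21.5   # the two sides and the diagonal, in cm

lhs = a**2 + b**2            # 295.84 + 166.41 = 462.25
rhs = c**2                   # 462.25

# If the diagonal satisfies Pythagoras' theorem, the corner is a right angle,
# so the frame is rectangular.
print(lhs, rhs, math.isclose(lhs, rhs))   # ~462.25 ~462.25 True
```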
https://support.bioconductor.org/p/103076/
bhgyu30 wrote:
Hello,
I cannot work out how to make the readMSData function work.
I get an error message. Has anyone had this before and knows how to solve it? Cheers!
temp = list.files(pattern = "*.CDF")
readMSData(temp, pdata = pheno)
# Polarity can not be extracted from netCDF files, please set manually the
# polarity with the 'polarity' method.
# Error in readMSData(temp, pdata = pheno) :
#   No MS(n>1) spectra in file meta1.CDF
Johannes Rainer wrote:
readMSData with mode = "inMem" (the default) will by default read only spectra of MS level >= 2 from the input files. If you are going to use xcms, you should call readMSData(temp, pdata = pheno, mode = "onDisk"), which will a) read all MS levels from the input files and b) return an OnDiskMSnExp object, which is what you need for xcms. OnDiskMSnExp objects don't read the full MS data into memory and are thus lightweight.
cheers, jo
http://cds.cern.ch/collection/PS%20Experiments?ln=fr&as=1
# PS Experiments
Filter by collection:
DIRAC (333)
HARP-CDP (37)
n_TOF (240)
CLOUD (65)
http://www.coa.edu/tagging.htm
## Genetic Tagging of Humpback Whales
Per J. Palsboll, Judith Allen, Martine Berube, Phillip J. Clapham, Tonnie P. Feddersen, Philip S. Hammond, Richard R. Hudson, Hanne Jorgensen, Steve Katona, Anja Holm Larsen, Finn Larsen, Jon Lien, David K. Mattila, Johann Sigurjonsson, Richard Sears, Tim Smith, Renate Sponer, Peter Stevick & Nils Oien
The ability to recognize individual animals has substantially increased our knowledge of the biology and behavior of many taxa. However, not all species lend themselves to this approach, either because of insufficient phenotypic variation or because tag attachment is not feasible. The use of genetic markers ('tags') represents a viable alternative to traditional methods of individual recognition, as they are permanent and exist in all individuals. We tested the use of genetic markers as the primary means of identifying individuals in a study of humpback whales in the North Atlantic Ocean. Analysis of six microsatellite loci (2,3) among 3,060 skin samples collected throughout this ocean allowed the unequivocal identification of individuals. Analysis of 692 'recaptures', identified by their genotype, revealed individual local and migratory movements of up to 10,000 km, limited exchange among summer feeding grounds, and mixing in winter breeding areas, and also allowed the first estimates of animal abundance based solely on genotypic data. Our study demonstrates that genetic tagging is not only feasible, but generates data (for example, on sex) that can be valuable when interpreting the results of tagging experiments.
Skin biopsy (4) or sloughed skin (5) samples from free-ranging humpback whales (Megaptera novaeangliae) were collected across the North Atlantic between 1988 and 1995. Total-cell DNA was extracted (6), and the sex (7) and genotype at six Mendelian inherited (8) microsatellite loci (9) were determined for each sample. From the 3,060 samples analysed, we detected 2,368 unique genotypes. The expected number of samples collected from different individuals with identical genotype arising by chance was estimated at less than one (see Methods). Because of this, and the fact that all samples with identical genotypes were of consistent sex, we believe that the 3,060 samples represented 2,368 individual whales. Of the 692 recaptures observed during the study, 216 occurred on the summer feeding grounds (Fig 1). Of these, 96% (n=207) occurred within the same feeding area (Fig 1), confirming previous behavioural (10,11) and genetic (12, 13) observations of maternally directed fidelity to specific feeding grounds. The remaining 4% (n=9) of the recaptures on the summer feeding grounds were detected in different but adjacent feeding grounds (Fig 1). However, significantly more recaptures were detected within these sampling areas than would be expected if the areas constituted one intermixing feeding aggregation, which is consistent with the notion of maternally directed site fidelity (see Methods).
Of the 114 individuals recorded on both summer feeding and winter breeding grounds (Fig 1), two had migrated from the West Indies to either Jan Mayen or Bear Island (in the Barents Sea). These genetic recaptures represent the most extensive one-way movements (6,435 and 7,940 km, respectively) recorded in this study, and support recent findings (13, 14) that whales feeding in the Barents Sea share a common breeding ground with other North Atlantic humpback whales. In three other individuals, which were each sampled on three occasions, movements were documented from a feeding ground to the breeding range and back, involving minimum migration distances of up to 10,000 km between the first and last sampling event. No feeding ground was disproportionately represented among the recaptures in the West Indies (Fig 1), supporting the current view that humpback whales in the North Atlantic constitute a single panmictic population (13, 15, 16) (G-test, G = 4.68, P < 0.46).
As with traditional identification methods (1), microsatellite data lend themselves to abundance estimation using mark-recapture statistical methods (17), although to our knowledge this has not previously been attempted. Using breeding-ground samples collected during 1992 and 1993, we estimated the North Atlantic humpback whale population at 4,894 (95% confidence interval, 3,374-7,123) males and 2,804 (95% confidence interval, 1,776-4,463) females. This total of 7,698 whales is substantially (albeit not significantly) higher than the most recent photographically based estimate of 5,505 (ref. 10) (95% confidence interval, 2,888-8,122). Preliminary results from new and more reliable photographic estimates are also larger than previous estimates (T.S. et al., manuscript in preparation), which could partly be due to population growth during the intervening decade since the previous estimate (18). The significantly different estimates for males and females are unexpected given the even sex ratio observed on the feeding grounds (19) (Table 1) and among 198 calves that we sampled in the breeding range (data not shown). The estimates are independent of between-sex sampling biases, and so the observed deficit of females probably reflects within-sex behavioural differences, for example that individual females display a higher degree of preference with respect to region and/or residence time in the breeding range than do males.
Our results demonstrate that genetic tagging is effective even in a large population of wide-ranging and inaccessible mammals such as cetaceans. Further, the data obtained from genetic tags can be used to address evolutionary (20), demographic (19) and behavioural (21) questions to which traditional tagging methods are unsuited. Because all eukaryotes possess microsatellites (22), individuals within any taxon can in principle be identified reliably from minute quantities of tissue. Such tissue is commonly derived from biopsies (23), but can also come from sloughed skin (5), shed hair (24) or fecal material (25, 26), thus potentially allowing genotyping and individual recognition even of unobserved animals. However, the validity of a genetic tag depends on there being a sufficiently low probability of identity (27). An underestimate of this probability can be caused by unrecognized population substructure or linkage disequilibrium among loci. Additional analyses addressing these issues, as well as checks with other data (such as the sex of recaptures, as in this study), must be performed to ensure the validity of the overall probability of identity. Similarly, a genetic tag consists of data from multiple loci, each of which is prone to handling errors in the laboratory. With proper laboratory procedures, such errors can be rendered minimal (here estimated at 0.0011 per locus), and so do not represent a serious obstacle to the detection of recaptures. Recaptures with an erroneous genotype will almost certainly be among the samples that match at all but one locus. If a sufficient number of loci has been analysed, no or few samples are expected to match at all but one locus, and so re-analysis of the discrepant locus presents only a minor additional effort.
Methods
Expected number of samples with identical genotypes. The number of samples from different individuals with identical genotypes (across all loci) arising by chance was estimated from the probability of identity (I) (27). No difference in the estimate was observed whether I was estimated from all samples or from unique genotypes only. The expected number of samples from different individuals with identical genotypes arising by chance was first estimated separately within each sampling area, and subsequently between sampling areas (after removal of duplicate genotypes within each sampling area). The expected total number of such matches was estimated to be 0.32 and 0.27 within and between areas, respectively. No significant degree of linkage disequilibrium (which could cause an underestimate of I) and no significant deviations from the expected Hardy-Weinberg proportions of genotypes were observed after the removal of duplicate genotypes.
Expected number of matches between the Gulf of St Lawrence and Newfoundland/Labrador. The probability of observing six recaptures between years in the Gulf of St Lawrence was assessed by Monte Carlo simulations, under the assumption that the Gulf of St Lawrence and Newfoundland/Labrador constitute one intermixing feeding aggregation. Several tests, each of 1,000 simulations, were conducted over a range of the most likely abundance estimates derived from the data. Each simulation was conditioned on the number of recaptures observed in the sample and the order of sampling in the two areas. The probability of observing six or more individuals sampled more than once in the Gulf of St Lawrence (if part of the same feeding ground as Newfoundland/Labrador) was estimated at less than 0.0001.
Estimation of abundance and log-likelihood ratio test of equal numbers of males and females on the breeding range. Estimates of abundance and 95% confidence intervals were calculated as suggested previously (17). In 1992, 382 males and 231 females were sampled on the breeding range; the corresponding numbers for 1993 were 408 and 265. Between the two years we observed 31 and 21 recaptures of males and females, respectively. The statistical significance of the difference in the estimated number of males and females was assessed with a likelihood-ratio test of the null hypothesis that the number of males was equal to the number of females, assuming that the number of recaptures is hypergeometrically distributed. Asymptotically the test statistic (-2 ln(Λ), where Λ is the likelihood ratio) is chi-squared distributed with one degree of freedom, which implies that the probability of -2 ln(Λ) exceeding 4.14 is 0.042. Monte Carlo simulations (10,000 replicates) confirmed this probability.
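The Methods do not name the exact estimator used, but the Chapman-corrected Lincoln-Petersen estimator, a standard choice from Seber (17), reproduces the published figures from the capture counts above; the sketch below is offered as an illustration under that assumption, not as code from the study.

```python
def chapman_estimate(n1, n2, m):
    """Chapman-corrected Lincoln-Petersen estimate of population size,
    given n1 and n2 captures on two occasions and m recaptures."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Breeding-range sample counts reported in the Methods (1992 and 1993).
males = chapman_estimate(382, 408, 31)     # ~4894
females = chapman_estimate(231, 265, 21)   # ~2804
print(round(males), round(females), round(males + females))   # 4894 2804 7698
```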
Estimation of error rate. The error rate was estimated from the samples with identical genotypes at five but not six loci. From 2,368 genotypes just 64 such pairs (in all 117 unique genotypes) were detected, in 19 of which the individual from which the sample was collected had been identified by its natural markings (28). Of these 19 incidences, just 3 were from the same individual. Hence the 117 samples were estimated to include 9 samples with an incorrect genotype, which equals an error rate of 0.0011 per locus after inclusion of the 2 x 692 samples which match on all loci (presumably determined correctly). The upper limit of the 95% confidence interval was estimated at 0.0027 from Monte Carlo simulations, assuming a binomial distribution of errors and a hypergeometric distribution of photographed whales.
Received 6 March; accepted 16 May 1997
1. Hammond, P.S., Mizroch, S.A. & Donovan, G.P. Individual Recognition of Cetaceans: Use of Photo-identification and Other Techniques to Estimate Population Parameters (International Whaling Commission, Cambridge, 1990).
2. Tautz, D. Hypervariability of simple sequences as a general source for polymorphic DNA markers. Nucleic Acids Res. 17, 6463-6471 (1989).
3. Weber, J.L. & May, P.E. Abundant class of human DNA polymorphism which can be typed using the polymerase chain reaction. Am. J. Hum. Genet. 44, 388-396 (1989).
4. Palsboll, P.J., Larsen, F. & Sigurd Hansen, E. Sampling of skin biopsies from free-ranging large cetaceans in West Greenland: Development of new biopsy tips and bolt designs. Rep. Int. Whaling Commiss. Spec. Iss. 13, 71-79 (1991).
5. Clapham, P.J., Palsboll, P.J. & Mattila, D.K. High-energy behaviors in humpback whales as a source of sloughed skin for molecular analysis. Mar. Mamm. Sci. 9, 213-220 (1993).
6. Maniatis, T., Fritsch, E.F. & Sambrook, J. Molecular Cloning. A Laboratory Manual (Cold Spring Harbor Laboratory Press, New York, 1982).
7. Berube, M. & Palsboll, P.J. Identification of sex in Cetaceans by multiplexing with three ZFX and ZFY specific primers. Mol. Ecol. 5, 283-287 (1996).
8. Clapham, P.J. & Palsboll, P.J. Molecular analysis of paternity shows promiscuous mating in female humpback whales (Megaptera novaeangliae, Borowski). Proc. R. Soc. Lond. B 264, 95-98 (1997).
9. Palsboll, P.J., Berube, M., Larsen, A.H. & Jorgensen, H. Primers for the amplification of tri- and tetramer microsatellite loci in cetaceans. Mol. Ecol. (in the press).
10. Katona, S.K. & Beard, J.A. Population size, migrations, and feeding aggregations of the humpback whale, Megaptera novaeangliae, in the western North Atlantic Ocean. Rep. Int. Whaling Commiss. Spec. Iss. 12, 295-305 (1990).
11. Clapham, P.J. et al. Seasonal occurrence and annual return of humpback whales in the southern Gulf of Maine. Can. J. Zool. 71, 440-443 (1993).
12. Palsboll, P.J. et al. Distribution of mtDNA haplotypes in North Atlantic humpback whales: the influence of behaviour on population structure. Mar. Ecol. Prog. Ser. 116, 1-10 (1995).
13. Larsen, A.H., Sigurjonsson, J., Oien, N., Vikingsson, G. & Palsboll, P.J. Population genetic analysis of nuclear and mitochondrial loci in skin biopsies collected from central and northeastern North Atlantic humpback whales (Megaptera novaeangliae): population identity and migratory destinations. Proc. R. Soc. Lond. B 263, 1611-1618 (1996).
14. Stevick, P.T., Oien, N. & Mattila, D.K. Migration of a humpback whale (Megaptera novaeangliae) between Norway and the West Indies. Mar. Mamm. Sci. (in the press).
15. Clapham, P.J., Mattila, D.K. & Palsboll, P.J. High-latitude-area composition of humpback whale groups in Samana Bay: further evidence for panmixis in the North Atlantic population. Can. J. Zool. 71, 1065-1066 (1993).
16. Mattila, D.K., Clapham, P.J., Katona, S.K. & Stone, G.S. Population composition of humpback whales, Megaptera novaeangliae, on Silver Bank, 1984. Can J. Zool. 67, 281-285 (1989).
17. Seber, G.A.F. The Estimation of Animal Abundance and Related Parameters (Charles Griffin, London, 1982).
18. Barlow, J. & Clapham, P.J. A new birth-interval approach to estimating demographic parameters of humpback whales. Ecology 78, 535-546 (1997).
19. Clapham, P.J., Berube, M. & Mattila, D.K. Sex ratio of the Gulf of Maine humpback whale population. Mar. Mamm. Sci. 11, 227-231 (1995).
20. Roy, M.S., Geffen, E., Smith, D., Ostrander, E.A. & Wayne, R. K. Patterns of differentiation and hybridization in North American wolflike canids, revealed by analysis of microsatellite loci. Mol. Biol. Evol. 11, 553-570 (1994).
21. Amos, B., Schlotterer, C. & Tautz, D. Social structure of pilot whales revealed by analytical DNA profiling. Science 260, 670-672 (1993).
22. Tautz, D. & Renz, M. Simple sequences are ubiquitous repetitive components of eukaryote genomes. Nucleic Acids Res. 12, 4127-4138 (1984).
23. Lambertsen, R.H. A biopsy system for large whales and its use for cytogenetics. J. Mamm. 68, 443-445 (1987).
24. Morin, P.A. & Woodruff, D.S. in Paternity in Primates: Genetic Tests and Theories (eds Martin, R.D., Dixon, A.F. & Wickings, E.J.) 63-81 (Karger, Basel, 1992).
25. Constable, J.J., Packer, C., Collins, D.A. & Pusey, A.E. Nuclear DNA from primate dung. Nature 373, 393 (1995).
26. Gerloff, U. et al. Amplification of hypervariable simple sequence repeats (microsatellites) from excremental DNA of wild living bonobos (Pan paniscus). Mol. Ecol. 4, 515-518 (1995).
27. Paetkau, D. & Strobeck, C. Microsatellite analysis of genetic variation in black bear populations. Mol. Ecol. 3, 489-495 (1994).
28. Katona, S.K. & Whitehead, H.P. Identifying humpback whales using their natural markings. Polar Rec. 20, 439-444 (1981).
Acknowledgements. Most samples were collected during the international collaborative project Years of the North Atlantic Humpback Whale (YONAH). We thank T.H. Andersen, P. Arctander, C. Berchok, L. Bonnelly, M. Fredholm, J. Jensen, K.B. Pedersen, P. Raahauge, J. Robbins, O. Vasquez and E. Widen for their support and assistance. Funds were obtained from the Commission for Scientific Research in Greenland, the Greenland Home Rule, the EU Biotechnology Program, the Danish and Norwegian Research Councils, the US National Marine Fisheries Service, the US National Fish and Wildlife Foundation, the Department of Fisheries and Oceans, the International Whaling Commission, the US State Department, the Aage V. Jensen Charity Foundation, the Dorr Foundation, the American-Scandinavian Foundation, the Exxon Corporation, and Feodor Pitcairn.
Published in Nature Volume 388 21 August 1997
http://math.stackexchange.com/questions/168267/formal-expression-of-tildez-is-the-nearest-from-z
Formal expression of "$\tilde{z}$ is the nearest element to $z$"
$h = \operatorname{distance}(z, \tilde{z})$, where $\tilde{z}$ is the element nearest to $z$ (that is, $\operatorname{distance}(z, \tilde{z})$ is smaller than $\operatorname{distance}(z, z')$ for any other element $z'$).
Is it possible to express this formally, instead of saying "where $\tilde{z}$ is the element nearest to $z$ ..."?
1. If you just want an expression for $\tilde{z}$, you can use $\operatorname{argmin}$, like this: $h = \operatorname{distance}(z,\operatorname{argmin}_{\tilde{z} \neq z} \operatorname{distance}(z,\tilde{z}))$.
Formally, $\operatorname{argmin}_{x \in S} f(x)$ is defined as any value $x \in S$ such that $f(x)$ is minimal.
1. Note that your expression is identical to $$\min \{ \operatorname{distance}(z,\tilde{z}) | z \ne \tilde{z} \}.$$ This is probably the best solution.
Great. By the way, is it possible that $z \neq \tilde{z}$ is under argmin in latex (and not just an index) ? – shn Jul 8 '12 at 15:33
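For the LaTeX sub-question: yes. One standard approach (a sketch using amsmath's starred \operatorname*, which places its subscript beneath the operator in display style):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The starred form puts the constraint under argmin in display math:
\[
h = \operatorname{distance}\Bigl(z,\;
      \operatorname*{argmin}_{\tilde{z} \neq z}
      \operatorname{distance}(z, \tilde{z})\Bigr)
\]
\end{document}
```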
https://blender.stackexchange.com/questions/21792/how-do-you-make-bullets-do-damagein-the-bge
# How do you make bullets do damage in the BGE?
I cannot figure out how to make bullets (physical bullets, not rays) actually do damage to the enemy (in a first-person shooter). I have tried the collision sensor but I can't seem to make it work.
All I am trying to do is make it so that when the bullet hits the enemy, the enemy disappears (Edit Object: End Object actuator). The enemy has the physics type Character and the bullet has the physics type Static. I am using simple motion for the bullet.
What kind of properties does the bullet or enemy need, if any? What should the logic bricks look like?
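No accepted answer is shown here; as a hedged sketch of one common approach, give the bullet a game property (say bullet), put a Collision sensor on the enemy that filters on that property, and wire it through a Python controller to an Edit Object > End Object actuator. The brick names used below (Collision, End) are assumptions for illustration:

```python
# Runs on the enemy object via a Python controller (script mode) in the BGE.
# Assumed logic bricks: a Collision sensor named "Collision" set to detect
# the property "bullet", and an Edit Object > End Object actuator named "End".
from bge import logic

cont = logic.getCurrentController()
sensor = cont.sensors["Collision"]
actuator = cont.actuators["End"]

if sensor.positive:              # a bullet touched the enemy
    cont.activate(actuator)      # remove the enemy object
else:
    cont.deactivate(actuator)
```

Note that a Static bullet pushed along with simple motion can tunnel straight through a thin enemy at high speed; making the bullet Dynamic or Rigid Body, or moving it more slowly, tends to make the collision register more reliably (this depends on the particular setup).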
https://www.physicsforums.com/threads/inverse-function-theorem.136176/
Inverse Function Theorem
1. Oct 11, 2006
Haftred
I'm trying to see near which points of R^3 I can solve for theta, phi, and rho in terms of x, y, and z. I know I need to find the determinant and see when it equals zero; however, I get the determinant to equal zero when sin(phi) = 0, and also when tan(theta) = -cot(phi). The first is right, but I've checked my work many times and keep getting the second solution as well. I just calculated the determinant of the matrix of partial derivatives (dx/dtheta, dx/dphi, dx/drho; dy/dtheta, dy/dphi, dy/drho; dz/dtheta, dz/dphi, dz/drho). Am I correct, or am I doing something wrong?
2. Oct 11, 2006
StatusX
That doesn't look right. What is the equation you got for the determinant?
3. Oct 11, 2006
Haftred
p = rho
a = phi
b = theta
p^2[cos(a)sin(a)(cosb)^3 + (sina)^2(sinb)^3 + cos(b)cos(a)sin(a)(sinb)^2 + sin(b)(sina)^2(sinb)^2].
I differentiated with respect to rho in the first column, phi in the second column, and theta in the third.
Thanks.
4. Oct 11, 2006
StatusX
I get something different. All I can suggest is to go back through it carefully.
5. Oct 11, 2006
Haftred
Thanks StatusX for your time... I appreciate it.
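For reference beyond the original thread: with the usual physics convention the determinant comes out cleanly, which gives a target to check the algebra against. A sketch:

```latex
% Spherical coordinates (physics convention):
% x = \rho\sin\phi\cos\theta, y = \rho\sin\phi\sin\theta, z = \rho\cos\phi.
\[
\det\frac{\partial(x,y,z)}{\partial(\rho,\phi,\theta)} =
\begin{vmatrix}
\sin\phi\cos\theta & \rho\cos\phi\cos\theta & -\rho\sin\phi\sin\theta \\
\sin\phi\sin\theta & \rho\cos\phi\sin\theta & \rho\sin\phi\cos\theta \\
\cos\phi           & -\rho\sin\phi          & 0
\end{vmatrix}
= \rho^{2}\sin\phi .
\]
% The inverse function theorem therefore fails only where \rho^2\sin\phi = 0,
% i.e. at the origin and on the z-axis, so no tan/cot condition should appear.
```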
https://www.transtutors.com/questions/assume-that-the-asset-price-follows-geometric-brownian-motion-and-prove-this-relatio-7847173.htm
Assume that the asset price follows geometric Brownian motion, and prove this relationship using Itô's formula.
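The question does not reproduce the relationship to be proved; as a hedged illustration of the standard calculation, applying Itô's formula to $\ln S_t$ under geometric Brownian motion gives the usual log-price dynamics:

```latex
% Geometric Brownian motion: dS_t = \mu S_t\,dt + \sigma S_t\,dW_t.
% Ito's formula with f(S) = \ln S, f'(S) = 1/S, f''(S) = -1/S^2:
\[
d(\ln S_t)
  = \frac{dS_t}{S_t} - \frac{1}{2}\,\frac{(\sigma S_t)^2}{S_t^{2}}\,dt
  = \Bigl(\mu - \tfrac{1}{2}\sigma^{2}\Bigr)dt + \sigma\,dW_t,
\]
\[
\text{hence}\qquad
S_t = S_0 \exp\Bigl(\bigl(\mu - \tfrac{1}{2}\sigma^{2}\bigr)t + \sigma W_t\Bigr).
\]
```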
http://digitalfreepen.com/2013/04/03/to-log-or-to-not-log.html
Apr 3 2013
# To log or to not log
Sometimes, in digital signal processing, we work with the logarithm of a function. The reason why: the human ear can hear rustling leaves quite clearly. It can also, obviously, hear music at a rock concert. The latter is more than ten million times louder in the intensity of its sound waves. If we were to plot both sounds on a graph based on intensity, the leaves wouldn't even appear on the chart, but we would probably like them to. Thus, taking the logarithm represents sound on the decibel scale, which is easier to wrap our heads around.
Now say we take a signal and we want to downsample it by a factor of two, by taking the average of two neighbors at a time. Should we take the average before or after taking the logarithm? Well, let's first observe that if we average after taking the logarithm, then:

$Z = \frac{\log(x) + \log(y)}{2} = \frac{\log(xy)}{2} = \log(\sqrt{xy})$

which is the log of the geometric mean, in contrast to averaging before taking the logarithm, which gives the log of the arithmetic mean:

$Z = \log(\frac{x+y}{2})$
Intuitively, it seems to make more sense to take the arithmetic mean. This is because the log is only a final adjustment step that rescales data after we are done processing it.
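A minimal numerical sketch of the two options (our illustration, not the post's original code):

```python
import numpy as np

x, y = 1e-3, 1.0   # a quiet sample next to a loud one

log_then_avg = (np.log10(x) + np.log10(y)) / 2   # log of the geometric mean
avg_then_log = np.log10((x + y) / 2)             # log of the arithmetic mean

print(log_then_avg)   # -1.5    (geometric mean ~ 0.0316)
print(avg_then_log)   # ~-0.30  (arithmetic mean ~ 0.5005)
```

The geometric mean is dragged far down by the quiet sample, while the arithmetic mean is dominated by the loud one; which behavior is right depends on whether you are still processing intensities or already displaying decibels.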
What about analysis of the data? Here, the answer will most likely depend on the analysis we want to do. I was curious to see whether music onset detection and segmentation via self-similarity matrices would work better on the linear scale or the log scale. So I tried a few songs, and here's the result for one of them:
In both images, the first is analysis done on a log scale, the second is analysis done on a linear scale.
The first image shows the gradient of the spectrum of a song along the time axis. The relative brightness varies; that's not important, it's just because I had to take the log of the gradient to be able to see everything (same principle as with the leaves vs. the rock concert). What is important, however, is that we want a musical note to be well localized in time, ideally appearing as a straight vertical line in the gradient. The analysis on the log scale leads to longer, less broken-up lines that jiggle less, which is good.
As for the self-similarity matrix, well... it's pretty clear that analysis on the linear scale is a big mess and will produce many false positives.
So in this application, analysis on the log scale is better! My hunch is that taking the log reduces variation among data points, which leads to smoother results.
https://proxies123.com/tag/difficult/
## Encoding, it's difficult, can we fix it?
I still do not understand how to handle text encoding. We all know encoding is difficult, but no matter how hard I try, BE does not pick up the text correctly: UTF-8, ANSI (Windows-1252). Can you explain the correct way to make sure that BE can read the characters? What matters here: my text editor? the file encoding? how I paste the message into engine.ini? And what should I do to avoid corrupting the file's encoding (e.g. opening the file in the wrong editor)?
## dnd 3.5e – According to the Rules Compendium, can I charge through a square that would hinder my movement, as long as it is not difficult terrain?
According to the Rules Compendium, can I charge through a square that would hinder my movement, as long as it is not difficult terrain?
Under "Starting a Charge" (Rules Compendium, page 27), it says:
If you do not have line of sight to the opponent you want to charge at the start of your turn, you cannot charge that enemy. To charge, you must move at least 10 feet (2 squares), and you can move up to twice your speed. You must be able to reach the closest space from which you can attack the designated opponent. This movement must occur before your attack. If any line from your starting space to the ending space passes through a square that blocks movement, is difficult terrain, or contains a creature (other than a helpless one), you cannot charge. You cannot charge if the ending space is occupied or blocked. Because you move when you charge, you cannot also take a 5-foot step during the same turn. You provoke attacks of opportunity as normal for your movement.
It seems that squares that are not difficult terrain, and that merely hinder movement rather than block it, are legal to charge through.
This would include obstacles, poor visibility (any time you cannot see at least 60 feet under the prevailing visibility conditions) and squeezing.
Some spells also cause hindered movement that is not difficult terrain.
## Complexity theory: are $\mathsf{\#P}$ problems more difficult than $\mathsf{NP}$ problems?
As I suggested in the comments, that your reduction exists is not at all surprising. As in my answer to your previous question, your "expand and simplify" step takes potentially exponential time and therefore does not qualify as a polynomial-time reduction (the standard notion used to compare the classes in question). Exponential-time reductions are irrelevant here because both $\mathsf{NP}$ and $\mathsf{\#P}$ are contained in $\mathsf{EXP}$, so such a reduction is powerful enough to solve the problems it is supposed to be reducing.
To address the question in the title, note the following: let $P \in \mathsf{NP}$ be some problem, let $S_P$ be the set of solutions associated with $P$ (that is, the relation on $\{0,1\}^\ast \times \{0,1\}^\ast$ that associates each $x \in P$ with its polynomially verifiable solutions), and let $\mathsf{\#}S_P \colon \{0,1\}^\ast \to \mathbb{N}_0$ be the counting problem associated with $S_P$, that is, $\mathsf{\#}S_P(x) = |S_P(x)|$. For $P = \mathsf{SAT}$, for example, $S_P$ is simply the relation between formulas and their satisfying assignments, and $\mathsf{\#}S_P$ is the number of satisfying assignments. You can then phrase $P$ as the decision problem $\{x \mid \mathsf{\#}S_P(x) \ge 1\}$. This means that a procedure that counts the solutions (that is, computes $\mathsf{\#}S_P \in \mathsf{\#P}$) directly yields something that decides $P \in \mathsf{NP}$.
Not only that, but note that above we did not even use the value $\mathsf{\#}S_P(x)$ beyond testing that it is nonzero. We may be able to use that precise value to solve problems much harder than $\mathsf{NP}$ (for example, as Toda's theorem tells us; see David Richerby's answer). This gives us good reason to believe that problems in $\mathsf{\#P}$ are, in general, much harder than those in $\mathsf{NP}$ (the most conspicuous candidates being the $\mathsf{\#P}$-complete problems).
## dnd 5e – Is it okay to make an encounter too difficult for your players?
Read How can I make my PCs run away? I'll wait.
Welcome back.
The essential problem is that most modern players are trained from birth by video games and modern RPGs to expect that a) they will win, and b) if they do not win, they will respawn. You have to overcome this training to make what you want to happen actually happen. This is really hard; even saying "This is me, the DM, speaking out of game, telling you to RUN!" is sometimes not enough. People are really good at maintaining their beliefs and expectations even in the face of overwhelming evidence against them.
If you are confident you can convince your players to run or negotiate, then it does not matter what you throw at them.
Just be prepared: you may have to kill their characters several times to retrain them.
## How difficult is it to recover the encryption key, given the ciphertext and the plaintext? [migrated]
How feasible is brute-forcing the encryption key if you have:
• the plaintext message
• the encrypted version of the same message
• the knowledge that Rijndael is the encryption algorithm
• the knowledge that the salt is 2 bytes long
## magento2 – Is multi-vendor something difficult to do in Magento 2?
My client wants to build a multi-vendor store based on Magento 2. There is no shortage of extensions for this; perhaps the most famous is made by Webkul, and my client was ready to pay Webkul a lot of money for several of their extensions.
I wonder whether it is a difficult task to code this myself, because I need a REST API: the extensions do not provide one, and at some point in the future the plan is to build a mobile application.
So, if doing multi-vendor is not a big problem in Magento 2, I could at least do it the way I want. The fewer extensions, the better.
## Ringtone: How difficult can it be for Google to separate notification and ringtone volume on Android One?
I am tired of the ultra-loud notification tones and of using non-optimized third-party software to control something as basic as notification volume.
I think Google has been blessed with the most advanced technical people in the world, so programming a volume control mechanism can hardly be a problem.
Or are they deliberately trying to thwart Android users?
## Image size: thumbnails won't crop, no matter what
I am using WP Job Manager, and I would like user-uploaded logos to be resized and cropped to fit a fixed height and width. Currently, my settings are:
This is what I get:
But what I want is to resize the image to fit a fixed width and height of, say, 150×150, like this:
I tried unchecking the option "crop the thumbnail to the exact dimensions..." in the settings; the image is then resized proportionally (to something like 150×68, not 150×150) without cropping.
I tried adding the following to my theme's functions.php:
``````
// The fourth argument must be true for a hard (exact-dimension) crop;
// false only soft-crops proportionally, which produces results like 150x68.
add_image_size('150x150-crop', 150, 150, true);
add_theme_support('post-thumbnails');
set_post_thumbnail_size(150, 150, true);
``````
Nothing makes a difference. Any ideas?
## Graph theory: proving that shortest path with negative cycles is NP-hard
I'm looking at the shortest path problem and I'm wondering how to prove that the shortest path problem in graphs with negative cycles is NP-hard. (Or is it NP-complete? Is there a way to verify in polynomial time that a path really is shortest?)
How would you reduce the SAT problem to the shortest path problem in polynomial time?
https://hal.inria.fr/inria-00073896
# Optimal Time and Minimum Space-Time Product for Reversing a Certain Class of Programs
1 SAFIR - Algebraic Formal Systems for Industry and Research
CRISAM - Inria Sophia Antipolis - Méditerranée
Abstract: This report concerns time/space trade-offs for the reverse mode of automatic differentiation on the class of straight-line programs with nested loops. In the first part we consider the problem of reversing a finite sequence given by $u_{n+1}=f(u_n)$, which can model a certain class of finite loops. We show an optimal time strategy for this problem, the number of available registers being fixed, and a lower bound on the time-space product equal to $\frac{p (\ln p)^2}{(\ln 4)^2}$. We then present an optimal strategy on nested loops with the objective of taking care of the program structure. Finally we consider an application of this storage/recomputation strategy to computing, in reverse mode, the derivatives of a function represented as a Fortran program.
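As a hedged, much-simplified sketch of the kind of time/space trade-off the report studies (an illustration of generic checkpointing, not the report's optimal strategy), one can reverse the sequence $u_{n+1}=f(u_n)$ with far fewer than $n$ stored states by recomputing forward from saved checkpoints:

```python
def reverse_sequence(u0, f, n, num_checkpoints):
    """Yield u_n, u_{n-1}, ..., u_0 for the recurrence u_{k+1} = f(u_k),
    storing only O(num_checkpoints) states and recomputing forward
    within each segment (trading time for space)."""
    stride = max(1, n // num_checkpoints)
    checkpoints = {0: u0}
    u = u0
    for k in range(n):                    # one forward sweep, saving states
        u = f(u)
        if (k + 1) % stride == 0:
            checkpoints[k + 1] = u
    for k in range(n, -1, -1):            # backward sweep with recomputation
        base = max(c for c in checkpoints if c <= k)
        v = checkpoints[base]
        for _ in range(k - base):
            v = f(v)
        yield v

# Example: reverse u_{k+1} = 3*u_k + 1 over 10 steps using 3 checkpoints.
print(list(reverse_sequence(1, lambda u: 3 * u + 1, 10, 3)))
```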
### Identifiers
• HAL Id : inria-00073896, version 1
### Citation
José Grimm, Loïc Pottier, Nicole Rostaing-Schmidt. Optimal Time and Minimum Space-Time Product for Reversing a Certain Class of Programs. RR-2794, INRIA. 1996. ⟨inria-00073896⟩
https://deepai.org/publication/differentially-private-summation-with-multi-message-shuffling
# Differentially Private Summation with Multi-Message Shuffling
In recent work, Cheu et al. (Eurocrypt 2019) proposed a protocol for $n$-party real summation in the shuffle model of differential privacy with $O_{\epsilon,\delta}(1)$ error and $\Theta(\epsilon\sqrt{n})$ one-bit messages per party. In contrast, every local model protocol for real summation must incur error $\Omega(\sqrt{n})$, and there exist protocols matching this lower bound which require just one bit of communication per party. Whether this gap in the number of messages is necessary was left open by Cheu et al. In this note we show a protocol with $O_{\epsilon,\delta}(1)$ error and $O_{\epsilon,\delta}(\log n)$ messages of size $O(\log n)$. This protocol is based on the work of Ishai et al. (FOCS 2006) showing how to implement distributed summation from secure shuffling, and the observation that this allows simulating the Laplace mechanism in the shuffle model.
## 1 Preliminaries
#### The shuffle model.
The shuffle model of differential privacy [3, 2] considers a data collector that receives messages from $n$ users (possibly multiple messages from each user). The shuffle model assumes that a mechanism is in place to provide anonymity to each of the messages, i.e., in the curator's view, the messages have been shuffled by a random unknown permutation.
Following the notation in [2], we define a protocol in the shuffle model to be a pair of algorithms $\mathcal{P} = (R, A)$, where $R \colon \mathcal{X} \to \mathcal{Y}^m$ and $A \colon \mathcal{Y}^{nm} \to \mathcal{Z}$, for $n$ the number of users and $m$ the number of messages per user. We call $R$ the local randomizer, $\mathcal{Y}$ the message space of the protocol, $A$ the analyzer of $\mathcal{P}$, and $\mathcal{Z}$ the output space. The overall protocol implements a mechanism as follows. Each user $i$ holds a data record $x_i$, to which she applies the local randomizer to obtain a vector of messages $R(x_i) \in \mathcal{Y}^m$. The multiset union of all messages is then shuffled and submitted to the analyzer. We write $\mathcal{S}$ to denote the random shuffling step, where $\mathcal{S}$ is a shuffler that applies a random permutation to its inputs. In summary, the output of the protocol is given by $A(\mathcal{S}(R(x_1), \ldots, R(x_n)))$.
To prove privacy we will refer to the mechanism $\mathcal{M}_R = \mathcal{S} \circ R^n$, which captures the view of the analyzer in an execution of the protocol. Therefore we say that $\mathcal{P}$ is $(\epsilon, \delta)$-differentially private if for every pair of $n$-tuples of inputs $\vec{x}$ and $\vec{x}'$ differing in one coordinate, and every collection $T$ of multisets of $\mathcal{Y}$ of size $nm$, i.e. every possible subset of views of the analyzer, we have
$$\Pr[\mathcal{M}_R(\vec{x}) \in T] \le e^{\epsilon}\, \Pr[\mathcal{M}_R(\vec{x}') \in T] + \delta.$$
#### Real summation.
In this paper we are concerned with the problem of real summation, where each $x_i$ is a real number in $[0,1]$ and the goal of the protocol is for the analyzer to obtain a differentially private estimate of $\sum_{i=1}^n x_i$.
#### Randomized rounding.
Our proposed protocol uses a fixed-point encoding of a real number $x \in [0,1]$ with integer precision $p$ and randomized rounding, which we define as $\mathrm{fp}(x,p) = \lfloor xp \rfloor + \mathrm{Ber}(xp - \lfloor xp \rfloor)$.
###### Lemma 1.1.
For any $x_1, \ldots, x_n \in [0,1]$, $\mathrm{MSE}\left(\sum_{i=1}^n \mathrm{fp}(x_i,p)/p,\ \sum_{i=1}^n x_i\right) \le \frac{n}{4p^2}$.
###### Proof.
Let $\Delta_i$ be $\mathrm{fp}(x_i,p)/p - x_i$, and note that $\mathbb{E}[\Delta_i] = 0$ and $\mathbb{E}[\Delta_i^2] \le \frac{1}{4p^2}$, since $\Delta_i$ is a centered Bernoulli random variable scaled by $1/p$. Since the $\Delta_i$ are independent, it follows that
$$\mathrm{MSE}\left(\sum_{i=1}^n \mathrm{fp}(x_i,p)/p,\ \sum_{i=1}^n x_i\right) = \mathbb{E}\left[\left(\sum_{i=1}^n \Delta_i\right)^2\right] = \sum_{i=1}^n \mathbb{E}[\Delta_i^2] + \sum_{1 \le i < j \le n} 2\,\mathbb{E}[\Delta_i]\,\mathbb{E}[\Delta_j] \le \frac{n}{4p^2}. \qquad \square$$
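A quick empirical check of the encoding and the bound (our illustration; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def fp(x, p):
    """Randomized fixed-point encoding fp(x, p) = floor(x*p) + Ber(x*p - floor(x*p))."""
    scaled = x * p
    low = np.floor(scaled)
    return (low + (rng.random(x.shape) < scaled - low)).astype(int)

n, p, trials = 1000, 100, 2000
x = rng.random(n)
errors = [fp(x, p).sum() / p - x.sum() for _ in range(trials)]
print(np.mean(np.square(errors)))   # empirical MSE of the decoded sum
print(n / (4 * p**2))               # the bound from Lemma 1.1: 0.025
```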
#### Differential Privacy from Statistical Distance.
Our argument relies on statistical distance which, for consistency with [6], we define as the maximal advantage of a distinguisher in telling two distributions $D$ and $D'$ apart, namely $\mathrm{SD}(D, D') = \sup_T |\Pr[D \in T] - \Pr[D' \in T]|$. We will show that the view of the analyzer in our protocol is close in statistical distance to the output of a differentially private mechanism. The following lemma (also stated by Wang et al. [8]) says that this suffices to conclude that our protocol is differentially private.
###### Lemma 1.2.
Let $\mathcal{P} = (R, A)$ and $\mathcal{P}' = (R', A')$ be protocols such that $\mathrm{SD}(\mathcal{M}_R(\vec{x}), \mathcal{M}_{R'}(\vec{x})) \le 2^{-\sigma}$, for a security parameter $\sigma$ and all inputs $\vec{x}$. If $\mathcal{P}$ is $(\epsilon, \delta)$-DP, then $\mathcal{P}'$ is $(\epsilon, \delta + (1 + e^{\epsilon})2^{-\sigma})$-DP.
###### Proof.
For any neighboring inputs $\vec{x}, \vec{x}'$, $\mathcal{M}_R$ satisfies $\Pr[\mathcal{M}_R(\vec{x}) \in T] \le e^{\epsilon}\Pr[\mathcal{M}_R(\vec{x}') \in T] + \delta$, and $\mathcal{M}_{R'}$ satisfies $|\Pr[\mathcal{M}_{R'}(\vec{x}) \in T] - \Pr[\mathcal{M}_R(\vec{x}) \in T]| \le 2^{-\sigma}$, for any input $\vec{x}$ and event $T$. It follows that $\Pr[\mathcal{M}_{R'}(\vec{x}) \in T] \le \Pr[\mathcal{M}_R(\vec{x}) \in T] + 2^{-\sigma} \le e^{\epsilon}\Pr[\mathcal{M}_R(\vec{x}') \in T] + \delta + 2^{-\sigma} \le e^{\epsilon}\Pr[\mathcal{M}_{R'}(\vec{x}') \in T] + \delta + (1 + e^{\epsilon})2^{-\sigma}$. ∎
#### The Discrete Laplace
In this work we use a discrete version of the Laplace mechanism, which consists of adding a discrete random variable to the input. We refer to this distribution as the discrete Laplace distribution. The distribution is over $\mathbb{Z}$, we write it $\mathrm{DLap}(\alpha)$ for $\alpha \in (0,1)$, and it has probability mass function proportional to $\alpha^{|k|}$. Adding noise from this distribution to an integer-valued function with sensitivity $\Delta$ provides $(\epsilon, 0)$-differential privacy with $\epsilon = \Delta \ln(1/\alpha)$, analogously to the Laplace mechanism on $\mathbb{R}$. This distribution also appeared in [7], though under the name symmetric geometric.
## 2 Secure Distributed Summation
Ishai et al. [6] showed how to use anonymous communications as a building block for a variety of tasks, including securely computing $n$-party summation over $\mathbb{Z}_q$. This setting coincides with the shuffle model presented above, and hence the precise result by Ishai et al. can be restated as follows (we give a detailed proof of this Lemma in Section 5).
Let $\mathcal{P} = (R, A)$ be a shuffle model protocol, and let $f$ be a function. We say that $\mathcal{P}$ is $\sigma$-secure for computing $f$ if, for any inputs $\vec{x}, \vec{x}'$ such that $f(\vec{x}) = f(\vec{x}')$, we have
$$\mathrm{SD}(\mathcal{M}_R(\vec{x}), \mathcal{M}_R(\vec{x}')) \le 2^{-\sigma}.$$
###### Lemma 2.1 ([6]).
There exists a $\sigma$-secure protocol in the shuffle model for summation in $\mathbb{Z}_q$ with $O(\sigma + \log n + \log q)$ messages of $\lceil \log_2 q \rceil$ bits each per party.
The protocol by Ishai et al. is very simple. Let $x_i \in \mathbb{Z}_q$ be the input of the $i$th party. Party $i$ generates $m$ additive shares of $x_i$ (where $m$ can be reduced by almost a factor of two, as explained in Section 5.1), i.e., it generates $m - 1$ independent uniformly random elements of $\mathbb{Z}_q$ denoted $y_{i,1}, \ldots, y_{i,m-1}$ and then computes $y_{i,m} = x_i - \sum_{j=1}^{m-1} y_{i,j} \bmod q$. Party $i$ then submits each $y_{i,j}$ as a separate message to the shuffler. The shuffler then shuffles all $nm$ messages together and sends them on to the server, who adds up all the received messages modulo $q$ and finds the result $\sum_i x_i$ as required. This is $\sigma$-secure as stated in the lemma.
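A minimal Python sketch of this split-and-mix construction (the names are ours); note that the analyzer only ever sees the shuffled multiset of shares:

```python
import random

def share(x, m, q):
    """Additively share x in Z_q into m messages: m - 1 uniform elements
    plus a final element chosen so that all m shares sum to x mod q."""
    parts = [random.randrange(q) for _ in range(m - 1)]
    parts.append((x - sum(parts)) % q)
    return parts

def secure_sum(inputs, m, q):
    """Each party submits its m shares; the shuffler permutes the multiset
    of all n*m messages; the analyzer simply adds everything mod q."""
    messages = [s for x in inputs for s in share(x, m, q)]
    random.shuffle(messages)  # anonymity: the analyzer loses attribution
    return sum(messages) % q

assert secure_sum([3, 5, 7], m=4, q=101) == 15  # correctness sanity check
```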
Given that a communication-efficient protocol for secure exact integer summation in the shuffle model exists, we would now like to use it for private real summation. Intuitively, this task boils down to defining a local randomiser that takes a private value $x_i \in [0,1]$ and outputs a privatized value $\hat{x}_i$ in the discrete domain $\mathbb{Z}_q$, such that $\sum_i \hat{x}_i$ is differentially private and can be post-processed to a good approximation of $\sum_i x_i$.
A simple solution is to have a designated party add the noise required in the curator model. This is, however, not a satisfying solution, as it does not withstand collusions and/or dropouts. To address this, Shi et al. [7] proposed a solution where each party, with some probability, adds enough noise to provide $\epsilon$-differential privacy in the curator model on its own, which results in an $(\epsilon, \delta)$-differentially private protocol. However, one can do strictly better: the total noise can be reduced if each party adds a discrete random variable such that the sum of the contributions is exactly enough to provide $\epsilon$-differential privacy, and this also results in pure differential privacy. A discrete random variable with this property is provided in [4], where it is shown that a discrete Laplace random variable can be expressed as the sum of $n$ differences of two Pólya random variables (the Pólya distribution is a generalization of the negative binomial distribution). Concretely, if $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_n$ are independent $\mathrm{Pólya}(1/n, \alpha)$ random variables, then $\sum_{i=1}^n (X_i - Y_i)$ has a discrete Laplace distribution, i.e. $\sum_{i=1}^n (X_i - Y_i) \sim \mathrm{DLap}(\alpha)$. This allows us to distribute the Laplace mechanism, which is what we shall do in our protocol presented in the next section.
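The decomposition is easy to exercise numerically: numpy's negative binomial sampler accepts a real-valued shape parameter, which is exactly the Pólya distribution needed here. A sketch under our naming:

```python
import numpy as np

rng = np.random.default_rng(0)

def polya_noise_share(n, alpha):
    """One party's noise contribution: the difference of two independent
    Polya(1/n, alpha) draws, i.e. negative binomials with real-valued
    shape 1/n and failure probability alpha (success prob 1 - alpha)."""
    return (int(rng.negative_binomial(1.0 / n, 1.0 - alpha))
            - int(rng.negative_binomial(1.0 / n, 1.0 - alpha)))

# Summed over all n parties, the shares are distributed exactly as one
# DLap(alpha) variable, so no single party adds the full noise itself.
n, alpha = 100, np.exp(-0.1)
total_noise = sum(polya_noise_share(n, alpha) for _ in range(n))
```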
## 4 Private Summation
In this section we prove a lemma which says that given a secure integer summation protocol we can construct a differentially private real summation protocol. We then combine this lemma with Lemma 2.1 to derive a protocol, given explicitly, for differentially private real summation.
###### Lemma 4.1.
Given a $\sigma$-secure protocol in the shuffle model for $n$-party summation in $\mathbb{Z}_q$, for any $q$, with communication $c(q, \sigma)$ per party, there exists an
$(\epsilon, (1 + e^{\epsilon})2^{-\sigma})$-differentially private protocol in the shuffle model for real summation with standard error $O(1/\epsilon)$
and communication bounded by $c(O(\epsilon n^{3/2}), \sigma)$.
###### Proof.
Let $p$ be $\lceil \epsilon\sqrt{n} \rceil$. We will exhibit the resulting protocol $P' = (R', A')$, with $R' = R \circ R''$ and $A' = A'' \circ A$, with $R''$ and $A''$ defined as follows. $P = (R, A)$ executes with a modulus $q = O(np) = O(\epsilon n^{3/2})$, and thus has communication $c(O(\epsilon n^{3/2}), \sigma)$. $R''$ is the result of first computing a fixed-point encoding of the input with precision $p$, and then adding noise $X - Y$ with $X, Y$ independent $\mathrm{Pólya}(1/n, \alpha)$ variables and $\alpha = e^{-\epsilon/p}$. $A''$ decodes the output $y$ of $A$ by returning $y/p$ if $y \le np$, and $(y - q)/p$ otherwise. This addresses a potential underflow of the sum in $\mathbb{Z}_q$. To see that $P'$ has error $O(1/\epsilon)$, note that it has the accuracy of the discrete Laplace mechanism when adding integers, except when the total noise added has magnitude greater than $q - np$, in which case we may incur additional error; but this only happens with exponentially small probability. Hence, the error of this protocol is bounded by $O(1/\epsilon)$.
To show that this protocol is private we will compare the mechanism $\mathcal{M}_{R'}$ to another mechanism $\mathcal{M}'$ (which can be considered to be computed in the curator model) which is $\epsilon$-differentially private and such that $\mathrm{SD}(\mathcal{M}_{R'}(\vec{x}), \mathcal{M}'(\vec{x})) \le 2^{-\sigma}$ for all $\vec{x}$, from which the result follows by Lemma 1.2.
$\mathcal{M}'$ is defined to be the result of the following procedure. First apply $R''$ to each input $x_i$, then take the sum $s = \sum_i R''(x_i) \bmod q$, and then output the result of $\mathcal{M}_R$ with first input $s$ and all other inputs $0$.
Note that $s = \sum_i \mathrm{fp}(x_i, p) + Z \bmod q$ with $Z \sim \mathrm{DLap}(\alpha)$, and that the sensitivity of $\sum_i \mathrm{fp}(x_i, p)$ is $p$. It follows that $s$ is $\epsilon$-differentially private and thus, by the post-processing property, so is $\mathcal{M}'$.
It remains to show that $\mathrm{SD}(\mathcal{M}_{R'}(\vec{x}), \mathcal{M}'(\vec{x})) \le 2^{-\sigma}$, which we will do by demonstrating the existence of a coupling. First let the noise added to input $x_i$ by $R''$ be the same in both mechanisms, and note that this results in the inputs to $\mathcal{M}_R$ within $\mathcal{M}_{R'}$ and the inputs to $\mathcal{M}_R$ within $\mathcal{M}'$ having the same sum. It then follows immediately from Lemma 2.1 that these two instantiations of $\mathcal{M}_R$ can be coupled to have identical outputs except with probability $2^{-\sigma}$, as required. ∎
The choice $p = \lceil \epsilon\sqrt{n} \rceil$ was made so that the error in the discretization is of the same order as the error due to the added noise; this recovers the same order of error as in the curator model. Taking $p$ larger results in the leading term of the total error matching the curator model, at the cost of a small constant factor increase in communication.
[Algorithm floats: the local randomizer (algo:locrand) and the analyzer (algo:agg).]
Combining Lemmas 2.1 and 4.1 we can conclude the following theorem.
###### Theorem 4.1.
There exists an $(\epsilon, (1 + e^{\epsilon})2^{-\sigma})$-differentially private protocol in the shuffle model for real summation with error $O(1/\epsilon)$ and $O(\sigma + \log n)$ messages per party, each of length $O(\log n)$ bits.
Such a protocol can be constructed from the proofs of these lemmas and is given explicitly by taking the local randomizer (algo:locrand) and the analyzer (algo:agg) given in the algorithm floats above, with parameters $p = \lceil \epsilon\sqrt{n} \rceil$, $q = O(np)$, and $\alpha = e^{-\epsilon/p}$. This results in a mean squared error of

$$\frac{2\alpha}{(1-\alpha)^2} + \frac{n}{4p^2}$$

and communication of $O((\sigma + \log n + \log q)\log q)$ bits per party. In Section 5.1 we explain how the number of messages, and thus the required communication, can actually be reduced by almost a factor of two.
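Putting the illustrative pieces together (fp, polya_noise_share and secure_sum are the sketches defined earlier in this note; the message count m and the modulus headroom are arbitrary illustrative choices, and the decode step mirrors the underflow handling in the proof of Lemma 4.1):

```python
import math, random

n, eps = 1000, 1.0
p = math.ceil(eps * math.sqrt(n))   # fixed-point precision
q = 2 * n * p                       # modulus with headroom for the noise
alpha = math.exp(-eps / p)          # noise parameter for sensitivity p

xs = [random.random() for _ in range(n)]
encoded = [(fp(x, p) + polya_noise_share(n, alpha)) % q for x in xs]

y = secure_sum(encoded, m=40, q=q)
y = y if y <= n * p else y - q      # undo a potential mod-q underflow
print(abs(y / p - sum(xs)))         # error is typically O(1/eps)
```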
## 5 Summation by Anonymity
In this section we provide a proof of Lemma 2.1; all the ideas for the proof are provided in [6], but we reproduce the proof here keeping track of constants, to facilitate setting the parameters of the protocol. The following definition and lemma from [5] are fundamental to why this protocol is secure.

Let $H$ be a family of functions mapping a domain $D$ to $\{0,1\}^l$. We say $H$ is universal, or a universal family of hash functions, if for $h$ selected uniformly at random from $H$ and every $x \ne y$ in $D$,

$$\Pr[h(x) = h(y)] = 2^{-l}.$$
###### Lemma 5.1 (Leftover Hash Lemma (special case)).
Let $l, s \ge 0$ and let $H$ be a universal family of hash functions mapping a domain $D$, with $\log_2 |D| \ge l + 2s$, to $l$-bit strings. Let $h$, $d$ and $U$ be chosen independently and uniformly at random from $H$, $D$ and $\{0,1\}^l$ respectively. Then

$$\mathrm{SD}\big((h, h(d)),\ (h, U)\big) \le 2^{-s}.$$
To begin with we consider the case of securely adding two uniformly random inputs $X, Y \in \mathbb{Z}_q$. Recall that $P = (R, A)$ is the protocol of the statement of the lemma, and let $V(x, y)$ be shorthand for $\mathcal{M}_R(x, y)$, i.e. the view of the analyzer in an execution of protocol $P$ with inputs $x$ and $y$. We write $V$ for $V(X, Y)$ and $(V, X)$ for the joint distribution of the view together with the first input. Finally let $U$ be an independent uniformly random element of $\mathbb{Z}_q$.
###### Lemma 5.2.
Suppose each party sends $k$ messages. Then, $\mathrm{SD}\big((V, X), (V, U)\big) \le 2^{(\lceil \log_2 q \rceil - \log_2 \binom{2k}{k})/2}$.
###### Proof.
For $v \in \mathbb{Z}_q^{2k}$ and $w \in \{0,1\}^{2k}$ of Hamming weight $k$, let $h_v(w) = \sum_{i : w_i = 1} v_i \bmod q$. The functions $h_v$ are a universal family of hash functions from the set $D$ of weight-$k$ strings, with $|D| = \binom{2k}{k}$, to $\mathbb{Z}_q$. Let $U$ be an independent uniformly random element of $\mathbb{Z}_q$. Note that $(V, X)$ has the same distribution as $(h, h(d))$, which follows from the intuition that $V$ corresponds to $2k$ uniformly random numbers shuffled together, and $X$ can be obtained by adding up $k$ of them, and letting $Y$ be the sum of the rest.

The result now follows immediately from the fact that the Leftover Hash Lemma implies that $\mathrm{SD}\big((h, h(d)), (h, U)\big) \le 2^{(\lceil \log_2 q \rceil - \log_2 \binom{2k}{k})/2}$. ∎
Now we can use this to solve the case of two arbitrary inputs.
###### Lemma 5.3.
If $x, y, x', y' \in \mathbb{Z}_q$ satisfy $x + y = x' + y'$, then we have

$$\mathrm{SD}\big(V(x, y), V(x', y')\big) \le 2q^2\, \mathrm{SD}\big((V, X), (V, U)\big).$$
###### Proof.
Here $V(x)$ denotes the view $V(x, Y)$ with $Y$ uniform. Markov’s inequality provides that

$$\mathrm{SD}(V(x), V) \le q\, \mathrm{SD}\big((V, X), (V, U)\big) \quad \forall x \in \mathbb{Z}_q$$

and thus by the triangle inequality

$$\mathrm{SD}\big(V(x), V(x')\big) \le 2q\, \mathrm{SD}\big((V, X), (V, U)\big).$$

Note that

$$\mathrm{SD}\big(V(x), V(x')\big) = \sum_{t \in \mathbb{Z}_q} \mathrm{SD}\big(V(x)|_{Y = t - x},\ V(x')|_{Y = t - x'}\big)/q = \frac{1}{q} \sum_{y \in \mathbb{Z}_q} \mathrm{SD}\big(V(x, y),\ V(x', y + x - x')\big)$$

and so for every $y$ and $y' = y + x - x'$ we have

$$\mathrm{SD}\big(V(x, y), V(x', y')\big) \le q\, \mathrm{SD}\big(V(x), V(x')\big).$$

Combining the last two inequalities gives the result. ∎
Combining these two lemmas gives that, for $x, y, x', y'$ such that $x + y = x' + y'$,

$$\mathrm{SD}\big(V(x, y), V(x', y')\big) \le 2q^2\, 2^{(\lceil \log_2 q \rceil - \log_2 \binom{2k}{k})/2} \le 2^{-k/2 + 1 + 5\lceil \log_2 q \rceil / 2}, \tag{1}$$

where the last step uses the crude bound $\binom{2k}{k} \ge 2^k$.
From which the following lemma is immediate
###### Lemma 5.4.
If $x + y = x' + y'$ and $k$ is such that $k \ge 2(\sigma + 1) + 5\lceil \log_2 q \rceil$ then

$$\mathrm{SD}\big(V(x, y), V(x', y')\big) \le 2^{-\sigma}.$$
We will now generalize to the case of $n$-party summation.
###### Proof of Lemma 2.1.
Let $\vec{x}, \vec{x}'$ be two distinct possible inputs to the protocol; we say that they are related by a basic step if they have the same sum and only differ in two entries. It is evident that any two distinct inputs with the same sum are related by at most $n - 1$ basic steps. We will show that if the security parameter of Lemma 5.4 is taken to be $\sigma + \log_2(n - 1)$ and $\vec{x}$ and $\vec{x}'$ are related by a basic step then

$$\mathrm{SD}\big(V(\vec{x}), V(\vec{x}')\big) \le \frac{2^{-\sigma}}{n - 1} \tag{2}$$

from which the lemma follows by the triangle inequality for statistical distance.

Let $\vec{x}$ and $\vec{x}'$ be related by a basic step and suppose w.l.o.g. that $\vec{x}$ and $\vec{x}'$ differ in the first two coordinates. Taking $\sigma' = \sigma + \log_2(n - 1)$, by Lemma 5.4 we can couple the values sent by the first two parties on input $(x_1, x_2)$ with the values they send on input $(x'_1, x'_2)$ so that they match with probability $1 - 2^{-\sigma}/(n - 1)$. Independently of that, we can couple the messages of the other parties so that they always match, as they each have the same input in both cases. This gives a coupling exhibiting that equation (2) holds. ∎
###### Remark 5.1.
It may seem counterintuitive to require more messages the more parties there are (for fixed $q$ and $\sigma$). The addition of the $\log_2(n - 1)$ term to $\sigma$ is necessary for the proof of Lemma 2.1. This is because we are trying to stop the adversary from learning a greater variety of things when we have more parties. However, it may be the case that Theorem 4.1 could follow from a weaker guarantee than the one provided by Lemma 2.1, and such a property might hold without the presence of this term.
It is an open problem to prove a lower bound greater than two on the number of messages required to get error $O(1/\epsilon)$ on real summation. A proof that one message is not enough is given in [1].
### 5.1 Improving the Constants
The constants implied by this proof can be improved by using a sharper bound for $\binom{2k}{k}$ in inequality (1). Using the bound $\binom{2k}{k} \ge 2^{2k}/\sqrt{\pi(k + 1/2)}$ gives that taking $k$ to be the ceiling of the root of

$$k = 1 + \sigma + \frac{5\lceil \log_2 q \rceil}{2} + \frac{1}{4}\log_2\big(\pi(k + \tfrac{1}{2})\big)$$

suffices in the statement of Lemma 5.4. The resulting value of $k$ is

$$\frac{5}{2}\log_2 q + \sigma + \frac{1}{4}\log_2(\log_2 q + \sigma) + O(1).$$

Adding $\log_2(n - 1)$ to the root before taking the ceiling gives a value of $k$ for which Lemma 2.1 holds.
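Since the log term is so flat, the root can be found by a few rounds of fixed-point iteration. A small solver sketch (our code, with the $\log_2(n-1)$ adjustment for the $n$-party case as described above):

```python
import math

def num_messages(q, sigma, n=2):
    """Iterate k = 1 + sigma' + 5*ceil(log2 q)/2 + log2(pi*(k + 1/2))/4,
    where sigma' = sigma + log2(n - 1) for the n-party case, then take
    the ceiling; the iteration converges in a handful of steps."""
    sigma_eff = sigma + math.log2(n - 1)
    k = 1.0 + sigma_eff + 2.5 * math.ceil(math.log2(q))
    for _ in range(10):
        k = (1.0 + sigma_eff + 2.5 * math.ceil(math.log2(q))
             + 0.25 * math.log2(math.pi * (k + 0.5)))
    return math.ceil(k)

print(num_messages(q=2**32, sigma=40, n=1000))  # messages per party
```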
## References
• [1] Borja Balle, James Bell, Adrià Gascón, and Kobbi Nissim. The privacy blanket of the shuffle model. CoRR, abs/1903.02837, 2019.
• [2] Albert Cheu, Adam D. Smith, Jonathan Ullman, David Zeber, and Maxim Zhilyaev. Distributed differential privacy via shuffling. In Advances in Cryptology - EUROCRYPT 2019, 2019.
• [3] Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta. Amplification by shuffling: From local to central differential privacy via anonymity. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2468–2479. SIAM, 2019.
• [4] S. Goryczka and L. Xiong. A comprehensive comparison of multiparty secure additions with differential privacy. IEEE Transactions on Dependable and Secure Computing, 14(5):463–477, Sep. 2017.
• [5] Russell Impagliazzo and David Zuckerman. How to recycle random bits. Proc. 30th FOCS, 1989.
• [6] Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Cryptography from anonymity. In FOCS, pages 239–248. IEEE Computer Society, 2006.
• [7] Elaine Shi, Richard Chow, T.-H. Hubert Chan, Dawn Song, and Eleanor Rieffel. Privacy-preserving aggregation of time-series data. In NDSS, 2011.
• [8] Yu-Xiang Wang, Stephen E. Fienberg, and Alexander J. Smola. Privacy for free: Posterior sampling and stochastic gradient monte carlo. In ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 2493–2502. JMLR.org, 2015.
https://tug.org/pipermail/tex-k/2005-April/001273.html
# [tex-k] Bug in TeX 3.141592
Heiko Oberdiek oberdiek at uni-freiburg.de
Fri Apr 1 20:20:05 CEST 2005
On Fri, Apr 01, 2005 at 06:08:21PM +0100, Sam Lauber wrote:
> I think I have found a bug in TeX. According to ``TeX: The Program'', the
> construction ``\input\input f'' is explicitly prohibited (this is due to the
> implementation of the functions that do \input). But I put in ``file1''
>
> file2
>
> put in ``file2''
>
> \message{In file2}
>
> and ran ``tex \input\input file1''. It did not print ``In file2'', and TeX did not
> give an error. It also had the strange effect of thinking my input file was
> called ``.tex''!! This is definitely a bug, because it is not what ``TeX: The
> Program'' says. Also if I try typing ``\input\input file1'' in response to the
> `*' prompt, it does the same thing, except reverts to ``texput''.
My interpretation is that the second \input finishes the file
name of the first input file. (The same would occur with
\input\relax.) This file does not contain an extension. Therefore
TeX adds ".tex" and looks for the file ".tex" and read it (e.g.
texmf/tex/latex/tools/.tex).
Then the second file is input (f.tex, file1.tex or file2.tex --
there are a lot of spellings above).
Yours sincerely
Heiko <oberdiek at uni-freiburg.de>
https://www.mersenneforum.org/showpost.php?s=c9067cc631b69b752ec5123d51ee30de&p=77668&postcount=2
R.D. Silverman wrote on 2006-04-13:
Quote:
Originally Posted by bearnol I've just posted the following assertion over at M+2: http://groups.google.com/group/Merse...9949e8e2560b6b Anybody feel like helping me test it? J
The time to compute 2^n + 1 mod p will be proportional to n (log p)^2
via naive multiplication methods and n (log p * log log p * log log log p) via
FFT techniques. We only test numbers that are 1 mod 2n. From 2^x to
2^(x+1) there are 2^x/(2n) = 2^(x-1)/n such numbers. Each takes
the amount of time given above. Now multiply.
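Carrying out that multiplication explicitly (our rendering of the argument), the candidate count and the total naive-multiplication work between consecutive powers of two are

\[ \#\{\, q \in [2^x, 2^{x+1}) : q \equiv 1 \ (\mathrm{mod}\ 2n) \,\} = \frac{2^{x-1}}{n}, \qquad T_{\mathrm{naive}} \approx \frac{2^{x-1}}{n} \cdot n(\log 2^x)^2 = 2^{x-1} x^2 (\log 2)^2 , \]

so the factor of n cancels and the total work in a dyadic range is essentially independent of n.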
I do not know where you get the exponent "13".
https://gmatclub.com/forum/if-x-3-which-of-the-following-must-be-true-138652-20.html?kudos=1
If |x|>3, which of the following must be true?
Zarrolou (VP) wrote on 13 May 2013:
Archit143 wrote:
I too have the same doubt... can anyone address the query?
Archit

The question asks: is $$x>3$$ or $$x<-3$$?
III tells us that $$x>3$$ or $$x<-1$$. So is $$x>3$$ or $$x<-3$$? YES.
$$x>3$$ from the question => $$x>3$$ from III: correct.
$$x<-3$$ from the question => $$x<-1$$ from III: correct as well. We are asked about $$x<-3$$, and III claims $$x<-1$$; any value less than -3 is certainly less than -1 as well.
Hope it's clear now!
VeritasPrepKarishma (Veritas Prep GMAT Instructor) wrote on 13 May 2013:
danzig wrote:
If $$|x| > 3$$, which of the following must be true?
I. $$x > 3$$
II. $$x^2 > 9$$
III. $$|x - 1| > 2$$
A. I only
B. II only
C. I and II only
D. II and III only
E. I, II, and III
I don't understand III well. $$|x - 1| > 2$$ is equivalent to $$x > 3$$ or $$x < -1$$. The last inequality ($$x < -1$$) includes the integers -2 and -3, which are not included in the original range ($$x < -3$$). How could III be true?
If some numbers confuse you, don't fixate on them. Go ahead and take some other, easier examples.
Let's keep the wording of the question the same but make it simpler.
If n < 6, which of the following must be true?
I. …
II. …
III. n < 8
Can we say that III must be true? Yes!
If n is less than 6 then obviously it is less than 8 too.
If n is less than 6, it will take values such as -20, 2, 5 etc. All of these values will be less than 8 too.
Values 6 and 7 are immaterial because n cannot take these values. You are given that n is less than 6 so you only need to worry about values that n CAN take. Those should satisfy n < 8.
Similarly, your question says that x > 3 or x < -3
Then we can say that x > 3 or x < -1. All values that will be less than -3 will be less than -1 too.
Check out my post on a similar tricky question : http://www.veritasprep.com/blog/2012/07 ... -and-sets/
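One quick way to convince yourself of this pattern is to sample many values satisfying the premise and test each statement. A small Python spot-check (the sampling range is an arbitrary illustrative choice):

```python
import random

# Draw magnitudes above 3 with a random sign, so every sample has |x| > 3.
samples = [random.choice([-1, 1]) * (3.000001 + 100 * random.random())
           for _ in range(100_000)]

print(all(x > 3 for x in samples))           # I:   False (negative x exist)
print(all(x * x > 9 for x in samples))       # II:  True
print(all(abs(x - 1) > 2 for x in samples))  # III: True
```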
rusth1 (Intern) wrote on 13 Aug 2013:
What if x = -2? It is < -1 but > -3, so shouldn't III be out?

Senior Manager wrote on 13 Aug 2013:
|x| > 3 means x > 3 or x < -3, so I is false because x < -3 is also possible. II, x^2 > 9, is satisfied by both ranges (try x = -4 or x = 5). III is also true for x = -4 and x = 5. So D.

Math Expert wrote on 13 Aug 2013:
x cannot be -2, because we are told that |x| > 3, and |-2| = 2 < 3. Hope it helps.

aj0809 (Intern) wrote on 23 Nov 2014:
Hi all, after going through the explanations I could understand why option B was not correct. But I am sure that under timed conditions I might make a similar mistake. Does anyone have a way to solve such problems, so that a mistake can be avoided and an important case like the 3rd option is considered while evaluating answer choices? Thanks, AK

WoundedTiger (Director) wrote on 23 Nov 2014:
Hi AK, why would you leave an option out? That's a big no. In this question statement II is clearly true, so you can eliminate the answer options that don't include it and see how that reduces your workload. You may also want to refresh some basics on how to approach such questions; this is why we practice and see where we go wrong. For instance, this is an important pointer never to overlook an option. Check out this link: math-absolute-value-modulus-86462.html

aj0809 (Intern) wrote on 23 Nov 2014:
Thanks WoundedTiger. Maybe I didn't present my question correctly. I didn't leave the 3rd option out; I came to the conclusion that it was wrong and chose my answer as B. I just want to prevent that, in timed conditions, for difficult questions like these, which have subtle differences that make an answer choice right.

WoundedTiger (Director) wrote on 23 Nov 2014:
Hmm, did you solve the 3rd option correctly, or did you make a mistake? Identify the step where you made the mistake. Consider keeping an error log; that will certainly help.

Intern wrote on 22 May 2015:
III, |x - 1| > 2, implies that the distance of x from 1 must be greater than 2, so x is either greater than 3 or less than -1. For every value that x can take (given |x| > 3), x will indeed be either greater than 3 or less than -1. VeritasPrepKarishma, thank you for this! I was doing 'must be true' questions wrong!

VeritasPrepKarishma (Veritas Prep GMAT Instructor) wrote on 24 Nov 2016, responding to a pm:
Quote: I couldn't understand the solution for option B. Since |x| > 3 we can say that |x| - 1 > 2 (subtracting 1 from both sides). But how are we saying that |x| - 1 is equal to |x - 1|?
They are not the same: |x - 1| > 2 means x > 3 or x < -1, while |x| - 1 > 2 means |x| > 3, i.e. x > 3 or x < -3. But note what is given and what is asked. We are GIVEN that |x| > 3, so we KNOW that x is either greater than 3 or less than -3; valid values for x are 3.4, 4, 101, 2398675, -3.6, -5, -78, and so on. Now the question is: "Is |x - 1| > 2?", i.e. "Is x always either greater than 3 or less than -1?" All positive values of x are given to be greater than 3. All negative values of x are given to be less than -3, so obviously they are less than -1 too. Hence |x - 1| > 2 must be true. Helps?
Manager wrote on 17 May 2017:
It is given that x < -3 or x > 3, so x cannot be -2, but x can be -4. The doubt about choice III concerns the x < -1 part. Now tell me: is -4 < -1 or is -4 > -1? Obviously -4 < -1. Basically, if x is less than -3 (given in the premise), then x is automatically less than -1.
Verbal Forum Moderator wrote on 24 Jun 2017:
This is an interesting question; let's try to solve it.
$$|x| > 3$$ means $$x > 3$$ or $$x < -3$$.
From this we know that the value of x never lies between -3 and 3.
I. $$x > 3$$ is not always true: take x = 4 (true) and x = -4 (false). Hence I can be FALSE.
II. $$x^2 > 9$$: squaring any x with $$x > 3$$ or $$x < -3$$ gives a value greater than 9. Hence II is always TRUE.
III. $$|x - 1| > 2$$. This is the tricky one. It unpacks as $$x - 1 < -2$$ or $$x - 1 > 2$$, i.e. $$x < -1$$ or $$x > 3$$. Given that x satisfies $$x < -3$$ or $$x > 3$$, any allowed value (x = 4, x = -4, and so on) is always either greater than 3 or less than -1. Hence III is always TRUE.
Director wrote on 25 Jun 2017:
corvinis wrote:
If |x| > 3, which of the following must be true?
I. x > 3
II. x^2 > 9
III. |x - 1| > 2
A. I only
B. II only
C. I and II only
D. II and III only
E. I, II, and III

Given $$|x| > 3$$, we have $$x > 3$$ or $$x < -3$$.
So x ranges over values above 3 (integers 4 and above) and values below -3 (integers -4 and below).
Checking the options:
I. $$x > 3$$: not always true (x could be less than -3).
II. $$x^2 > 9$$: must be true. Let x be 4 or -4; $$x^2$$ is 16 in both cases, which is greater than 9.
III. $$|x - 1| > 2$$ gives $$x - 1 > 2 ==> x > 3$$, or $$-x + 1 > 2 ==> x < -1$$; all negative values of x are given to be less than -3, which is in turn less than -1. Hence III must be true.
Manager wrote on 29 Jul 2017:
VeritasPrepKarishma wrote:
[explanation quoted in full above]

So if there were a 4th statement saying "x is an integer" or "x is a real number", would that one have to be true as well?
ssislam (Manager) wrote on 06 Aug 2017:
VeritasPrepKarishma wrote:
[explanation quoted in full above]

Hi, a very beautiful explanation, I must say. Anyway, I quote you: "Note that we are not saying that every value less than -1 must be valid for x. We are saying that every value that is valid for x (found by using |x| > 3) will be either greater than 3 or less than -1. Hence |x-1|>2 must be true for every value that x can take."
For option III, |x - 1| > 2, we got "x > 3 or x < -1", while the question stem gives "x > 3 or x < -3". So, per the question stem x could be, say, -4, while option III on its own would even allow x = -2; the two sets of values do not coincide, but the range given by option III covers the range given by the question stem, because -4 (and every valid x below -3) is certainly less than -1. Please correct me if I am missing something you meant.
ssislam (Manager) wrote on 07 Aug 2017:
VeritasPrepKarishma: hey, according to the question stem x can never equal -2, as it must be less than -3; I got that. I meant that according to option 3 on its own, x could equal -2. It doesn't matter, however, as -2 is less than -1 and so is everything below -3, and hence option 3 always holds true. Thanks, I got it.
VeritasPrepKarishma (Veritas Prep GMAT Instructor) wrote on 08 Aug 2017:
The point is: what is given and what is asked?
You are GIVEN that x is less than -3 (the question stem gives you that).
You are ASKED whether x will always be less than -1 too (this is point III; you need to establish this, you are not given it).
So x cannot be -2.
ssislam (Manager) wrote on 08 Aug 2017:
Hi ma'am, actually -2 is not pertinent here; I used that number just in passing, nothing else. Since x is given to be less than -3, any such value is obviously less than -1 too, no doubt. Anyway, thanks for your valued replies, as always.
Senior SC Moderator wrote on 08 Aug 2017:
Possible values of x: ..., -6, -5, -4, 4, 5, 6, ...
When x = -4: |-4 - 1| = 5 > 2.
When x = 4: |4 - 1| = 3 > 2.
III must be TRUE.
https://www.ias.ac.in/listing/bibliography/pmsc/Abdul_Aziz
• Abdul Aziz
Articles written in Proceedings – Mathematical Sciences
• On the zeros of polynomials
In this paper we extend a classical result due to Cauchy and its improvement due to Datt and Govil to a class of lacunary type polynomials.
• Inequalities for the derivative of a polynomial
Let P(z) be a polynomial of degree n which does not vanish in |z| < k, where k > 0. For k ≤ 1, it is known that $$\max_{|z|=1} |P'(z)| \le \frac{n}{1+k^n}\max_{|z|=1} |P(z)|,$$ provided |P'(z)| and |Q'(z)| attain their maxima at the same point on |z| = 1, where $$Q(z) = z^n \overline{P(1/\bar{z})}$$. In this paper we obtain certain refinements of this result. We also present a refinement of a generalization of the theorem of Turán.
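As a numeric illustration (ours, not from the paper): for k = 1 the bound reads max |P'| ≤ (n/2) max |P| on |z| = 1, and it is attained by P(z) = (z + 1)^n, which has no zero inside the open unit disk. A quick check in Python:

```python
import numpy as np
from numpy.polynomial import polynomial as P

n = 7
coeffs = P.polyfromroots([-1.0] * n)             # coefficients of (z + 1)^n
z = np.exp(2j * np.pi * np.arange(4096) / 4096)  # grid on the unit circle
max_p = np.abs(P.polyval(z, coeffs)).max()
max_dp = np.abs(P.polyval(z, P.polyder(coeffs))).max()
print(max_dp, n / 2 * max_p)                     # both equal n * 2**(n-1) = 448
```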
• On self-reciprocal polynomials
In this paper we establish a sharp result concerning integral mean estimates for self-reciprocal polynomials.
• Some inequalities for the polar derivative of a polynomial
Let P(z) be a polynomial of degree n which does not vanish in |z| < 1. In this paper, we estimate the maximum and minimum moduli of the k-th polar derivative of P(z) on |z| = 1 and thereby obtain compact generalizations of some known results, which among other results yield interesting refinements of the Erdős-Lax theorem and a theorem of Ankeny and Rivlin.
• Lp inequalities for polynomials with restricted zeros
Let P(z) be a polynomial of degree n which does not vanish in the disk |z| < k. It has been proved that for each p > 0 and k ≥ 1,
$$\left\{\frac{1}{2\pi}\int_0^{2\pi} \left|P^{(s)}(e^{i\theta})\right|^p d\theta\right\}^{1/p} \le n(n-1)\cdots(n-s+1)\, B_p \left\{\frac{1}{2\pi}\int_0^{2\pi} \left|P(e^{i\theta})\right|^p d\theta\right\}^{1/p},$$
where $$B_p = \left\{\frac{1}{2\pi}\int_0^{2\pi} \left|k^s + e^{i\alpha}\right|^p d\alpha\right\}^{-1/p}$$ and P^{(s)}(z) is the s-th derivative of P(z). This result generalizes a well-known inequality due to De Bruijn. As p → ∞, it gives an inequality due to Govil and Rahman which as a special case gives a result conjectured by Erdős and first proved by Lax.
• New integral mean estimates for polynomials
In this paper we prove some $L^p$ inequalities for polynomials, where p is any positive number. They are related to earlier inequalities due to A. Zygmund, N. G. De Bruijn, V. V. Arestov, etc. A generalization of a polynomial inequality concerning self-inversive polynomials is also obtained.