url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://crypto.stackexchange.com/questions/30776/key-derivation-function-kdf-can-a-key-derived-from-kdf-be-considered-as-a-sec | # Key Derivation Function (KDF): Can a key derived from KDF be considered as a secure key?
Consider a case where we have a master key MS that is used in pseudo-random function to generate a set of pseudo-random values. Then we use key derivation function to derive a key from each of pseudo-random values. Assume all the keys are long enough (in security context).
Question: In the above scenario can I use the keys derived from the pseudo-random values (or the derived keys) as a encryption key or a key for pseudo-random function?
In other words, is there any situation where we should not consider a derived key as an actual random key?
## 1 Answer
Yes, this is exactly what KDFs and PRFs are designed for. That is, no reasonably efficient attacker will be able to tell if you used an actual random key or something generated from the KDF/PRF. This is of course assuming that your initial seed/master secret was of sufficient entropy, and the way you derive the various values are not done in a silly way.
The practice of deriving many keys from an initial master secret using a KDF is extremely common and you will find it used in almost any standard security protocol like TLS, IPsec or SSH.
In other words, is there any situation where we should not consider a derived key as an actual random key?
Above I only considered security from the perspective of practice, i.e., where we assume that the adversary cannot run for an arbitrarily long time. However, in a theoretical model where we allow the adversary to run arbitrarily long, there are many examples where using anything other than a totally random string will break your security. The most common example of course being the one-time pad.
• Thank you for the answer. My question is now, if I have a proper seed (or key) for pseudorandom function. If I use the key and pseudorandom function, can I consider the outputs of the pseudorandom function as the keys? In other words, can I use the output of PRF instead of output of KDF as the proper keys? – user153465 Nov 24 '15 at 17:03
• Could you also tell me please, what function I can use for KDF. – user153465 Nov 24 '15 at 17:04
• let $f$ be pseudorandom function and $KDF$ be key derivation function. I have a truly random key, $k$. I generate $n$ pseudorandom values: $v_i=f(k,i), 1\leq i \leq n$. Then I generate $n$ keys: $k_i=KDF(v_i)$. hy cannot I consider $v_i$ as a valid key (in this case I do not need KDF anymore? – user153465 Nov 24 '15 at 17:29 | 2020-04-05 10:08:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7385820746421814, "perplexity": 645.0546805367752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371576284.74/warc/CC-MAIN-20200405084121-20200405114121-00206.warc.gz"} |
https://plainmath.net/6780/to-calculate-the-verties-and-foci-of-the-conic-section-x9-2-plus-y4-2-equal | # To calculate the verties and foci of the conic section: (x9)2 + (y4)2=1
Question
Conic sections
To calculate the verties and foci of the conic section: $$\displaystyle{\left({x}{9}\right)}{2}\ +\ {\left({y}{4}\right)}{2}={1}$$
2021-03-08
Step 1 Formula: eccentricity $$\displaystyle{\left({e}\right)}={1}\ -\ {b}{2}{a}{2}$$ Step 2 Calculation: Compare this equation with the standard ellipse equation $$\displaystyle{x}{2}{a}{2}\ +\ {y}{2}{b}{2}={1}$$ and we get: $$\displaystyle{a}{2}={81}\ \Rightarrow\ {a}={9}{b}{2}={16}\ \Rightarrow\ {b}={4}$$ Now, $$\displaystyle{e}={1}\ -\ {b}{2}{a}{2}={1}\ -\ {1681}={659}$$ Vertices are: $$\displaystyle{\left(\pm\ {a},\ {0}\right)}\ \text{and}\ {\left({0},\ \pm\ {b}\right)}$$ Now, put the values of a and b to get the vertices of the conic section and we get: $$\displaystyle{\left(\pm\ {a},\ {0}\right)}\ \text{and}\ {\left({0},\ \pm\ {b}\right)}={\left(\pm\ {9},\ {0}\right)}\ \text{and}\ {\left({0},\ \pm\ {4}\right)}$$ Now, $$\displaystyle\text{Foci}\ ={\left(\pm\ {a}{e},\ {0}\right)}={\left(\pm\ {65},\ {0}\right)}$$ Thus, the vertices and foci of the conic section are: $$\displaystyle{\left(\pm\ {9},\ {0}\right)}\ \text{and}\ {\left({0},\ \pm\ {4}\right)}\ \text{and}\ {\left(\pm\ {65},\ {0}\right)}.$$
### Relevant Questions
To calculate: The vertices and foci of the conic section: $$\displaystyle{x}{29}\ +\ {y}{24}={1}$$
To find the vertices and foci of the conic section: $$\displaystyle{\frac{{{\left({x}\ -\ {4}\right)}^{{{2}}}}}{{{5}^{{{2}}}}}}\ -\ {\frac{{{\left({y}\ +\ {3}\right)}^{{{2}}}}}{{{6}^{{{2}}}}}}={1}$$
Write the equation of each conic section, given the following characteristics:
a) Write the equation of an ellipse with center at (3, 2) and horizontal major axis with length 8. The minor axis is 6 units long.
b) Write the equation of a hyperbola with vertices at (3, 3) and (-3,3). The foci are located at (4, 3) and (-4, 3).
c) Write the equation of a parabola with vertex at (-2,4) and focus at (-4, 4)
Write the equation of each conic section, given the following characteristics:
a) Write the equation of an ellipse with center at (3, 2) and horizontal major axis with length 8. The minor axis is 6 units long.
b) Write the equation of a hyperbola with vertices at (3, 3) and (-3, 3). The foci are located at (4, 3) and (-4, 3).
c) Write the equation of a parabola with vertex at (-2, 4) and focus at (-4, 4)
(a) Given the conic section $$\displaystyle{r}=\frac{5}{{{7}+{3} \cos{{\left(\theta\right)}}}}$$, find the x and y intercept(s) and the focus(foci).
(b) Given the conic section $$\displaystyle{r}=\frac{5}{{{2}+{5} \sin{{\left(\theta\right)}}}}$$, find the x and y intercept(s) and the focus(foci).
Identify the conic section given by $$\displaystyle{y}^{2}+{2}{y}={4}{x}^{2}+{3}$$
Find its $$\frac{\text{vertex}}{\text{vertices}}\ \text{and}\ \frac{\text{focus}}{\text{foci}}$$
To determine: The conic section and to find the vertices and foci: $$\displaystyle{x}{2}\ -\ {y}{2}{y}={4}$$
Find out what kind of conic section the following quadratic form represents and transform it to principal axes. Express $$\displaystyle\vec{{x}}^{T}={\left[{x}_{{1}}{x}_{{2}}\right]}$$ in terms of the new coordinate vector $$\displaystyle\vec{{y}}^{T}={\left[{y}_{{1}}{y}_{{2}}\right]}$$
$$\displaystyle{{x}_{{1}}^{{2}}}-{12}{x}_{{1}}{x}_{{2}}+{{x}_{{2}}^{{2}}}={70}$$
Polar equations for conic sections Graph the following conic sections, labeling vertices, foci, directrices, and asymptotes (if they exist). Give the eccentricity of the curve. Use a graphing utility to check your work. $$\displaystyle{r}=\ {\frac{{{10}}}{{{5}\ +\ {2}\ {\cos{\theta}}}}}$$ | 2021-05-10 22:44:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8371837139129639, "perplexity": 382.21912812321006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00410.warc.gz"} |
http://www.alecjacobson.com/weblog/ | ## Convincing maple to solve an ODE with Neumann conditions at a symbolic valued location
November 17th, 2017
I can use maple to solve a 1D second-order ODE with Dirichlet boundary conditions at symbolic-valued locations:
# Z'' = 0, Z(a)=0, Z(b) = 1
dsolve({diff(Z(r),r,r) = 0,Z(a)=0,Z(b)=1});
This correctly returns
r a
Z(r) = - ----- + -----
a - b a - b
I can also easily convince maple to solve this ODE with some Neumann (normal derivative) boundary conditions at at fixed-value, numeric location:
# Z'' = 0, Z(a) = 1, Z'(0) = 0
dsolve({diff(Z(r),r,r) = 0,Z(a)=1,eval(diff(Z(r),r),r=0)=0});
produces
Z(r) = 1
But if I try naively to use a Neumann condition at a symbolic value location
# Z'' = 0, Z(a) = 1, Z'(b) = 0
dsolve({diff(Z(r),r,r) = 0,Z(a)=1,eval(diff(Z(r),r),r=b)=0});
then I get an error:
Error, (in dsolve) found differentiated functions with same name but depending on different arguments in the given DE system: {Z(b), Z(r)}
After a long hunt, I found the solution. dsolve takes an optional second argument that can tell it what the dependent variable actually is. So the correct call is:
# Z'' = 0, Z(a) = 1, Z'(b) = 0
dsolve({diff(Z(r),r,r) = 0,Z(a)=1,eval(diff(Z(r),r),r=b)=0});
and this gives the correct answer
Z(r) = 1
## MATLAB gotcha inverting a (sparse) diagonal matrix
November 2nd, 2017
Just got burned by a silly Matlab gotcha. Suppose you have a diagonal matrix D and you want to compute the inverse square root matrix:
Disqrt = diag(1./sqrt(diag(D))
But this will be dense!
Disqrt = diag(sqrt(diag(D).^-1)
Or maybe
Disqrt = diag(diag(D).^-0.5)
Not sure if there’s an accuracy difference (hopefully not).
## Eigen performance gotcha calling non-templated function from templated one
July 25th, 2017
I just spent a while tracking down a rather surprising performance bug in my code.
Here’s a minimal example:
#include <Eigen/Dense>
#include <iostream>
int simple_size(const Eigen::MatrixXi & Z)
{
return Z.size();
}
template <typename T> int templated_size(const Eigen::MatrixBase<T> & Y)
{
return simple_size(Y);
}
int main(int argc, const char * argv[])
{
const int s = 40000;
Eigen::MatrixXi X = Eigen::MatrixXi::Zero(40000,40000);
std::cout<<"Compare:"<<std::endl;
std::cout<<(X.size() ?"done":"")<<std::endl;
std::cout<<(simple_size(X) ?"done":"")<<std::endl;
std::cout<<(templated_size(X)?"done":"")<<std::endl;
}
Running this, it will show that the last call to templated_size is taking way too long. Inspection will show that a copy of Y is being created to create a Eigen::MatrixXi & reference.
Now, clearly it’s poor design to call a function expecting a Eigen::MatrixXi & reference with a generic templated type Eigen::MatrixBase<T> &, but unfortunately this happens quite often with legacy libigl functions. My expectation was that since T is Eigen::MatrixXi in this case a simple reference would be passed.
It’s worth noting that const is actually creating/hiding the problem. Because simple_size takes a const reference, the compiler is happy to construct a Eigen::MatrixXi on the fly to create a valid reference. Without the consts the compiler stops at an error.
## Paper-worthy rendering in MATLAB
July 20th, 2017
MATLAB is not a great tool for creating 3D renderings. However, the learning curves for most commercial rendering tools are quite steep. Other tools like Mitsuba can create beautiful pictures, but can feel quite cumbersome for rendering pure geometry rather than the physical scenes their designed for.
Over the years, I’ve developed a way of creating plots of 3D shapes in MATLAB using a few extra functions in gptoolbox. This started as a way to just make images from research prototypes more palatable, but eventually became the usual way that I render images for papers. If the code for my research is already written in MATLAB then one huge advantage is that every image in my paper can have a *.m script that deterministically generates the result and the corresponding image with user intervention. This helps with reproducibility, editing and sharing between collaborators.
Here’s a “VFX Breakdown” of rendering a 3D shape in MATLAB.
t = tsurf(F,V);
set(gcf,'COlor',0.94*[1 1 1]);
teal = [144 216 196]/255;
pink = [254 194 194]/255;
bg_color = pink;
fg_color = teal;
for pass = 1:10
switch pass
case 1
% blank run
axis([-209.4 119.38 -181.24 262.67 -247.28 247.38]);
case 2
axis equal;
axis([-209.4 119.38 -181.24 262.67 -247.28 247.38]);
axis vis3d;
case 3
t.EdgeColor = 'none';
case 4
set(t,fphong,'FaceVertexCData',repmat(fg_color,size(V,1),1));
case 5
set(t,fsoft);
case 6
l = light('Position',[0.2 -0.2 1]);
case 7
set(gca,'Visible','off');
case 8
set(gcf,'Color',bg_color);
case 9
case 10
end
vidObj = VideoWriter(sprintf('nefertiti-%02d.mp4',pass),'MPEG-4');
vidObj.Quality = 100;
vidObj.open;
thetas = linspace(30,-30,450);
for theta = thetas(1:end-1)
view(theta,30);
drawnow;
vidObj.writeVideo(getframe(gcf));
end
vidObj.close;
end
## Inflate Wire Mesh in libigl C++ or gptoolbox MATLAB
July 12th, 2017
For a visualization and 3D printing, it’s often useful to “inflate” a edge-network into a thickened surface mesh. One method to do this is described “Sculptural Forms from Hyperbolic Tessellations” by George C Hart. This method works by adding rotated polygons at the ends of each edge offset a bit from the vertices. Then for each vertex the convex hull of incident edges’ polygons is computed and unioned with the convex hull of the polygons at either end of each edge. Hart writes that polygons shared by “edge hulls” and “vertex hulls” can simply be discarded. This is unfortunately not true, in general. It’s not super easier to categorize which faces can be discarded (even in general position) since the answer depends on the thickness, the number of sides of the polygons, their rotations, their offsets, and the angle between neighbouring edges. Fortunately, libigl is very good at conducting unions. We can just conduct the union explicitly and exactly using libigl.
I’ve written a new function for libigl igl::wire_mesh that takes in a wire network and spits out a solid (i.e., closed, watertight, manifold) mesh of a the inflated surface.
I’ve also wrapped this up in a Matlab Mex function in gptooolbox wire_mesh.
## Read animated gif and convert to rgb frames
June 14th, 2017
Matlab’s built in imread (as of 2017) doesn’t load animated gifs correctly. You can fix this by changing the line:
map = info.ColorTable;
in /Applications/MATLAB_R2017a.app/toolbox/matlab/imagesci/private/readgif.m with
map = reshape([info(:).ColorTable],[],3,n);
For a single frame indexed image you can use ind2rgb to convert to an rgb image. To do this on entire animated gif. You can use an arrayfun for-loop hack:
[X,M] = imread('input.gif');
Y = cell2mat(permute(arrayfun(@(C) ind2rgb(X(:,:,:,C),M(:,:,C)),1:size(X,4),'UniformOutput',false),[1 4 3 2]));
Update:
map = zeros(0,3,n);
for j = 1:n
map(1:size(info(j).ColorTable,1),:,j) = info(j).ColorTable;
end
## Project page for “Generalized Matryoshka: Computational Design of Nesting Objects”
June 14th, 2017
This April I had fun working on a little project that’s been tickling my mind for a while. Can you make any shape into a Matryoshka doll?
I’ll be presenting my paper on this at SGP 2017. It’s entitled
Generalized Matryoshka: Computational Design of Nesting Objects.
## Pause (and then resume) Battery-Guzzling programs
June 7th, 2017
My laptop battery dies quickly these days. Certain apps (cough, cough, Slack), have very high idle CPU-usage. You can pause these programs with
killall -STOP Slack
And later you can resume the application with
killall -CONT Slack
## Convincing LatexIt and Illustrator to use the new SIGGRAPH fonts
May 20th, 2017
The SIGGRAPH Latex style changed to the Libertine font. Here’re the steps to convince Latexit to use the new stylesheet and then to convince Illustrator to use the libertine font for drag and drop math.
mkdir ~/Library/texmf/tex/latex/local/acmart.cls/
cp ~/Dropbox/boundary/Paper/acmart.cls ~/Library/texmf/tex/latex/local/acmart.cls
In Latexit, open up Preferences, add a new SIGGRAPH “Template” containing:
\documentclass[sigconf, review]{acmart}
\pagenumbering{gobble}
If you try to drag and drop these into illustrator you’ll see that illustrator has replaced the nice math font with Myriad or something silly.
Drag this into FontBook.app
cp /usr/local/texlive/2015/texmf-dist/fonts/type1/public/libertine/*.pfb ~/Library/Application\ Support/Adobe/Fonts/
Update: I also had to issue:
cp /usr/local/texlive/2015/texmf-dist/fonts/type1/public/txfonts/*.pfb ~/Library/Application\ Support/Adobe/Fonts/
If you see boxes with X’s replacing symbols after dragging and dropping from LaTeXit, then drag into Finder instead (to create a .pdf file), then open this directly and Illustrator will give a warning and tell you which font it’s (still) missing.
## Mex wrapper for graph segmentation
May 4th, 2017
I wrote a small mex wrapper for the graph segmentation part of the “Graph Based Image Segmentation” code. Most of the previously matlab implementations/wrappers worked on images. I want to apply this to geometry so I needed access to the graph segmentation directly. Here’s the wrapper (soon to be part of gptoolbox):
// mexopts = gptoolbox_mexopts('Static',false,'Debug',true);
// mex('segment_graph.cpp',mexopts{:});
#ifdef MEX
# include <mex.h>
# include <igl/C_STR.h>
# include <igl/matlab/mexErrMsgTxt.h>
# undef assert
# define assert( isOK ) ( (isOK) ? (void)0 : (void) ::mexErrMsgTxt(C_STR(__FILE__<<":"<<__LINE__<<": failed assertion "<<#isOK<<"'"<<std::endl) ) )
#endif
#include "segment-graph.h"
#include <igl/matlab/mexErrMsgTxt.h>
#include <igl/matlab/parse_rhs.h>
#include <igl/unique.h>
#include <igl/matlab/prepare_lhs.h>
#include <igl/matlab/requires_arg.h>
#include <igl/matlab/validate_arg.h>
#include <igl/matlab/MexStream.h>
#include <Eigen/Sparse>
void mexFunction(
int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
using namespace igl::matlab;
using namespace Eigen;
using namespace std;
igl::matlab::MexStream mout;
std::streambuf *outbuf = std::cout.rdbuf(&mout);
mexErrMsgTxt(nrhs>0,"Too few inputs");
mexErrMsgTxt(mxIsSparse(prhs[0]),"Matrix should be sparse");
const mxArray * mx_data = prhs[0];
const int m = mxGetM(mx_data);
const int n = mxGetN(mx_data);
mexErrMsgTxt(n == mxGetM(prhs[0]), "Matrix should be square");
assert(mxIsSparse(mx_data));
assert(mxGetNumberOfDimensions(mx_data) == 2);
// TODO: It should be possible to directly load the data into the sparse
// matrix without going through the triplets
// Copy data immediately
double * pr = mxGetPr(mx_data);
mwIndex * ir = mxGetIr(mx_data);
mwIndex * jc = mxGetJc(mx_data);
const int num_edges = mxGetNzmax(mx_data);
edge * edges = new edge[num_edges];
int k = 0;
for(int j=0; j<n;j++)
{
// Iterate over inside
while(k<(int)jc[j+1])
{
//cout<<ir[k]<<" "<<j<<" "<<pr[k]<<endl;
assert((int)ir[k]<m);
assert((int)j<n);
edges[k].a = ir[k];
edges[k].b = j;
edges[k].w = pr[k];
k++;
}
}
// defaults
int min_size = 0;
// Threshold
int c = sqrt((double)n);
{
int i = 1;
while(i<nrhs)
{
mexErrMsgTxt(mxIsChar(prhs[i]),"Parameter names should be strings");
// Cast to char
const char * name = mxArrayToString(prhs[i]);
if(strcmp("Threshold",name) == 0)
{
requires_arg(i,nrhs,name);
validate_arg_scalar(i,nrhs,prhs,name);
validate_arg_double(i,nrhs,prhs,name);
c = (double)*mxGetPr(prhs[++i]);
}else if(strcmp("MinSize",name) == 0)
{
requires_arg(i,nrhs,name);
validate_arg_scalar(i,nrhs,prhs,name);
validate_arg_double(i,nrhs,prhs,name);
min_size = (int)((double)*mxGetPr(prhs[++i]));
}
i++;
}
}
universe *u = segment_graph(n, num_edges, edges, c);
// post process small components
for (int i = 0; i < num_edges; i++) {
int a = u->find(edges[i].a);
int b = u->find(edges[i].b);
if ((a != b) && ((u->size(a) < min_size) || (u->size(b) < min_size)))
u->join(a, b);
}
switch(nlhs)
{
case 1:
{
plhs[0] = mxCreateDoubleMatrix(m,1, mxREAL);
Eigen::VectorXi C(m);
for(int i = 0;i<m;i++)
{
C(i) = u->find(i);
}
Eigen::VectorXi uC,I,J;
igl::unique(C,uC,I,J);
prepare_lhs_index(J,plhs);
}
default: break;
}
delete[] edges;
delete u;
std::cout.rdbuf(outbuf);
}
It takes the graph as a sparse matrix and outputs the component ids:
C = segment_graph(A);
` | 2017-11-19 04:50:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49004054069519043, "perplexity": 10040.80249280524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805362.48/warc/CC-MAIN-20171119042717-20171119062717-00775.warc.gz"} |
https://puzzling.stackexchange.com/questions/16642/fitting-rectangles-into-square-optimal-perfect-rectangle-packing | # Fitting rectangles into square (optimal/perfect rectangle packing)
I gave the puzzle you can see on the image below to a friend of mine for christmas last year. I thought it would be fun to dump it out in front of him so he would not know the solution. Unfortunately I did not write down the solution myself.
Until now multiple people have put countless hours into this puzzle and still nobody has managed to solve it. So please help me!
The puzzle is complete and definitely solvable, the tiles were all inside the frame when I bought it.
I have also started implementing a program to brute force the solution, but I thought I would post here as well.
I have taken pixel dimensions for all the tiles from the image (not 100% accurate, but should suffice):
• Large wrench: 164x655
• Blank: 164x234
• Pliers: 491x422
• Hammers: 750x234
• Pipe wrenches: 422x491
• Small screwdriver: 327x94
• Tenon saws: 327x491
• Screw wrench: 258x750
• Screwdriver: 140x655
• Hacksaw: 491x327
• Carpenter's rule: 397x327
• Wrenches: 327x655
I hope I chose the right terms for the tools, English is not my native language.
Does anybody have any tips for solving puzzles like this? If you think you can solve it, I would love to see your attempt!
Cheers and happy puzzling!
• Can we have the dimensions of the pieces? – TroyAndAbed Jun 18 '15 at 20:31
• @TroyAndAbed I only have the pixel dimensions I took from the image. I'll add them to the question. – muenchdo Jun 18 '15 at 20:34
• Any chance you can measure the actual pieces? I expect that most of them are a multiple of the same size. Or at least measure the pixels of the inside of the frame? – Bobson Jun 18 '15 at 20:50
• @Bobson Sorry, I don't have the actual pieces here. For the measurements I have taken into account that e.g. the hacksaw and tenon saws are of the same size. Also, since the puzzle is laser-cut, I am afraid the dimensions might be completely arbitrary. I'll ask my friend to get the dimensions, might take a while though. – muenchdo Jun 18 '15 at 20:55
• This looks like an incarnation of the Calibron 12-Block Puzzle. The internet is full of pages of people who talk about how hard it is and how they haven't solved it, but solutions seem pretty sparse. This page claims to have some solutions in Python, but I haven't checked them out: puzzles.bostonpython.com/blockparty.html (It also lists the dimensions of the blocks, for those following along at home...) – GentlePurpleRain Jun 18 '15 at 22:00
After GentlePurpleRain gave the hint that this is a variation of the Calibron 12-Block puzzle I did some research and I'm pretty sure I have found a (or even the only) solution. Since the original puzzle is laser-cut and therefore fits quite precisely into the frame I will still have to verify with my friend if the solution is actually correct. However, with my DIY print-out version of the puzzle it works!
Thanks to everyone who put time into this!
• So, was this the solution? :) – Rubio May 24 '18 at 3:55
## This is not an answer - just reorganizing the information
Small screwdriver 94 x 327
Screwdriver 140 x 655
Blank 164 x 234
Large wrench 164 x 655 (the one in the top-left corner)
Hammers 234 x 750
Screw wrench 258 x 750 (the really big one on the right side)
Carpenter's rule 327 x 397
Tenon saws 327 x 491
Hacksaw 327 x 491
Wrenches 327 x 655 (the set of many on the left)
Pliers 422 x 491
Pipe wrenches 422 x 491 (the pair in the middle)
By working with these numbers, I get a width of 1309-1311 and a height of 1310-1311, depending on where I measure.
It looks like we can work with a unit of $1 \approx 165$ pixels and get some reasonable numbers.
Using that, and rounding each pixel value to one decimal point of new units produces the new grid:
Small screwdriver 94 x 327 -> 0.6 x 2.0
Screwdriver 140 x 655 -> 0.8 x 4.0
Blank 164 x 234 -> 1.0 x 1.4
Large wrench 164 x 655 -> 1.0 x 4.0
Hammers 234 x 750 -> 1.4 x 4.5
Screw wrench 258 x 750 -> 1.6 x 4.5
Carpenter's rule 327 x 397 -> 2.0 x 2.4
Tenon saws 327 x 491 -> 2.0 x 3.0
Hacksaw 327 x 491 -> 2.0 x 3.0
Wrenches 327 x 655 -> 2.0 x 4.0
Pliers 422 x 491 -> 2.6 x 3.0
Pipe wrenches 422 x 491 -> 2.6 x 3.0
As you can see, most of these numbers work out surprisingly cleanly. This also produces a grid of 8.0 x 8.0 using the previous measurements.
The actual dimensions of the original Calibron 12-block puzzle (published in 1933 by Theodore Edison, son of Thomas) were apparently a little tricky to work out. A 56x56 version was recreated by Pavel Curtis in 2010, and in 2014 Jean-Claude Constantin sold a 40x40 version, Toolbox, which appears to be your version.
The 40x40 version appears to be the original intended dimensions, as you can add a 1x20, a 2x10 or a 4x5 block to tile a 36x45 rectangle (non-uniquely in each case), and this appears to have been part of the original design. I don't believe this works with the 56x56 version.
Both versions with dimensions shown here. The tiling in both versions is unique as you suggest. The 56x56 version is not simply a scaled version of the 40x40, the aspect ratio of some rectangles is significantly different. | 2021-06-13 00:32:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3412914276123047, "perplexity": 1190.9958734498414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586465.3/warc/CC-MAIN-20210612222407-20210613012407-00216.warc.gz"} |
https://math.stackexchange.com/questions/780086/grammar-extraction-using-pre-existing-grammars | Grammar extraction using pre-existing grammars.
Given a set of strings $s_1, \dots, s_n$ over $\Sigma$ it isn't clear what a good generalization of the strings would be using a regular language, that extends the set infinitely. This comes from the many interpretations you could have of the set of strings.
What about taking pre-existing grammars like the following two
DIGITS = [0-9]+
ID = (ALPHA | _ ) (ALPHA | _ | DIGITS)*
used in programming language parsing, call them $g_1, g_2$.
Consider permuting the terminals of a grammar, for instance $\phi : a \mapsto b, \ b \mapsto c$ and $g$ is: $$g \to b AB\\ A \to aaA + B \\ B \to abab$$
Then $\phi \circ g$ is $$g' \to c A B \\ A \to bbA + B \\ B \to bcbc$$
Then these alphabet permutations (aka terminal permutations) preserve the structure of the grammars. So now ask whether $s_i \in \phi\circ g_j$ (language membership), for some $j,\phi$, or even if $s_i \in (\phi_1 g_1) \cdot \dots \cdot (\phi_m g_m)$, for some given set of $g_i$ and for some permutations $\phi_i$.
Or what if we took $\Sigma' = \Sigma \cup \{g_1, \dots, g_m\}$ and considered permutations of $\Sigma'$.
Any grammar containing the $\{s_1, \dots, s_n\}$ we construct using these methods has structure derived from the given grammrs $g_i$, and this essentially answers the question as to what a good generalization of a set of strings would be: it's the one that is built from specified, preferrable grammar structures. The specified grammars act like hints to the algorithm as to what grammars to choose. Has anyone investigated this?
Given any finite set $S$ of strings, you can define an extension just by saying that your language is the set of all strings that contain a member of $S$ as a prefix and/or suffix. Or since $S$ is finite, you can take the union of $S$ with any regular language and get a regular language. So it seems like your question is a bit ill-posed. Maybe the "minimal" such language that contains your strings in $S$ is the set you get from applying the pumping lemma criteria to $S$, trying to get as many strings as possible from $S$ into the same pumping class? | 2021-01-19 21:33:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6485320329666138, "perplexity": 185.25900292501294}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00386.warc.gz"} |
https://msp.org/gt/2007/11-3/p09.xhtml | Volume 11, issue 3 (2007)
Recent Issues
The Journal About the Journal Editorial Board Editorial Interests Editorial Procedure Subscriptions Submission Guidelines Submission Page Policies for Authors Ethics Statement ISSN (electronic): 1364-0380 ISSN (print): 1465-3060 Author Index To Appear Other MSP Journals
The Extended Bloch Group and the Cheeger–Chern–Simons Class
Sebastian Goette and Christian K Zickert
Geometry & Topology 11 (2007) 1623–1635
arXiv: 0705.0500
Abstract
We present a formula for the full Cheeger–Chern–Simons class of the tautological flat complex vector bundle of rank $2$ over $BSL\left(2,{ℂ}^{\delta }\right)$. This improves the formula by Dupont and Zickert [Geom. Topol. 10 (2006) 1347–1372], where the class is only computed modulo 2–torsion.
Keywords
Extended Bloch group, Cheeger-Chern-Simons class, Rogers dilogarithm
Mathematical Subject Classification 2000
Primary: 57R20, 11G55 | 2022-12-08 10:13:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5799586176872253, "perplexity": 7750.6735814444055}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00412.warc.gz"} |
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=4381 | ## WeBWorK Problems
### calculations within a do-until loop
by Bruce Yoshiwara -
Number of replies: 2
I’m trying to code a compound interest problem A=P(1+r/n)^(nt) asking students to find the least value of n, the number of times interest is compounded, to attain a specific value A (when the initial principal P, interest rate r, and number of years t are fixed).
I defined a function that seems to evaluate correctly when asked for values of A for n = 1, 2, 12, 52, and 365.
I then tried to use a do-until loop for WeBWorK to determine a desired value of n. My code seemed to work ok to find n for a value of A attained at n=16. But I got an incorrect value of n=33 when the desired value of A should have required n=30; I got an incorrect value of n=69 when the correct value was n=59, and WeBWorK timed out in another case when n should have been 553.
Here’s a partial version I’ve been trying to debug.
DOCUMENT();
"PGstandard.pl",
"MathObjects.pl",
"PGML.pl",
"PGchoicemacros.pl",
"niceTables.pl",
);
Context()->flags->set( reduceConstants => 0, reduceConstantFunctions => 0 );
Context()->variables->add( n => "Real" );
Context()->flags->set(limits=>[1,365]);
$p0 = 1000;$t = 5;
$r0 = 12;$r = $r0/100;$f = Formula("$p0*(1 +$r/n)^($t*n)"); ####################################################### ### this is just trying to find my error in defining f$a = $f->eval(n=>(15));$b = $f->eval(n=>(16));$one = $f->eval(n=>(30));$two = $f->eval(n=>(31));$three = $f->eval(n=>(58));$four = $f->eval(n=>(59));$five = $f->eval(n=>(552));$six = $f→eval(n=>(553)); ### or maybe limitations in WeBWorK’s evaluating f ################################################ # here is a loop trying to find a value of n$p1 = 1818;
$n1 = 12; do{$temp1 = $f->eval(n=>($n1+2));
$n1 =$n1+1;
} until ( $temp1 >$p1 );
# here is a loop trying to find a value of n
$p2 = 1820;$n2 = $n1; do{$temp2 = $f->eval(n=>($n2+2));
$n2 =$n2+1;
} until ($temp2 >$p2);
# here is a loop trying to find a value of n
$p3 = 1821; #want 1822 but get timeout$n3 = $n2; do{$temp3 = $f->eval(n=>($n3+2));
$n3 =$n3+1;
} until ($temp3 >$p3);
Context()->{format}{number} = "%.6f#";
BEGIN_PGML
[n=15] gives [$a], [n=16] gives [$b]
[n=30] gives [$one], [n=31] gives [$two], [n=58] gives [$three], [n=59] gives [$four],
[n=552] gives [$five], [n=553] gives [$six],
Use the formula for compound interest,
>>[A=P\left(1+\dfrac{r}{n} \right)^{nt} ]<<
Suppose you invest $[$p0] at [$r0]% annual interest for [$t] years. In this problem, we will investigate how the number of compounding periods, [n], affects the amount, [A].
c.
What value of [ n ] is necessary to produce an amount [ A\gt [$p1]]? [n=][___] To produce [ A\gt [$p2] ]? [n=][___] To produce [ A\gt [$p3] ]? [n=][___] END_PGML ANS(Compute("$n1")->cmp( tolType => 'absolute', tolerance => .5,) );
ANS(Compute("$n2")->cmp( tolType => 'absolute', tolerance => .5,) ); ANS(Compute("$n3")->cmp( tolType => 'absolute', tolerance => .5,) );
ENDDOCUMENT(); # This should be the last executable line in the problem.
In reply to Bruce Yoshiwara
### Re: calculations within a do-until loop
by Davide Cervone -
The reason your loop runs longer than you expect is because the computations that you perform with MathObjects produce MathObject results. That means that your values of $temp1, $temp2, and $temp3, for example, are all MathObject Reals, not Perl reals. That means that when you do comparisons with them, they are fuzzy comparisons. That is, they respect the tolerance and tolType values in the context. Note that this applies not just to equality checks, but also to inequality checks as well. This is because you don't want both $a == $b and $a > $b to be true at the same time. That is, $a < $b, $a == $b, and $a > $b should be mutually exclusive. The MathObject comparisons are arranged to work that way. In your case, if you get a value like 1822.0004823, this is considered to be equal to 1822 by the fuzzy comparison, so $tmp3 > $p3 is false even though $p3 = 1822. Indeed, the limit as n goes to infinity of your formula is $p0*e^(t*r), which is 1822.1188, which is fuzzy-equal to 1822, so you never get $temp3 > $p3, and the is why the problem times out. You probably want to convert to perl reals rather than MathObjects for the computation. You can use $temp3->value for that, as this is the internal Perl real that underlies the MathObject Real. Also, if you are going to be evaluating a MathObject formula multiple times, you may want to create a perl function from the formula rather than calling eval() repeatedly, as that will be much more efficient. For example, $F =$f->perlFunction will get you a perl subroutine reference as $F and you can call $temp3 = &$F($n3)->value to get the Perl real result of the function at $n3. Even so, however, your loop is pretty inefficient. You could use a faster algorithm, for example the bisection algorithm (or Newton's method if you want to get fancy about it). Here is one approach: $p0 = 1000;
$t = 5;$r0 = 12;
$r =$r0/100;
$f = Formula("$p0*(1 + $r/n)^($t*n)");
$F =$f->perlFunction;
$p1 = 1822; # # Assume the value you want is between these two # (they should be a power of two apart). #$n0 = 12; $n1 =$n0 + 2**10;
#
# Use bisection to locate the two values that are
# one apart on opposite sides of the desired value.
# (In this case, it will take 10 iterations to find it
# as opposed to the 541 iterations for your version
# that simply increments $n3, which is 54 times faster.) # while ($n1 - $n0 > 1) {$n2 = ($n0 +$n1) / 2;
$f2 = &$F($n2)->value; if ($f2 > $p1) {$n1 = $n2} else {$n0 = $n2} } # #$n0 is below the value, $n1 is above it. # TEXT($n1);
Hope that clears up the issues for you.
In reply to Davide Cervone
### Re: calculations within a do-until loop
by Bruce Yoshiwara -
Thanks Davide, all is now well! | 2023-04-02 03:16:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7008622288703918, "perplexity": 3577.3960491805688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00219.warc.gz"} |
https://www.albert.io/ie/linear-algebra/unit-vector-one-unknown-coordinate | ?
Free Version
Easy
# Unit Vector: One Unknown Coordinate
LINALG-VS4Q14
For which real numbers $x$ is $\begin{bmatrix} \frac{1}{2} \\\ \frac{1}{3} \\\ x \end{bmatrix}$ a unit vector?
Select ALL that apply.
A
$\dfrac{\sqrt{23}}{6}$
B
$\dfrac{25}{6}$
C
$-\dfrac{\sqrt{23}}{6}$
D
$-\dfrac{5}{6}$
E
$\dfrac{1}{6}$
F
$-\dfrac{1}{6}$ | 2016-12-03 00:23:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5082263946533203, "perplexity": 13894.288739360303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540798.71/warc/CC-MAIN-20161202170900-00369-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://mathematica.stackexchange.com/questions/26811/seeking-faster-method-than-using-table-and-if-together?noredirect=1 | # Seeking faster method than using Table and If together [duplicate]
See the example below. Is there any way to make this execute much faster and still get the same output ? I have read Conditionals slower than operators? but it didn't really help me as I can't seem to apply these methods to my problem. I have to execute about 100 of these operations in succession and each one can't take about a second to execute as running the algorithm would take way too long for the end user.
CurrentEquipID = 40
40
AbsoluteTiming[
Table[If[HTimeModelSelection[CurrentEquipID][[i]] ==
0, (1 - HOperatingEfficiency[CurrentEquipID][[i]])*
HUtilizedTime[CurrentEquipID][[i]],
Flatten[ConstantArray[
Select[EquipParams, #[[colEquipID]] == CurrentEquipID &][[;; ,
colOperatingDelayTime]], 20]][[i]]], {i, 1, 20}]]
{1.851185, {744.6512332, 744.6512332, 746.713979, 744.6512332,
744.6512332, 744.6512332, 746.713979, 744.6512332, 744.6512332,
744.6512332, 746.713979, 744.6512332, 744.6512332, 744.6512332,
746.713979, 744.6512332, 744.6512332, 744.6512332, 746.713979,
744.6512332}}
## marked as duplicate by rcollyer, Sjoerd C. de Vries, Yves Klett, Oleksandr R., m_goldbergJun 11 '13 at 21:29
• I'm not going to try to work out exactly what your incomplete code is doing but Select inside a loop may be slow. Consider creating a hash table for each value (or position) rather than finding it with Select. – Mr.Wizard Jun 11 '13 at 13:05
• @Mr.Wizard its worse than that, that entire branch is redundant, and should be excised. See my answer. – rcollyer Jun 11 '13 at 13:40
• Considering how common an issue this is, I'm voting to close in favor of a comprehensive question that you should read. – rcollyer Jun 11 '13 at 13:48
I was going to leave a comment, but without seeing the full code, I think it can be improved drastically.
At fault is that you are performing the same search every time through the loop. (If has the attribute HoldRest, so the branches are not executed unless they are used which means the search is re-executed every through.) So, at a minimum move the Select statement outside of the loop.
currentEP = Select[EquipParams, #[[colEquipID]] == CurrentEquipID &][[;; ,
colOperatingDelayTime]];
Additionally, your use of ConstantArray[..., 20][[i]] is redundant, and should be eliminated in its entirety. Replace it with currentEP. Lastly, using Table to index a List is inelegant; there are better ways. Consider this use of MapThread:
MapThread[
If[ #1 == 0, (1 - #2) #3, Evaluate@Flatten[currentEP]]&,
{
HTimeModelSelection[CurrentEquipID],
HOperatingEfficiency[CurrentEquipID],
HUtilizedTime[CurrentEquipID]
}
]
The nice thing about this construct is you do not need to know what iteration your on, simplifying your code. Also, if the lists change in size, you do not need to change the code. A caveat is that they must all be the same size, or MapThread will complain, loudly. But, there are other methods ...
• I think that Evaluate@Flatten@currentEP should be a real number -- the OP extracts part i from the flattened list. Probably Select returns just one element, but who knows. – Michael E2 Jun 11 '13 at 16:31
• @MichaelE2 you are likely right, but I was not assuming anything, particularly with Flatten used after the OP extracts the info from the list. But, Evaluate could be removed, as I believe Flatten is very fast, and the list is very short. – rcollyer Jun 11 '13 at 16:38
• Anyway, I posted alternatives, but I think your approach is easier to understand. Unitize, UnitStep, etc. and vectorized arithmetic seems a bit harder to me. – Michael E2 Jun 11 '13 at 19:25
Without knowing much about the data, it seems likely that it consists of numbers, and the the times and efficiencies are positive real numbers. Further I have to guess that
Select[EquipParams, #[[colEquipID]] == CurrentEquipID &][[;; , colOperatingDelayTime]]
returns a list consisting of a single number; otherwise, I cannot see how one would get predictable results picking the i-th element in the flattened array.
Here is some made-up data, on which your function works (i.e. runs without error and returns a list of real numbers):
nEquip = 10000;
numEquipmentStats = 100;
colEquipID = 1; (* index *)
colOperatingDelayTime = 3; (* random index *)
CurrentEquipID = 40;
HTimeModelSelection[CurrentEquipID] = RandomInteger[{0, 2}, nEquip];
HOperatingEfficiency[CurrentEquipID] = RandomReal[1, nEquip];
HUtilizedTime[CurrentEquipID] = RandomReal[1, nEquip];
EquipParams = Transpose @ Join[{Range[nEquip]}, RandomReal[1, {numEquipmentStats, nEquip}]];
Your function takes 0.136795 sec. (Perhaps the slowness your function has to do with the functions HTimeModelSelection, HOperatingEfficiency, or HUtilizedTime -- your code calls them repeatedly on the same input -- something to avoid if your functions take an appreciable amount of time to evaluate.)
If the data in this calculation, except HTimeModelSelection, are positive numbers, then the following will be fast.
AbsoluteTiming[
Unitize[HTimeModelSelection[CurrentEquipID]] (1. -
HOperatingEfficiency[CurrentEquipID]) HUtilizedTime[
CurrentEquipID] /.
0 -> Select[EquipParams,
#[[colEquipID]] == CurrentEquipID &, 1][[1, colOperatingDelayTime]];]
{0.002531, Null}
If the data is not all positive numbers, then here is a variation that works:
Transpose[{
N@Unitize[HTimeModelSelection[CurrentEquipID]],
(1. - HOperatingEfficiency[CurrentEquipID]) HUtilizedTime[CurrentEquipID]
}] /.
{{0., _} -> Select[EquipParams,
#[[colEquipID]] == CurrentEquipID &, 1][[1, colOperatingDelayTime]],
{1., x_} :> x}; // AbsoluteTiming
{0.006147, Null}
If that's not fast enough, then perhaps compiling will help:
cf = Compile[{{model, _Real, 1}, {eff, _Real, 1}, {time, _Real, 1}, {delay, _Real}},
If[#[[1]] == 0., #[[2]], delay] & /@ Transpose[{model, (1. - eff) time}]];
cf[Unitize[HTimeModelSelection[CurrentEquipID]],
HOperatingEfficiency[CurrentEquipID],
HUtilizedTime[CurrentEquipID],
Select[EquipParams,
#[[colEquipID]] == CurrentEquipID &, 1][[1, colOperatingDelayTime]]]; // AbsoluteTiming
{0.001194, Null}
• You've got the order wrong, the ith element of ConstantArray[...] is picked then it is flattened. So, consistent results can be acheived. Oh, and +1 for pointing out that the OPs functions may be at fault, also. – rcollyer Jun 11 '13 at 19:46
• @rcollyer Aren't the [[;;, colOperatingDelayTime]] elements picked, ConstantArrayed, and flattened; and then the i-th element is picked? Seems like we're getting who-knows-which delay time, unless there's just one or they're all the same. – Michael E2 Jun 11 '13 at 20:49
• You were right, I was reading the closing ] of If as the closing ] of Flatten. But, a lot of duplication ... – rcollyer Jun 11 '13 at 20:53 | 2019-06-26 17:08:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27625229954719543, "perplexity": 2872.9419084508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000367.74/warc/CC-MAIN-20190626154459-20190626180459-00141.warc.gz"} |
http://tiny-death-star.wikia.com/wiki/Biscuit_Baron | # Biscuit Baron
301pages on
this wiki
This level is not yet available for construction. All information presented on this page is based on data mined from the game files, and is subject to change at any time in the final versions.
Food Biscuit Baron
Products Product A Product B Product C
Base Amounts Icon Credits Stock Time n/a n/a n/a
Food Levels
Biscuit Baron is a Food level in Star Wars: Tiny Death Star. The level is unreleased and cannot be built by any means.
## ProductsEdit
Credits Icon Name Description
Product A
Product B
Product C
## GalleryEdit
Level Pictures Decorations Christmas 2013 Propaganda 2014 Level Completion:
## Level InformationEdit
### Detailed Stock Times and ProductivityEdit
Each bitizen skill point decreases the amount of time needed to order each item by 1% per point, for a maximum of 27%. Discounted times are rounded down to the nearest whole minute. When a bitizen is matched with their dream job, output doubles without any change to production time.
Bitizen
Skill
Base
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
Production time
Production time
Credits per minute
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Credits per minute
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Division by zero. Division by zero. Division by zero.
Every upgrade level increases the amount of items that can be stocked by a fixed amount, which is 10% of the base levels. The cost to upgrade starts at 3 credits (from Level 1 to Level 2) and increases by 1 credit per level. Production times are not affected by upgrading the level. Although the table lists only 50 upgrade levels, there is no maximum level cap.
Credits
Base
Level Cost
1 -
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
10 11
11 12
12 13
13 14
14 15
15 16
16 17
17 18
18 19
19 20
20 21
21 22
22 23
23 24
24 25
25 26
26 27
27 28
28 29
29 30
30 31
31 32
32 33
33 34
34 35
35 36
36 37
37 38
38 39
39 40
40 41
41 42
42 43
43 44
44 45
45 46
46 47
47 48
48 49
49 50
50 51
Products
0 0 0
0 0 0
Production level
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
Products
0 0 0
0 0 0
Dream Production level
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
## ScenesEdit
This level currently has no scenes.
-- | 2017-05-22 15:40:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8812338709831238, "perplexity": 496.3991992878517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605188.47/warc/CC-MAIN-20170522151715-20170522171715-00011.warc.gz"} |
http://mathoverflow.net/questions/110944/what-does-the-numerically-verified-part-of-the-riemann-hypothesis-tell-about-pri?sort=votes | What does the numerically verified part of the Riemann Hypothesis tell about prime numbers?
I'm curious about the following question:
As of 2005(?) the Riemann hypothesis is verified for the first 10 trillion zeroes, they are all on the critical line. Does this verification gives us any information about prime number?
In particular, are there any results saying if all the non-trivial zeroes whose imaginary part is < N and > 0 are on the critical line, then we understand something about prime number < M, where M is a number depend on N?
-
The magic words are "explicit formula". The short answer is "no". – Igor Rivin Oct 28 '12 at 23:29
Per terrytao.wordpress.com/2012/02/01/…, it tells us that every odd integer larger than 1 is the sum of at most five primes. – Will Sawin Oct 28 '12 at 23:42
Knowing the location of first 2000 or so zeros of the zeta-function above the real axis to 75 digits of accuracy seems to have been essential in Odlyzko and te Riele's disproof of the Merten's conjecture. See: oai.cwi.nl/oai/asset/1823/1823A.pdf – Micah Milinovich Oct 29 '12 at 0:15
@Will, well, it tells us every odd integer larger than one up to a certain 'ceiling' is such a sum. The rest of the odd integers are dealt with by other methods that only work above that ceiling. – David Roberts Oct 29 '12 at 4:06
If you look at the explicit formula, then you can get a bound for the error term in the PNT: If $$\psi(x) = \sum_{p^k\le x}\log p,$$ then formula (9) in page 109 of Davenport's book (multiplicative number theory) implies that $$\psi(x) = x + \sum_{-T\le \gamma\le T}\frac{x^\rho}{\rho} + O\left(1+\frac{ x\log^2(xT) }{T}\right),$$ for every $T\ge1$, where the implied constant is completely effective. Now, if we know that all the zeroes of $\zeta$ up to height $T$ lie on the critical line, then this automatically implies that $$|\psi(x) - x | \le x^{1/2}\sum_{-T\le \gamma\le T} \frac{1}{\sqrt{1/4+\gamma^2}} + O\left(1+ \frac{ x\log^2(xT)}{T} \right) \ll x^{1/2}\log^2T + \frac{ x\log^2(xT)}{T},$$ for some effective implied constants. So, in certain ranges of $x$, depending on $T$, you can get very good bounds on the size of $\psi(x)$ and therefore on how many primes there are up to $x$.
-
To put it a bit more loosely (ignoring the log factors): if one has verified RH up to height $T_0$, then one can accurately count primes in intervals of the form $[x,x+y]$, so long as $y \gg \max( x/T_0, x^{1/2} )$, by optimising the above bound in $T \in [1,T_0]$. So numerical RH up to height $T_0$ is roughly "as good as" full RH for counting primes up to about $T_0^2$, and gives a partial substitute for RH beyond that scale which becomes increasingly strong as $T_0$ increases. – Terry Tao Oct 29 '12 at 17:24
p.s. the paper of Ramare and Saouter ams.org/mathscinet-getitem?mr=1950435 managed to obtain a completely effective version of the above inequality that saves a logarithm and is useful for a number of effective analytic number theory purposes (as mentioned in comments, for instance, I used it to show every odd number up to $8.7 \times 10^{36}$ was the sum of at most five primes, and used some other arguments to cover other ranges). – Terry Tao Oct 29 '12 at 17:37
$\gamma$ is the imaginary part of the zeroes right? – 36min Oct 30 '12 at 3:10
@36min: Yes, usually we write $\rho=\beta+i\gamma$ for a zero of the Riemann $\zeta$ function. – Dimitris Koukoulopoulos Nov 1 '12 at 16:13
The disproof of Mertens' conjecture (cited above) was certainly a computations tour de force using explicit values of the zeros of $\zeta(s)$. Another good example is the paper of Rosser and Schoenfeld "Sharper Bounds for the Chebyshev Functions $\theta(x)$ and $\psi(x)$" Math. Comp., v. 29 1975, pp. 243-269.
We know by the Prime Number Theorem that $\Psi(x)\sim x$. Rosser and Schoenfeld use values of zeros of $\zeta(s)$ to show, for example, that for $\log(x)>105$, we have $|\Psi(x)-x|<x\epsilon(x)$, where, for $X=(\log(x)/9.6459 08801)^{1/2}$ $$\epsilon(x)= 0.257634 \left(1 + \frac{0.96642}{X} \right) X^{3/4}\exp(-X).$$ The paper contains a number of results of this flavor, about the Chebyshev function $\theta(x)$, and about asymptotics of the $n$th prime $p_n$.
The reason it is difficult to convert results about low lying zeros to results about small primes is that the Explicit Formula, (mentioned in comments above) has the primes and zeros lying on opposite sides of a Fourier Transform. The Heisenberg Uncertainty Principle applies
http://en.wikipedia.org/wiki/Fourier_transform#Uncertainty_principle
-
Wait, the formula for $\epsilon(x)$ can't be right as written. The $\exp(-x)$ factor damps everything else, and would yield $|\Psi(x) - x| \ll x^{7/4} \exp(-x)$ for all $x$, an absurdly tight inequality. [Also, it's the Uncertainty Principle, not "Principal".] – Noam D. Elkies Oct 29 '12 at 5:54
@Noam: Thanks, corrected via adding the definition in R&S of X as a function of x. – Stopple Oct 29 '12 at 15:18
Another application of the computation of large numbers of Riemann zeros (beyond verification of the Riemann Hypothesis) is towards bounding the deBruijn-Newman constant $\Lambda$:
deBruijn introduced a deformation parameter $t$ in the Riemann $\Xi$ function so that $\Xi_0(x)=\Xi(x)$ and the Riemann zeros $x(t)$ flow according to the "backward heat equation." Together their work shows the existence of a constant $\Lambda$ such that, for $\Lambda\le t$ the function $\Xi_t(x)$ has only real zeros, while for $t<\Lambda$ there exist complex zeros. The Riemann Hypothesis is the conjecture that $\Lambda\le 0$. Newman made the complementary conjecture that $\Lambda\ge 0$, writing "This new conjecture is a quantitative version of the dictum that the Riemann hypothesis, if true, is only barely so." Csordas, Smith, and Varga were able to analyze the ODEs governing the motion of the zeros, and use the fact that an very close pair of zeros, so-called Lehmer pairs, would give a lower bound on $\Lambda.$
The current best bound via this approach, due to Saouter, Gourdon, and Demichel, is that $$-1.14\times 10^{-11}<\Lambda$$ based on a Lehmer pair at height about $7.95\times 10^{12}$
- | 2014-10-01 00:27:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.911591112613678, "perplexity": 342.1213851750927}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663218.28/warc/CC-MAIN-20140930004103-00328-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://projecteuclid.org/euclid.ejs/1537257627 | ## Electronic Journal of Statistics
### Mass volume curves and anomaly ranking
#### Abstract
This paper aims at formulating the issue of ranking multivariate unlabeled observations depending on their degree of abnormality as an unsupervised statistical learning task. In the 1-d situation, this problem is usually tackled by means of tail estimation techniques: univariate observations are viewed as all the more ‘abnormal’ as they are located far in the tail(s) of the underlying probability distribution. It would be desirable as well to dispose of a scalar valued ‘scoring’ function allowing for comparing the degree of abnormality of multivariate observations. Here we formulate the issue of scoring anomalies as a M-estimation problem by means of a novel functional performance criterion, referred to as the Mass Volume curve (MV curve in short), whose optimal elements are strictly increasing transforms of the density almost everywhere on the support of the density. We first study the statistical estimation of the MV curve of a given scoring function and we provide a strategy to build confidence regions using a smoothed bootstrap approach. Optimization of this functional criterion over the set of piecewise constant scoring functions is next tackled. This boils down to estimating a sequence of empirical minimum volume sets whose levels are chosen adaptively from the data, so as to adjust to the variations of the optimal MV curve, while controlling the bias of its approximation by a stepwise curve. Generalization bounds are then established for the difference in sup norm between the MV curve of the empirical scoring function thus obtained and the optimal MV curve.
#### Article information
Source
Electron. J. Statist., Volume 12, Number 2 (2018), 2806-2872.
Dates
First available in Project Euclid: 18 September 2018
https://projecteuclid.org/euclid.ejs/1537257627
Digital Object Identifier
doi:10.1214/18-EJS1474
#### Citation
Clémençon, Stephan; Thomas, Albert. Mass volume curves and anomaly ranking. Electron. J. Statist. 12 (2018), no. 2, 2806--2872. doi:10.1214/18-EJS1474. https://projecteuclid.org/euclid.ejs/1537257627
#### References
• Cadre, B. (2006). Kernel Estimation Of Density Level Sets., Journal of Multivariate Analysis 97 999–1023.
• Cadre, B., Pelletier, B. and Pudlo, P. (2013). Estimation of density level sets with a given probability content., Journal of Nonparametric Statistics 25 261–272.
• Cavalier, L. (1997). Nonparametric Estimation of Regression Level Sets., Statistics 29 131–160.
• Clémençon, S. and Jakubowicz, J. (2013). Scoring Anomalies: a M-estimation formulation. In, Proceedings of the 16-th International Conference on Artificial Intelligence and Statistics, Scottsdale, USA.
• Clémençon, S. and Robbiano, S. (2014). Anomaly ranking as supervised bipartite ranking. In, Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China 343–351.
• Clémençon, S. and Vayatis, N. (2009). Adaptive Estimation of the Optimal ROC Curve and a Bipartite Ranking Algorithm. In, Algorithmic Learning Theory. Lecture Notes in Computer Science 5809 216–231. Springer Berlin Heidelberg.
• Csörgő, M. (1983)., Quantile Processes with Statistical Applications. Society for Industrial and Applied Mathematics.
• Csörgő, M. and Révész, P. (1978). Strong Approximations of the Quantile Process., The Annals of Statistics 6 882–894.
• Csörgő, M. and Révész, P. (1981)., Strong Approximations in Probability and Statistics. Academic Press.
• DeVore, R. (1987). A note on adaptive approximation., Approx. Theory Appl. 3 74–78.
• DeVore, R. A. (1998). Nonlinear Approximation., Acta Numerica 7 51–150.
• Donoho, D. and Gasko, M. (1992). Breakdown properties of location estimates based on half space depth and projected outlyingness., The Annals of Statistics 20 1803–1827.
• Efron, B. (1979). Bootstrap methods: another look at the jacknife., Annals of Statistics 7 1–26.
• Egan, J. P. (1975)., Signal Detection Theory and ROC Analysis. Academic Press.
• Einmahl, J. H. J. and Mason, D. M. (1992). Generalized Quantile Processes., The Annals of Statistics 20 1062–1078.
• Embrechts, P. and Hofert, M. (2013). A note on generalized inverses., Mathematical Methods of Operations Research 77 423–432.
• Falk, M. and Reiss, R. (1989). Weak convergence of smoothed and nonsmoothed bootstrap quantile estimates., Annals of Probability 17 362–371.
• Giné, E. and Guillou, A. (2002). Rates of strong uniform consistency for multivariate kernel density estimators., Ann. Inst. Poincaré (B), Probabilités et Statistiques 38 907–921.
• Hall, P. (1986). On the number of bootstrap simulations required to construct a confidence interval., Annals of Statistics 14 1453–1462.
• Koltchinskii, V. (1997). M-estimation, convexity and quantiles., The Annals of Statistics 25 435–477.
• Koltchinskii, V. (2006). Local Rademacher complexities and oracle inequalities in risk minimization (with discussion)., The Annals of Statistics 34 2593–2706.
• Lifshits, M. A. (1987). On the distribution of the maximum of a Gaussian process., Theory of Probability and its Applications 31 125–132.
• Liu, R. Y., Parelius, J. M. and Singh, K. (1999). Multivariate analysis by data depth: descriptive statistics, graphics and inference., Ann. Statist. 27 783–858.
• Lovász, L. and Vempala, S. (2006). Simulated annealing in convex bodies and an $O(n^4)$ volume algorithm., Journal of Computer and System Sciences 72 392–417.
• Mallat, S. (1990)., A Wavelet Tour of Signal Processing. Academic Press.
• Massart, P. (1990). The Tight Constant in the Dvoretzky-Kiefer-Wolfowitz Inequality., Ann. Probab. 18 1269–1283.
• Muller, D. W. and Sawitzki, G. (1991). Excess Mass Estimates and Tests for Multimodality., Journal of the American Statistical Association 86 738–746.
• Pitt, L. D. and Tran, L. T. (1979). Local Sample Path Properties of Gaussian Fields., Ann. Probab. 7 477–493.
• Polonik, W. (1995). Measuring Mass Concentrations And Estimating Density Contour Clusters – An Excess Mass Approach., The Annals of Statistics 23 855–881.
• Polonik, W. (1997). Minimum volume sets and generalized quantile processes., Stochastic Processes and their Applications 69 1–24.
• Polonik, W. (1999). Concentration and goodness-of-fit in higher dimensions: (asymptotically) distribution-free methods., The Annals of Statistics 27 1210–1229.
• Rigollet, P. and Vert, R. (2009). Fast rates for plug-in estimators of density level sets., Bernoulli 14 1154–1178.
• Sargan, J. D. and Mikhail, W. M. (1971). A General Approximation to the Distribution of Instrumental Variables Estimates., Econometrica 39 131–169.
• Scott, C. and Nowak, R. (2006). Learning Minimum Volume Sets., Journal of Machine Learning Research 7 665–704.
• Silverman, B. and Young, G. (1987). The bootstrap: to smooth or not to smooth., Biometrika 7 469–479.
• Steinwart, I., Hush, D. and Scovel, C. (2005). A classification framework for anomaly detection., J. Machine Learning Research 6 211–232.
• Stute, W. (1982). A Law of the Logarithm for Kernel Density Estimators., The Annals of Probability 10 414–422.
• Tsirel’son, V. S. (1976). The Density of the Distribution of the Maximum of a Gaussian Process., Theory of Probability & Its Applications 20 847–856.
• Tsybakov, A. (1997). On nonparametric estimation of density level sets., Annals of Statistics 25 948–969.
• Tukey, J. (1975). Mathematics and picturing data. (R. D. James, ed.) 523–531. Canadian Math., Congress.
• Viswanathan, K., Choudur, L., Talwar, V., Wang, C., Macdonald, G. and Satterfield, W. (2012). Ranking Anomalies in Data Centers. In, Network Operations and System Management (R. D. James, ed.) 79–87. IEEE.
• Wand, M. P. and Jones, M. C. (1994)., Kernel Smoothing. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis.
• Zuo, B. Y. and Serfling, R. (2000). General notions of statistical depth function., The Annals of Statistics 28 461–482. | 2018-11-21 06:18:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5671324729919434, "perplexity": 4414.121399217332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747215.81/warc/CC-MAIN-20181121052254-20181121074254-00382.warc.gz"} |
http://vm.udsu.ru/issues/archive/issue/2014-1-5 | +7 (3412) 91 60 92
## Archive of Issues
Russia Izhevsk
Year
2014
Issue
1
Pages
58-65
Section Mathematics Title “Layerwise” scattering for a difference Schrödinger operator Author(-s) Morozova L.E.a, Chuburin Yu.P.b Affiliations Izhevsk State Technical Universitya, Physical Technical Institute, Ural Branch of the Russian Academy of Sciencesb Abstract In modern physics literature, the need for formulas that permit, in a quantum one-dimensional problem, to reduce a calculation of the reflection (transmission) probability for the potential consisting of some “barriers” to the reflection and transmission probabilities over these “barriers” repeatedly occurred. In this paper, we study the scattering problem for the difference Schrödinger operator with the potential which is the sum of $N$ functions (describing the “barriers” or “layers”) with pairwise disjoint supports. With the help of the Lippmann-Schwinger equation, we proved the theorem which reduces the calculation of the reflection and transmission amplitudes for this potential, to the calculation of the ones for these barriers. For $N=2$ simple explicit formulas which realized this reduction were obtained. The particular cases for the even first barrier and two identical even (after appropriate shifts) barriers were studied. Of course, the similar results hold for the reflection (transmission) probabilities. We obtained the simple equation for the double-barrier structure resonances in terms of the amplitudes of each of the two barriers. In the paper, we also present the alternative scheme of the proof of the obtained results which are based on the series expansion of the T-operator. This approach substantiates the physical understanding of the scattering by a multilayer structure as multiple scattering on separate layers. To proof the theorems, the known method of reduction of the Lippmann-Schwinger equation to the “modified” equation in a Hilbert space is used. Of course, all the results remain valid for the “continuous” Schrödinger operator, and the choice of the discrete approach is due to its growing popularity in the quantum theory of solids. Keywords difference Schrödinger operator, Lippmann-Schwinger equation, reflection and transmission coefficients UDC 517.958, 530.145.6 MSC 81Q10, 81Q15 DOI 10.20537/vm140105 Received 14 January 2014 Language Russian Citation Morozova L.E., Chuburin Yu.P. “Layerwise” scattering for a difference Schrödinger operator, Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki, 2014, issue 1, pp. 58-65. References Lousse V., Vigneron J.P. Use of Fano resonances for bistable optical transfer through photonic crystal films, Phys. Rev. B., 2004, vol. 69, 155106 (11 p). Broer W., Hoenders B.J. Natural modes and resonances in a dispersive stratified N-layer medium, J. Phys. A: Math. Theor., 2009, vol. 42, 245207 (18 p). Gain J., Sarkar M.D., Kundu S. Energy and effective mass dependence of electron tunnelling through multiple quantum barriers in different heterostructures, 2010, 8 p., arXiv: 1002.1931. http://arxiv.org/abs/1002.1931 Pendry J.B. Low energy electron diffraction, London: Academic Press, 1974. Datta S. Kvantovyi transport: ot atoma k tranzistoru (Quantum transport: from the atom to the transistor), Moscow-Izhevsk: Regular and Chaotic Dynamics, Institute of Computer Science, 2009, 532 p. Reed M., Simon B. Metody sovremennoi matematicheskoi fiziki. I. Funktsionalnyi analiz (Methods of modern mathematical physics, I. Functional analysis), Moscow: Mir, 1977, 360 p. Baranova L.Y., Chuburin Y.P. Quasi-levels of the two-particle discrete Schrödinger operator with a perturbed periodic potential, J. Phys. A.: Math. Theor., 2008, vol. 41, 435205 (11 p). Fadeev L.D., Yakubovskii О.А. Lektsii po kvantovoi mekhanike dlya studentov-matematikov (Lectures on quantum mechanics for students of mathematics), Leningrad: Leningrad State University, 1980, 200 p. Reed M., Simon B. Metody sovremennoi matematicheskoi fiziki. III. Teoriya rasseyaniya (Methods of modern mathematical physics, III. Scattering theory), Moscow: Mir, 1982, 446 p. Reed M., Simon B. Metody sovremennoi matematicheskoi fiziki. IV. Analiz operatorov (Methods of of modern mathematical physics, IV. Analysis of operators), Moscow: Mir, 1982, 428 p. Taylor J. Teoriya rasseyaniya. Kvantovaya teoriya nerelyativistskikh stolknovenii (Scattering theory: the quantum theory of non-relativistic collisions), Moscow: Mir, 1975, 567 p. Tinyukova T.S. The Lippmann-Schwinger equation for quantum wires, Vestn. Udmurt. Univ. Mat. Mekh. Komp'yut. Nauki, 2011, no. 1, pp. 99-104 (in Russian). Full text | 2020-11-25 14:16:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6600894927978516, "perplexity": 5289.07760100851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141182794.28/warc/CC-MAIN-20201125125427-20201125155427-00224.warc.gz"} |
https://dml.cz/handle/10338.dmlcz/143394?show=full | # Article
Title: Hammerstein–Nemytskii Type Nonlinear Integral Equations on Half-line in Space $L_1(0,+\infty )\cap L_{\infty }(0,+\infty )$ (English) Author: Khachatryan, Aghavard Kh. Author: Khachatryan, Khachatur A. Language: English Journal: Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica ISSN: 0231-9721 Volume: 52 Issue: 1 Year: 2013 Pages: 89-100 Summary lang: English . Category: math . Summary: The paper studies a construction of nontrivial solution for a class of Hammerstein–Nemytskii type nonlinear integral equations on half-line with noncompact Hammerstein integral operator, which belongs to space $L_1(0,+\infty )\cap L_{\infty }(0,+\infty )$. This class of equations is the natural generalization of Wiener-Hopf type conservative integral equations. Examples are given to illustrate the results. For one type of considering equations continuity and uniqueness of the solution is established. (English) Keyword: Wiener–Hopf operator Keyword: Hammerstein–Nemytskii equation Keyword: Caratheodory condition Keyword: one-parameter family of positive solutions Keyword: iteration Keyword: monotonic increasing and bounded solution MSC: 45G05 MSC: 47H30 idZBL: Zbl 1290.45001 idMR: MR3202752 . Date available: 2013-08-02T08:00:42Z Last updated: 2014-07-30 Stable URL: http://hdl.handle.net/10338.dmlcz/143394 . Reference: [1] Arabadjyan, L. G., Yengibaryan, N. B.: Convolution equations and nonlinear functional equations. Itogi nauki i teckniki, Math. Analysis 4 (1984), 175–242 (in Russian). MR 0780564 Reference: [2] Gokhberg, I. Ts., Feldman, I. A.: Convolution Equations and Proections Methods of Solutions. Nauka, Moscow, 1971. MR 0355674 Reference: [3] Khachatryan, A. Kh., Khachatryan, Kh. A.: Existence and uniqueness theorem for a Hammerstein nonlinear integral equation. Opuscula, Mathematica 31, 3 (2011), 393–398. Zbl 1228.45007, MR 2802902, 10.7494/OpMath.2011.31.3.393 Reference: [4] Khachatryan, A. Kh., Khachatryan, Kh. A.: On solvability of a nonlinear problem in theory of income distribution. Eurasian Math. Jounal 2 (2011), 75–88. Zbl 1258.45004, MR 2910832 Reference: [5] Khachatryan, Kh. A.: On one class of nonlinear integral equations with noncompact operator. J. Contemporary Math. Analysis 46, 2 (2011), 71–86. MR 2828824 Reference: [6] Khachatryan, Kh. A.: Some classes of Urysohn nonlinear integral equations on half line. Docl. NAS Belarus 55, 1 (2011), 5–9. MR 2932258 Reference: [7] Kolmogorov, A. N., Fomin, V. C.: Elements of Functions Theory and Functional Analysis. Nauka, Moscow, 1981 (in Russian). Reference: [8] Lindley, D. V.: The theory of queue with a single sever. Proc. Cambridge Phil. Soc. 48 (1952), 277–289. MR 0046597 Reference: [9] Milojevic, P. S.: A global description of solution to nonlinear perturbations of the Wiener–Hopf integral equations. El. Journal of Differential Equations 51 (2006), 1–14. MR 2226924 .
## Files
Files Size Format View
ActaOlom_52-2013-1_8.pdf 252.6Kb application/pdf View/Open
Partner of | 2020-07-03 20:08:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5012844204902649, "perplexity": 8717.588546306855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882934.6/warc/CC-MAIN-20200703184459-20200703214459-00186.warc.gz"} |
http://math.stackexchange.com/questions/226628/orders-of-growth-between-polynomial-and-exponential?answertab=active | # Orders of Growth between Polynomial and Exponential
What is known in contemporary mathematics about orders of growth for functions that exceed any degree polynomial, but fall short of exponential? This is a subject for which I've found little literature in the past.
An example: $Ae^{a\sqrt x}$ clearly will outrun any finite degree polynomial, but will be outrun by $Be^{bx}$.
If we replace $x$ with $y^2$ then that example doesn't seem so deep. Are there functions that exceed polynomial growth yet fall short of $Ae^{ax^p}$ for any power $0<p<1$? What classes of functions can we distinguish with different kinds of in-between orders of growth? What can we know about their power series expansions, or behavior in the complex plane? Those are examples of the kinds of questions I have, and would like to find literature on.
Have any definitions or terminology been established concerning this? The right jargon will facilitate searching.
-
I added the computational-complexity tag since that's such a locus of interest in questions like these. – Kevin Carlson Nov 1 '12 at 8:24
One of the best-known classes is the "quasi-polynomials", which are exponentials of polynomials in logs, e.g. $e^{\log^2(x)+\log x}$, which you might also write as $x^{\log(x)+1}$. As long as the degree of the exponent is greater than $1$, these fit between polynomial and exponential.
One has also the "sup-exponentials," which grow as $e^\phi$ where $\lim\limits_{x\to \infty}\frac{x}{\phi(x)}=0$. The most obvious examples that aren't quasi-polynomial are along the lines of the one you gave.
These don't exhaust the possibilities, though. You may be interested in a considerable volume of discussion over at MO of functions $f$ such that $f(f(x))$ is exponential. | 2015-05-30 19:17:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6674050688743591, "perplexity": 515.099623633716}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932596.84/warc/CC-MAIN-20150521113212-00186-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://aviation.stackexchange.com/questions/38027/does-lift-drag-generally-improve-at-higher-reynolds-numbers | Does Lift/Drag generally improve at higher Reynolds numbers?
I have been looking at the lift/drag polars for various airfoil profiles. The L/D ratio seems to improve at higher Reynolds numbers. Obviously different wings are optimized for different speeds, and there are many other factor to consider, but in general terms do wings get more efficient at higher speeds?
I believe a 747 is more efficient than a model airplane because of the higher Reynolds number it operates at. Is this correct?
Yes, the boundary layer that surrounds the airfoil is known to decrease its thickness as the Reynolds number increases; its relative thickness to the characteristic length (the cord for example) scales as $Re^{-1/2}$ in steady flow, and the polar curve becomes more horizontal when Re increases. Meanwhile, lift coefficient increases just a little bit (check this post).
$$\mathrm{Re} = \frac{\rho u L}{\mu} = \frac{u L}{\nu}$$
One can intuitively feel that an object moving through a less vicious fluid will experience less drag. While the lift induced by momentum change in the fluid remains similar (constant $$u$$ and $$\rho$$). | 2020-01-29 21:34:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7359983325004578, "perplexity": 944.3311947786353}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251802249.87/warc/CC-MAIN-20200129194333-20200129223333-00238.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-10-radical-expressions-and-equations-10-3-operations-with-radial-expressions-practice-and-problem-solving-exercises-page-618/62 | ## Algebra 1
$\frac{\sqrt2-3}{\sqrt2-3}$ $\frac{2(\sqrt2-3)}{2(\sqrt2-3)}$ $\frac{10(\sqrt2-3)}{10(\sqrt2-3)}$ All of these will be the same since any number divided by itself is 1.
$\frac{\sqrt2-3}{\sqrt2-3}*\frac1{\sqrt2+3}$ $\frac{\sqrt2-3}{2-9}$ $\frac{\sqrt2-3}{-7}$ $\frac{10(\sqrt2-3)}{10(\sqrt2-3)}*\frac1{\sqrt2+3}$ $\frac{10(\sqrt2-3)}{10*({2-9})}$ $\frac{\sqrt2-3}{{2-9}}$ $\frac{\sqrt2-3}{{-7}}$ | 2020-09-30 14:57:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8282731771469116, "perplexity": 109.02042334756763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402127075.68/warc/CC-MAIN-20200930141310-20200930171310-00620.warc.gz"} |
https://gamedev.stackexchange.com/questions/131946/implementing-an-ai-controller-for-pacman | # Implementing an AI controller for Pacman
I'm currently implementing an AI controller class that is being used to determine the moves that ms.pacman should make to collect pills and avoid ghosts. In order to determine which is the best move to make in each state (configuration) of the game, I'm using Breadth-First Search to search the decision tree at each state. I've gotten the controller to work (at least somewhat) for Depth-First Search so I was trying to apply the same logic here.
The idea for DFS was to start with a "root" or driver function which would iterate through the possible moves ms. pacman could make at each state in the decision tree and then make a call to a recursive function that searches the tree to find the terminal state (point at which all pills have been eaten) with the highest value (e.g. highest score). This value is returned from the recursive function and stored in a variable in the root function. The root function the returns the move (out of possible moves) that had the highest value in the tree.
Since BFS is not recursive, I was thinking that it wouldn't be necessary to have a driver function like this, so I decided to put it all in one function. Based on DFS, I'm implementing BFS using a queue which stores all of the neighboring states of the current state and visits each state in FIFO order. Once a state is reached in which all pills have been eaten, the current Move should be returned. I'm not sure if this is the best approach, but its what I could come up with. The problem is that the code for the function takes too long to run and ms.Pacman immediately just goes left and runs into a wall. I'm assuming this is because the terminal state is never being found. Here is the function (in Java):
public MOVE bfs(Game state){
EnumMap<GHOST,MOVE> ghostMove = new EnumMap<>(GHOST.class);
MOVE bestMove = MOVE.NEUTRAL;
while(!q.isEmpty()){
Game current = q.peek();
q.remove();
for (MOVE move : current.getPossibleMoves (current.getPacmanCurrentNodeIndex())) {
Game neighbor = state.copy();
if ((current.getNumberOfActivePills() == 0) && (current.getNumberOfActivePowerPills() == 0)) {
return move;
}
}
}
return bestMove;
}
Is there anything wrong with the design of the algorithm that could be causing this problem?
• Am I wrong here, in thinking that Pacman logic is pseudo random based direction? The ghosts don't actually hunt you, they just box you into bottle necks and choke points. Now I have to go and pull up the wikipedia page. Oct 24 '16 at 2:24
• Did some reading. gameinternals.com/post/2072558330/… They function off of targeted tiles. Good read. Oct 24 '16 at 2:28
• The question is: "Does your AI know the strategy of the ghosts?" If yes, then there might be an optimal strategy which can be followed without requiring a game-tree to traverse. Oct 24 '16 at 13:47
You are using brutal force to find the solution, so you can't expect that to run fast.
I'll list some improvements for you:
• Implement a fitness function which evaluates how good is a node.
• Sort the node list with the next formula: V = p*f + (1-p)*d. p is a constant between 0 and 1. f is the fitness function and d is the actual node depth(not the tree depth, the node depth).
• Be carefull about the fitness function, if you want it to find a solution, it should never evaluate better a worse node, and it should return 0 for a solution node. You can start for example with 1 for all the nodes and 0 for the solution ones, and then start thinking about some penalties to make a decision worse than another in every case by adding some values. For example, if a decision is a move which will kill the character, you can set that value to infinite. Another example, a move which take a ball is better than a move that doesn't take it. It is just a example, think about it and you will find more examples.
• You should also take in account that you will have invalid moves. For example if you are in front of a wall, you cant move forward.
It is just A* algorithm, it will solve your problem easily.
Pseudo-code:
Node A_Star_Algorithm(Node initial)
{
List open-nodes;
List close-nodes;
Node actual;
while(open-nodes is not empty)
{
actual = open-nodes[0];
open-nodes.erase(0);
if(actual is solution)
{
return actual-node;
}
else
{
List new-nodes = getChilds(actual, open-nodes, close-nodes);
sort(open-nodes, A_STAR_ECUATION); // we sort the open list placing first the smaller values
// this will make that in our next iteration we will analize
// first the best values
}
}
return NULL;
}
const float p = 0.5f // Value between 0 and 1
bool A_STAR_ECUATION(Node A, Node B)
{
float H_A = p*FITNESS_FUNCTION(Node A) + (1-p)*A.GetCost();
float H_B = p*FITNESS_FUNCTION(Node B) + (1-p)*B.GetCost();
return H_A < H_B;
}
List getChilds(Node actual, List open-nodes, List close-nodes)
{
List childs = getPosibilities(actual);
for(int i = 0;i<childs.size();i++)
{
for(int k = 0;k<open-nodes.size();k++)
{
if(open-nodes[k] == childs[i])
{
if(childs[i].GetCost() > open-nodes[k].GetCost())
childs.remove(i);
break;
}
}
}
for(int i = 0;i<childs.size();i++)
{
for(int k = 0;k<close-nodes.size();k++)
{
if(close-nodes[k] == childs[i])
{
if(childs[i].GetCost() > close-nodes[k].GetCost())
childs.remove(i);
break;
}
}
}
return childs;
}
The cost of a node is the number of moves required to reach it. The initial node for example is 0, and his childs have cost 1. The childs of this childs have cost 2....
The p value is for give more importance to the heuristic or to the cost of the node. In this case i chose to give equal importance, but if your heuristic is good, you can increase this value from 0.5f to 0.8f for example.
If your heuristic is admissible, which means that it doesn't give better values to nodes which are worse than anothers, it will find the best solution.
• I was actually initially keeping track of which terminal state had the best value (the state where all pills had been eaten and the score was the highest) but that didn't seem to really pan out. Would you mind providing at least a codes sample to make it more concrete what you are suggesting? Oct 24 '16 at 3:55
• I'll prepare a pseudo-code for you. Ill edit my original comment. I think that in 20 minutes it will be done Oct 24 '16 at 13:07
• I was thinking about one thing. The enemies, the moves are aleatory? If they are then it wont give you a good solution. If the enemies are aleatory then the machine should be able to learn. The only thing that will solve this if the enemies moves are aleatory will be a general purpose algorithm, and you can discard that possibility. Oct 24 '16 at 14:18
• If the moves are predictable then a genetic algorithm may be able to solve it, but it won´t solve it always. Just sometimes will win. Oct 24 '16 at 14:22
This algorithm assumes that the game state progresses unavoidably towards an end state. However, it is possible for ms pac-man to move around (e.g. in a circle); with ghosts chasing her, whilst no pills are being consumed. If this happens, the algorithm will not terminate.
So, I think you have to figure out a way to detect (or avoid) cycles. There are better ways to prevent cycles, but I think the simplest way is to limit the number of moves (doing something like the code below).
...
while(!q.isEmpty()){
Game current = q.peek();
q.remove();
if (q.moveCount() < MAX_MOVES) {
for ...
...
If you want to consider a alternative way to tackle this problem, have a look at behaviour trees, and work in the A* algorithm to aid behaviour decisions and improve behaviour actions. | 2021-09-18 14:20:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.379500150680542, "perplexity": 965.4099258466723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056476.66/warc/CC-MAIN-20210918123546-20210918153546-00703.warc.gz"} |
https://blog.spaceresearch.top/page/2/ | 0%
John L. Crassidis, and John L. Junkins, Optimal Estimation of Dynamic Systems, CRC Press, 2011.
Corrections to the book can be found at here.
# Chapter 2 Probability Concepts in Least Squares
## 2.5. Maximum Likelihood Estimation
Jonathan Ko, “Gaussian Process for Dynamic Systems”, PhD Thesis, University of Washington, 2011.
Bayes filter equation in Eq. 4.1 (p.34) has a typo (should be $\propto$, not $=$)
$p(x_t|z_{1:t},u_{1:t-1}) \propto p(z_t|x_t) \int \textcolor{red}{p(x_t|x_{t-1},u_{t-1})} \textcolor{green}{p(x_{t-1}|z_{1:t-1},u_{1:t-2})} dx_{t-1}$
• $\textcolor{red}{Red}$ part is dynamics model, describing how the state $x$ evolves in time based on the control input $u$ (p.34)
• $\textcolor{green}{Green}$ part is observation model, describing the likelihood of making an observation $z$ given the state $x$
• GP-BayesFilter improves these two parts.
The dynamics model maps the state and control $(x_t,u_t)$ to the state transition $\Delta x_t = x_{t+1} - x_t$. So, the training data is
$D_p = <(X,U),X'>$
The observation model maps from the state $x_t$ to the observation $z_t$. So, the training data is
$D_o = $
The resulting GP dynamics and observation models are (p.44)
$p(x_t|x_{t-1},u_{t-1}) \approx \mathcal{N}(\text{GP}_\mu([x_{t-1},u_{t-1}],D_p), \text{GP}_\Sigma([x_{t-1},u_{t-1}],D_p))$
and
$p(z_t|x_t) \approx \mathcal{N}(\text{GP}_\mu(x_t,D_o), \text{GP}_\Sigma(x_t,D_o))$
almosallam_heteroscedastic_2017
Heteroscedastic Gaussian processes for uncertain and incomplete data
Ibrahim Almosallam
PhD Thesis, University of Oxford, https://ora.ox.ac.uk/objects/uuid:6a3b600d-5759-456a-b785-5f89cf4ede6d
If you are looking at this post, it means you are also pretty much a newbie to TensorFlow, like me, as of 2020-07-29.
# tensorflow.keras
Keras is already part of TensorFlow, so, use from tensorflow.keras import ***, not from keras import ***.
TensorFlow backend
## Early stopping
EarlyStopping
model.fit(..., callbacks=[EarlyStopping(monitor='val_loss', patience=5, verbose=1, mode='min', restore_best_weights=True)], ...)
# Reproducibility of results
TL;DR
Set all random seeds
Use tensorflow.keras instead standalone keras
Use model.predict_on_batch(x).numpy() for predicting speed.
I use CNN for time series prediction (1D), not for image works (2D or 3D).
# Learning Materials
• How to Develop 1D Convolutional Neural Network Models for Human Activity Recognition
• time series classification
• two 1D CNN layers, followed by a dropout layer for regularization, then a pooling layer. 为什么这样?
• It is common to define CNN layers in groups of two in order to give the model a good chance of learning features from the input data. 为什么这样?
• CNNs learn very quickly, so the dropout layer is intended to help slow down the learning process
• The pooling layer … consolidating them to only the most essential elements.
• After the CNN and pooling, the learned features are flattened to one long vector
• a standard configuration of 64 parallel feature maps and a kernel size of 3 (Where comes this “standard” configuration?)
• a multi-headed model, where each head of the model reads the input time steps using a different sized kernel.
I ran across this document page of pytransform3d, and it claims:
There are two different quaternion conventions: Hamilton’s convention defines ijk = -1 and the JPL convention (from NASA’s Jet Propulsion Laboratory, JPL) defines ijk = 1. We use Hamilton’s convention.
It’s not new to know about different definitions (mostly the sequency differs), but what is this ijk=1 definition? First time to hear about.
Then I continue diving into the reference source it provided.
Only after this, I found that the problem is not only about the sequence of the components, but about something more fundamental. So I put down this summary for my future reference.
# $(q_0, q_1, q_2, q_3)$ or $(q_1, q_2, q_3, q_4)$ ?
The answer is it doesn’t matter that much. This is not a mathematical or fundamental difference.
Equations can be easily converted. Codes can be easily modified.
# $ij=k$ or $ij=-k$
1. Harold L. Hallock, Gary Welter, David G. Simpson, and Christopher Rouff, ACS without an attitude, London: Springer, 2017.
• (p.16) Alternatively, one could follow a different convention with quaternion multiplication. Many authors prefer a convention that, although not expressed as such, essentially redefines Hamilton’s hyper-complex commutation relations (Eq. 1.5b above) into $i j = -k, k j = -i, ki = -j$
The quaternion representation is one of the best characterizations, and this chapter will focus on this representation. The presentation in this chapter follows the style of [99, 205, 219].
# Which one is used in references?
Will keep updating as I read more references…
## Using $ij=k$ and $(q_0, q_1, q_2, q_3)$
1. Yaguang Yang, Spacecraft Modeling, Attitude Determination, and Control Quaternion-based Approach, Boca Raton, FL : CRC Press, 2019. | “A science publishers book.”: CRC Press, 2019. [Link].
## Using $ij=k$ and $(q_1, q_2, q_3, q_4)$
1. Harold L. Hallock, Gary Welter, David G. Simpson, and Christopher Rouff, ACS without an attitude, London: Springer, 2017.
## Using $ij=-k$ and $(q_1, q_2, q_3, q_4)$
1. F. Landis Markley, and John L. Crassidis, Fundamentals of Spacecraft Attitude Determination and Control, New York, NY: Springer New York, 2014.
2. Malcolm D. Shuster, “The nature of the quaternion”, The Journal of the Astronautical Sciences, vol. 56, Sep. 2008, pp. 359–373.
3. Hanspeter Schaub, and John L. Junkins, Analytical Mechanics of Space Systems (Second Edition), Reston, VA: American Institute of Aeronautics and Astronautics, 2009.
(p.107) 似乎是默认了与 Rotation matrix 顺序一致的一种,即 $ij=-k$
# Book:
Probabilistic Programming & Bayesian Methods for Hackers (Version 0.1)
PyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book’s main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.
We assign them to PyMC3’s stochastic variables, so-called because they are treated by the back end as random number generators.
Excerpt some information about the attitude subsystem of CubeSats.
Learned something about the attitude estimation EKF used in several books and papers. Try to note something here to clarify their relationships.
The only thing I’m sure about is:
The quaternion attitude + gyro bias estimator is widely used in practice. | 2021-10-26 21:53:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 30, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7379034161567688, "perplexity": 2747.771082439549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00154.warc.gz"} |
http://datahacker.rs/category/other/ | datahacker.rs@gmail.com
# Category: Other
### # K An implementation of a Shallow Neural Network in Keras – MNIST dataset
In this post we will see how we can classify handwritten digits using shallow neural network implemented in Keras. Our model will have 2 layers, with 64(height x width) neurons in the input layer and 10 neurons in the output layer.We will use normal initializer that generates tensors with a normal distribution. The optimizer we’ll use is Adam .It is an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure…
### #K An implementation of a Shallow Neural Network in Keras – Spirals dataset
In this post we will see how we can classify a spirals dataset with a shallow neural network implemented in Keras. Let’s start by importing libraries that we will need in the our code. Here, we will make our dataset and divide it into training and testing set. Let’s now create a shallow neural network! Next, we will make predictions and plot the accuracy and loss function of our model. Now, we will make some…
In this post we will see how to implement Gradient Descent using TensorFlow. First we will import all libraries that we will need in our code. Next, we will define our variable $$\omega$$ and we will initialize it with $$-3$$. With the following peace of code we will also define our cost function $$J(\omega) = (\omega – 3)^2$$. With the next two lines of code we specify the initialization of our variables… | 2019-03-25 16:13:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46902716159820557, "perplexity": 554.4350094570219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204077.10/warc/CC-MAIN-20190325153323-20190325175323-00199.warc.gz"} |
http://www.cfd-online.com/W/index.php?title=Reichardt_profile&oldid=8840 | # Reichardt profile
$u^+$ Dimensionless velocity $y^+$ Dimensionless wall distance $\kappa$ von Karman's constant ($\approx 0.41$) | 2015-10-07 11:25:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9014078974723816, "perplexity": 14763.607982851137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737833893.68/warc/CC-MAIN-20151001221713-00178-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://www.gamedev.net/forums/topic/624042-musicsounds-over-network-sdl-net/ | • 12
• 12
• 9
• 10
• 13
# Music\Sounds over Network (SDL_Net)
This topic is 2151 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Alright, my library of choice is SDL_Net, documentation isn't that great, but I bet it's pretty simple once I learn how transfer things between the client. I've got a simple console application using the little tutorial on the SDL_Net Wiki. What it does is create a TCP connection between the client and server and the client can send messages to the server, and the server prints them out on the server console. The tutorial came with two commands that it does a strcmp with this char buffer[512] which are exit and quit. Obvious as they are, "exit" closes the client; "quit" closes both the server and the client.
So I added in my own commands and allowed the server to play music\sounds, but what good is that if I'm away from the server? I also followed another guide to get the IP to appear in the x.x.x.x format as opposed to the hex format which was made for winsock, but still works all the same.
Anyway, we come to the problem that I'm trying to figure out is how can I play music\sounds from the server and play them on the client? (Streaming) I assume I need a buffer or some sort to send a little bit at a time, and free the memory once it's done, but to be perfectly honest, I have no idea how.
Anyhow, here's my current server code (Since I'm new to this, I left the comments from the SDL_Net Wiki page there):
#include <stdio.h> #include <stdlib.h> #include <string.h> #include <SDL.h> #include <SDL_mixer.h> #include <SDL_net.h> Mix_Music* alien = NULL; TCPsocket sd, csd; /* Socket descriptor, Client socket descriptor */ IPaddress ip, *remoteIP; void Clean() { SDLNet_TCP_Close(sd); SDLNet_Quit(); if (Mix_PlayingMusic() == 1) Mix_HaltMusic(); Mix_FreeMusic(alien); Mix_CloseAudio(); } int main(int argc, char **argv) { if( Mix_OpenAudio(22050, MIX_DEFAULT_FORMAT, 2, 4096) < 0) { FILE* lerr = fopen("errlog.txt", "wb+"); fprintf(lerr, "Error opening audio.\n"); fclose(lerr); exit(EXIT_FAILURE); } atexit(Clean); alien = Mix_LoadMUS("alien.mp3"); if (!alien) { FILE* lerr = fopen("errlog.txt", "wb+"); fprintf(lerr, "Error opening alien.mp3.\n"); fclose(lerr); exit(EXIT_FAILURE); } int quit, quit2; char buffer[512]; if (SDLNet_Init() < 0) { FILE* lerr = fopen("errlog.txt", "wb+"); fprintf(lerr, "SDLNet_Init: %s\n", SDLNet_GetError()); fclose(lerr); exit(EXIT_FAILURE); } /* Resolving the host using NULL make network interface to listen */ if (SDLNet_ResolveHost(&ip, NULL, 2000) < 0) { FILE* lerr = fopen("errlog.txt", "wb+"); fprintf(lerr, "SDLNet_ResolveHost: %s\n", SDLNet_GetError()); fclose(lerr); exit(EXIT_FAILURE); } /* Open a connection with the IP provided (listen on the host's port) */ if (!(sd = SDLNet_TCP_Open(&ip))) { FILE* lerr = fopen("errlog.txt", "wb+"); fprintf(lerr, "SDLNet_TCP_Open: %s\n", SDLNet_GetError()); fclose(lerr); exit(EXIT_FAILURE); } /* Wait for a connection, send data and term */ quit = 0; while (!quit) { /* This check the sd if there is a pending connection. * If there is one, accept that, and open a new socket for communicating */ if ((csd = SDLNet_TCP_Accept(sd))) { /* Now we can communicate with the client using csd socket * sd will remain opened waiting other connections */ /* Get the remote address */ if ((remoteIP = SDLNet_TCP_GetPeerAddress(csd))) { /* Print the address, converting in the host format */ unsigned char IP[4] = {0,0,0,0}; for (int i=0; i<4; i++) { IP = ( SDLNet_Read32(&remoteIP->host) >> (i*8) ) & 0xFF; } printf("Client connected: %d.%d.%d.%d:%d\n", IP[3],IP[2],IP[1],IP[0], SDLNet_Read16(&remoteIP->port)); } else { FILE* lerr = fopen("errlog.txt", "wb+"); fprintf(lerr, "SDLNet_TCP_GetPeerAddress: %s\n", SDLNet_GetError()); fclose(lerr); } quit2 = 0; while (!quit2) { if (SDLNet_TCP_Recv(csd, buffer, 512) > 0) { printf("Client said: %s\n", buffer); if(strcmp(buffer, "exit") == 0) /* Terminate this connection */ { quit2 = 1; printf("Client terminated the connection...\n"); } if(strcmp(buffer, "quit") == 0) /* Quit the program */ { quit2 = 1; quit = 1; printf("Client terminated server...\n"); } if (strcmp(buffer, "alien") == 0) { if (Mix_PlayingMusic() == 0) Mix_PlayMusic(alien, -1); } if (strcmp(buffer, "stop") == 0) { if (Mix_PlayingMusic() == 1) Mix_HaltMusic(); } if (strcmp(buffer, "pause") == 0) { if (Mix_PlayingMusic() == 1) Mix_PauseMusic(); } if (strcmp(buffer, "resume") == 0) { if (Mix_PausedMusic() == 1) Mix_ResumeMusic(); } } } /* Close the client socket */ SDLNet_TCP_Close(csd); } } return EXIT_SUCCESS; }
And here is my client code:
#include <stdio.h> #include <stdlib.h> #include <string.h> #include <SDL_net.h> int main(int argc, char **argv) { IPaddress ip; /* Server address */ TCPsocket sd; /* Socket descriptor */ int quit, len; char buffer[512]; /* Simple parameter checking */ if (argc < 3) { FILE* lerr = fopen("lerr.txt", "w+"); fprintf(lerr, "Usage: %s host port\n", argv[0]); fclose(lerr); exit(EXIT_FAILURE); } if (SDLNet_Init() < 0) { FILE* lerr = fopen("lerr.txt", "w+"); fprintf(lerr, "SDLNet_Init: %s\n", SDLNet_GetError()); fclose(lerr); exit(EXIT_FAILURE); } /* Resolve the host we are connecting to */ if (SDLNet_ResolveHost(&ip, argv[1], atoi(argv[2])) < 0) { FILE* lerr = fopen("lerr.txt", "w+"); fprintf(lerr, "SDLNet_ResolveHost: %s\n", SDLNet_GetError()); fclose(lerr); exit(EXIT_FAILURE); } /* Open a connection with the IP provided (listen on the host's port) */ if (!(sd = SDLNet_TCP_Open(&ip))) { FILE* lerr = fopen("lerr.txt", "w+"); fprintf(lerr, "SDLNet_TCP_Open: %s\n", SDLNet_GetError()); fclose(lerr); exit(EXIT_FAILURE); } /* Send messages */ quit = 0; while (!quit) { printf(">"); scanf("%s", buffer); len = strlen(buffer) + 1; if (SDLNet_TCP_Send(sd, (void *)buffer, len) < len) { FILE* lerr = fopen("lerr.txt", "w+"); fprintf(lerr, "SDLNet_TCP_Send: %s\n", SDLNet_GetError()); fclose(lerr); exit(EXIT_FAILURE); } if(strcmp(buffer, "exit") == 0) quit = 1; if(strcmp(buffer, "quit") == 0) quit = 1; } SDLNet_TCP_Close(sd); SDLNet_Quit(); return EXIT_SUCCESS; }
Any help is appreciated
##### Share on other sites
Is there a particular reason you want to stream the sound to the client? Typically you would just send a command from the server to the client that says "play sound #3845" or whatever, and the client uses a local sound resource to actually fulfill the command.
##### Share on other sites
Is there a particular reason you want to stream the sound to the client? Typically you would just send a command from the server to the client that says "play sound #3845" or whatever, and the client uses a local sound resource to actually fulfill the command.
Well, I understand that, that's simple to do, but I want to understand for one, how streaming works, and two, how to copy files over a network. If I can do those and I understand how, I feel pretty much golden. Except the fact that I made local file pointers in the server and opened them with "wb+" if there was an error, aye, what a lamebrain mistake.
##### Share on other sites
Alright after working at something related to printing to the client's console, I realized something. It can't be done with SDLNet_TCP_Send, as it's a client only function, at least according to the documentation.
SDLNet_TCP_Send sends the data over the socket. If the data cannot be sent immediately, the routine waits until all of the data may be delivered properly (it is a blocking operation). This routine is not used for server sockets.[/quote]
The only possible way I see that I can send messages to the client through a buffer, or any data to the client, I would need some sort of double connection using multiple sockets. I tried that already, but I couldn't get it to work. Unless there's another function I'm missing.
##### Share on other sites
Answer this question: "How do I play music from a FILE on the client, when all I have is the file handle (not the path)?"
Once you have that answer, you should be very close to the answer for "how do I play music from a socket handle?"
##### Share on other sites
Answer this question: "How do I play music from a FILE on the client, when all I have is the file handle (not the path)?"
Once you have that answer, you should be very close to the answer for "how do I play music from a socket handle?"
Well, the way I see it is that I can have multiple Mix_Music pointers and I can do something like mus1 = Mix_LoadMUS("mus.mp3"); then have mus2 = mus1, and so on. I'm not 100% sure on that, as pointers are just now becoming a nightmare. The simple answer would be to read something into "musClient" (assuming we have the Mix_Music* musClient) that is sent over from the server that has "musServer" and loads it so the client just has to play it. A file transfer could likely be achieved the same way. The main problem with this is right now is that apparently the server in SDL_Net can't send data to the client, and can only receive data from it. The second problem is that I need the length in bytes of the Mix_Music pointer that would be sent, which as I understand would only return 4 with any given pointer.
Aye, I'm only 16 maybe I should just give up until I have an actual teacher.
##### Share on other sites
That's a filename, not a file pointer.
It *may* be that the library you are using doesn't actually support playing from a socket handle. If so, see if it supports playing from raw memory, and if so, download the file to memory and play from there.
Aye, I'm only 16 maybe I should just give up until I have an actual teacher. [/quote]
Age is just a number! Don't give up when it gets hard. Instead, I find I make progress if I can step back and figure out *why* something is hard, and then figure out how to remove that underlying problem. Books, tutorials, tutors, forums, reading other people's code, and sleeping on the problem are all important parts of learning, and no one solution will always be right.
To become a programmer, you need a really high tolerance level for frustration, though :-)
##### Share on other sites
That's a filename, not a file pointer.
It *may* be that the library you are using doesn't actually support playing from a socket handle. If so, see if it supports playing from raw memory, and if so, download the file to memory and play from there.
Aye, I'm only 16 maybe I should just give up until I have an actual teacher. [/quote]
Age is just a number! Don't give up when it gets hard. Instead, I find I make progress if I can step back and figure out *why* something is hard, and then figure out how to remove that underlying problem. Books, tutorials, tutors, forums, reading other people's code, and sleeping on the problem are all important parts of learning, and no one solution will always be right.
To become a programmer, you need a really high tolerance level for frustration, though :-)
[/quote]
Thanks. The file pointer I talking about is when you create a Mix_Music*. This can be found under the SDL_Mixer library. There's a reason I made a thread before this asking about which libraries to use for networking and SDL_Net was appealing for it's portability and its ability to be able to use plain C or C++.
Either way, I'm not trying to recreate the same topic. I've been thinking a bit backwards here, I'm trying to figure out how to send music from a server to a client without first trying to figure out how to so much as send information to the client. The server can receive information fine, and the client can send information fine, but not the other way around. The server cannot send information, and the client can't receive information. I tried the simple creating a second char to send to the client. It would actually be more appropriate to send an int for this particular situation. Either way I tried sending a char to the client containing something like '0' or '1' and then have the client print out a message corresponding to the number it got.
This is impossible as I said before, the SDLNet_TCP_Send() is designed for the client only (according to the documentation). So then, how can I possibly send data to the client? The odd part is I'm using TCP, and looking at the documentation, there doesn't appear to be such a limitation to UDP.
##### Share on other sites
the SDLNet_TCP_Send() is designed for the client only (according to the documentation).[/quote]
It is?
Perhaps this is confusing:
... such as the client disconnecting.[/quote]
I prefer the term "peer" for this purpose, since it doesn't confuse terminology or imply direction.
But no, client here means "the thing on the other side of TCP connection", nothing more.
The server cannot send information, and the client can't receive information.[/quote]
If you can send in one direction, you can send in other. But hard to say without more detail.
Have you tried first going through the SDLNet tutorial? Edited by Antheus
##### Share on other sites
the SDLNet_TCP_Send() is designed for the client only (according to the documentation).
It is?
[/quote]
Under the description read this SDLNet_TCP_Send sends the data over the socket. If the data cannot be sent immediately, the routine waits until all of the data may be delivered properly (it is a blocking operation). This routine is not used for server sockets."
The server cannot send information, and the client can't receive information.
If you can send in one direction, you can send in other. But hard to say without more detail.
Have you tried first going through the SDLNet tutorial?
[/quote]
Everything I have is from the SDL_Net tutorial and messing with the code.
Anyhow, here's my current server code (Since I'm new to this, I left the comments from the SDL_Net Wiki page there):
After all that is kind of why I'm here, because there's that one tutorial, and barely any others if any. (As in no others that I could find) Edited by Spirrwell | 2018-03-22 08:28:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28249606490135193, "perplexity": 3689.9398149479516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647782.95/warc/CC-MAIN-20180322073140-20180322093140-00346.warc.gz"} |
https://labs.tib.eu/arxiv/?author=Satoshi%20Kuriki | • ### Optimal experimental design that minimizes the width of simultaneous confidence bands(1704.03995)
March 30, 2019 math.ST, stat.TH, stat.ME
We propose an optimal experimental design for a curvilinear regression model that minimizes the band-width of simultaneous confidence bands. Simultaneous confidence bands for curvilinear regression are constructed by evaluating the volume of a tube about a curve that is defined as a trajectory of a regression basis vector (Naiman, 1986). The proposed criterion is constructed based on the volume of a tube, and the corresponding optimal design that minimizes the volume of tube is referred to as the tube-volume optimal (TV-optimal) design. For Fourier and weighted polynomial regressions, the problem is formalized as one of minimization over the cone of Hankel positive definite matrices, and the criterion to minimize is expressed as an elliptic integral. We show that the M\"obius group keeps our problem invariant, and hence, minimization can be conducted over cross-sections of orbits. We demonstrate that for the weighted polynomial regression and the Fourier regression with three bases, the tube-volume optimal design forms an orbit of the M\"obius group containing D-optimal designs as representative elements.
• ### Simultaneous confidence bands for contrasts between several nonlinear regression curves(1510.05077)
Jan. 10, 2017 math.ST, stat.TH
We propose simultaneous confidence bands of the hyperbolic-type for the contrasts between several nonlinear (curvilinear) regression curves. The critical value of a confidence band is determined from the distribution of the maximum of a chi-square random process defined on the domain of explanatory variables. We use the volume-of-tube method to derive an upper tail probability formula of the maximum of a chi-square random process, which is asymptotically exact and sufficiently accurate in commonly used tail regions. Moreover, we prove that the formula obtained is equivalent to the expectation of the Euler-Poincare characteristic of the excursion set of the chi-square random process, and hence conservative. This result is therefore a generalization of Naiman's inequality for Gaussian random processes. As an illustrative example, growth curves of consomic mice are analyzed.
• ### Use of spurious correlation for multiplicity adjustment(1612.06029)
Dec. 19, 2016 stat.ME
We consider one of the most basic multiple testing problems that compares expectations of multivariate data among several groups. As a test statistic, a conventional (approximate) $t$-statistic is considered, and we determine its rejection region using a common rejection limit. When there are unknown correlations among test statistics, the multiplicity adjusted $p$-values are dependent on the unknown correlations. They are usually replaced with their estimates that are always consistent under any hypothesis. In this paper, we propose the use of estimates, which are not necessarily consistent and are referred to as spurious correlations, in order to improve statistical power. Through simulation studies, we verify that the proposed method asymptotically controls the family-wise error rate and clearly provides higher statistical power than existing methods. In addition, the proposed and existing methods are applied to a real multiple testing problem that compares quantitative traits among groups of mice and the results are compared.
• ### The Bivariate Lack-of-Memory Distributions(1606.05097)
Dec. 6, 2016 math.ST, stat.TH
We first review the univariate and bivariate lack-of-memory properties (LMPs). The univariate LMP is a remarkable characterization of the exponential distribution, while the bivariate LMP is shared by the famous Marshall and Olkin's, Block and Basu's as well as Freund's bivariate exponential distributions. We treat all the bivariate lack-of-memory (BLM) distributions in a unified approach and develop some new general properties of the BLM distributions, including joint moment generating function, product moments and dependence structure. Necessary and sufficient conditions for the survival functions of BLM distributions to be totally positive of order two are given. Some previous results for specific BLM distributions are improved. In particular, we show that both the Marshall--Olkin survival copula and survival function are totally positive of all orders, regardless of parameters. Besides, we point out that Slepian's inequality also holds true for the BLM distributions.
• ### Exact ZF Analysis and Computer-Algebra-Aided Evaluation in Rank-1 LoS Rician Fading(1507.07056)
May 20, 2016 cs.IT, math.IT
We study zero-forcing detection (ZF) for multiple-input/multiple-output (MIMO) spatial multiplexing under transmit-correlated Rician fading for an N_R X N_T channel matrix with rank-1 line-of-sight (LoS) component. By using matrix transformations and multivariate statistics, our exact analysis yields the signal-to-noise ratio moment generating function (m.g.f.) as an infinite series of gamma distribution m.g.f.'s and analogous series for ZF performance measures, e.g., outage probability and ergodic capacity. However, their numerical convergence is inherently problematic with increasing Rician K-factor, N_R , and N_T. We circumvent this limitation as follows. First, we derive differential equations satisfied by the performance measures with a novel automated approach employing a computer-algebra tool which implements Groebner basis computation and creative telescoping. These differential equations are then solved with the holonomic gradient method (HGM) from initial conditions computed with the infinite series. We demonstrate that HGM yields more reliable performance evaluation than by infinite series alone and more expeditious than by simulation, for realistic values of K , and even for N_R and N_T relevant to large MIMO systems. We envision extending the proposed approaches for exact analysis and reliable evaluation to more general Rician fading and other transceiver methods.
• ### Chi-Square Mixture Representations for the Distribution of the Scalar Schur Complement in a Noncentral Wishart Matrix(1512.08159)
Dec. 27, 2015 math.ST, stat.TH
We show that the distribution of the scalar Schur complement in a noncentral Wishart matrix is a mixture of central chi-square distributions with different degrees of freedom. For the case of a rank-1 noncentrality matrix, the weights of the mixture representation arise from a noncentral beta mixture of Poisson distributions.
• ### $A$-Hypergeometric Distributions and Newton Polytopes(1510.02269)
Nov. 12, 2015 math.CA, math.ST, stat.TH
We give a bijection between a quotient space of the parameters and the space of moments for any $A$-hypergeometric distribution. An algorithmic method to compute the inverse image of the map is proposed utilizing the holonomic gradient method and an asymptotic equivalence of the map and the iterative proportional scaling. The algorithm gives a method to solve a conditional maximum likelihood estimation problem in statistics. Our interplay between the theory of hypergeometric functions and statistics gives some new formulas of $A$-hypergeometric polynomials.
• ### Recursive computation for evaluating the exact $p$-values of temporal and spatial scan statistics(1511.00108)
Oct. 31, 2015 stat.CO, stat.ME
Let $V$ be a finite set of indices, and let $B_i$, $i=1,\ldots,m$, be subsets of $V$ such that $V=\bigcup_{i=1}^{m}B_i$. Let $X_i$, $i\in V$, be independent random variables, and let $X_{B_i}=(X_j)_{j\in B_i}$. In this paper, we propose a recursive computation method to calculate the conditional expectation $E\bigl[\prod_{i=1}^m\chi_i(X_{B_i}) \,|\, N\bigr]$ with $N=\sum_{i\in V}X_i$ given, where $\chi_i$ is an arbitrary function. Our method is based on the recursive summation/integration technique using the Markov property in statistics. To extract the Markov property, we define an undirected graph whose cliques are $B_j$, and obtain its chordal extension, from which we present the expressions of the recursive formula. This methodology works for a class of distributions including the Poisson distribution (that is, the conditional distribution is the multinomial). This problem is motivated from the evaluation of the multiplicity-adjusted $p$-value of scan statistics in spatial epidemiology. As an illustration of the approach, we present the real data analyses to detect temporal and spatial clustering.
• ### MIMO Zero-Forcing Performance Evaluation Using the Holonomic Gradient Method(1403.3788)
April 16, 2015 cs.IT, math.IT
For multiple-input multiple-output (MIMO) spatial-multiplexing transmission, zero-forcing detection (ZF) is appealing because of its low complexity. Our recent MIMO ZF performance analysis for Rician--Rayleigh fading, which is relevant in heterogeneous networks, has yielded for the ZF outage probability and ergodic capacity infinite-series expressions. Because they arose from expanding the confluent hypergeometric function ${_1\! F_1} (\cdot, \cdot, \sigma)$ around 0, they do not converge numerically at realistically-high Rician $K$-factor values. Therefore, herein, we seek to take advantage of the fact that ${_1\! F_1} (\cdot, \cdot, \sigma)$ satisfies a differential equation, i.e., it is a \textit{holonomic} function. Holonomic functions can be computed by the \textit{holonomic gradient method} (HGM), i.e., by numerically solving the satisfied differential equation. Thus, we first reveal that the moment generating function (m.g.f.) and probability density function (p.d.f.) of the ZF signal-to-noise ratio (SNR) are holonomic. Then, from the differential equation for ${_1\! F_1} (\cdot, \cdot, \sigma)$, we deduce those satisfied by the SNR m.g.f. and p.d.f., and demonstrate that the HGM helps compute the p.d.f. accurately at practically-relevant values of $K$. Finally, numerical integration of the SNR p.d.f. produced by HGM yields accurate ZF outage probability and ergodic capacity results.
• ### Schur Complement Based Analysis of MIMO Zero-Forcing for Rician Fading(1401.0430)
Sept. 26, 2014 cs.IT, math.IT
For multiple-input/multiple-output (MIMO) spatial multiplexing with zero-forcing detection (ZF), signal-to-noise ratio (SNR) analysis for Rician fading involves the cumbersome noncentral-Wishart distribution (NCWD) of the transmit sample-correlation (Gramian) matrix. An \textsl{approximation} with a \textsl{virtual} CWD previously yielded for the ZF SNR an approximate (virtual) Gamma distribution. However, analytical conditions qualifying the accuracy of the SNR-distribution approximation were unknown. Therefore, we have been attempting to exactly characterize ZF SNR for Rician fading. Our previous attempts succeeded only for the sole Rician-fading stream under Rician--Rayleigh fading, by writing it as scalar Schur complement (SC) in the Gramian. Herein, we pursue a more general, matrix-SC-based analysis to characterize SNRs when several streams may undergo Rician fading. On one hand, for full-Rician fading, the SC distribution is found to be exactly a CWD if and only if a channel-mean--correlation \textsl{condition} holds. Interestingly, this CWD then coincides with the \textsl{virtual} CWD ensuing from the \textsl{approximation}. Thus, under the \textsl{condition}, the actual and virtual SNR-distributions coincide. On the other hand, for Rician--Rayleigh fading, the matrix-SC distribution is characterized in terms of determinant of matrix with elementary-function entries, which also yields a new characterization of the ZF SNR. Average error probability results validate our analysis vs.~simulation.
• ### EM algorithms for estimating the Bernstein copula(1301.2677)
Jan. 15, 2014 stat.CO
A method that uses order statistics to construct multivariate distributions with fixed marginals and which utilizes a representation of the Bernstein copula in terms of a finite mixture distribution is proposed. Expectation-maximization (EM) algorithms to estimate the Bernstein copula are proposed, and a local convergence property is proved. Moreover, asymptotic properties of the proposed semiparametric estimators are provided. Illustrative examples are presented using three real data sets and a 3-dimensional simulated data set. These studies show that the Bernstein copula is able to represent various distributions flexibly and that the proposed EM algorithms work well for such data.
• ### Exact MIMO Zero-Forcing Detection Analysis for Transmit-Correlated Rician Fading(1307.2958)
Jan. 2, 2014 cs.IT, math.IT
We analyze the performance of multiple input/multiple output (MIMO) communications systems employing spatial multiplexing and zero-forcing detection (ZF). The distribution of the ZF signal-to-noise ratio (SNR) is characterized when either the intended stream or interfering streams experience Rician fading, and when the fading may be correlated on the transmit side. Previously, exact ZF analysis based on a well-known SNR expression has been hindered by the noncentrality of the Wishart distribution involved. In addition, approximation with a central-Wishart distribution has not proved consistently accurate. In contrast, the following exact ZF study proceeds from a lesser-known SNR expression that separates the intended and interfering channel-gain vectors. By first conditioning on, and then averaging over the interference, the ZF SNR distribution for Rician-Rayleigh fading is shown to be an infinite linear combination of gamma distributions. On the other hand, for Rayleigh-Rician fading, the ZF SNR is shown to be gamma-distributed. Based on the SNR distribution, we derive new series expressions for the ZF average error probability, outage probability, and ergodic capacity. Numerical results confirm the accuracy of our new expressions, and reveal effects of interference and channel statistics on performance.
• ### Approximate tail probabilities of the maximum of a chi-square field on multi-dimensional lattice points and their applications to detection of loci interactions(1012.4921)
March 30, 2013 stat.ME
Define a chi-square random field on a multi-dimensional lattice points index set with a direct-product covariance structure, and consider the distribution of the maximum of this random field. We provide two approximate formulas for the upper tail probability of the distribution based on nonlinear renewal theory and an integral-geometric approach called the volume-of-tube method. This study is motivated by the detection problem of the interactive loci pairs which play an important role in forming biological species. The joint distribution of scan statistics for detecting the pairs is regarded as the chi-square random field above, and hence the multiplicity-adjusted $p$-value can be calculated by using the proposed approximate formulas. By using these formulas, we examine the data of Mizuta, et al. (2010) who reported a new interactive loci pair of rice inter-subspecies.
• ### Likelihood ratio tests for positivity in polynomial regressions(1108.1033)
Nov. 14, 2012 math.ST, stat.TH
A polynomial that is nonnegative over a given interval is called a positive polynomial. The set of such positive polynomials forms a closed convex cone $K$. In this paper, we consider the likelihood ratio test for the hypothesis of positivity that the estimand polynomial regression curve is a positive polynomial. By considering hierarchical hypotheses including the hypothesis of positivity, we define nested likelihood ratio tests, and derive their null distributions as mixtures of chi-square distributions by using the volume-of-tubes method. The mixing probabilities are obtained by utilizing the parameterizations for the cone $K$ and its dual provided in the framework of Tchebycheff systems for polynomials of degree at most 4. For polynomials of degree greater than 4, the upper and lower bounds for the null distributions are provided. Moreover, we propose associated simultaneous confidence bounds for polynomial regression curves. Regarding computation, we demonstrate that symmetric cone programming is useful to obtain the test statistics. As an illustrative example, we conduct data analysis on growth curves of two groups. We examine the hypothesis that the growth rate (the derivative of growth curve) of one group is always higher than the other.
• ### Abstract tubes associated with perturbed polyhedra with applications to multidimensional normal probability computations(1110.2824)
Oct. 13, 2011 stat.CO
Let $K$ be a closed convex polyhedron defined by a finite number of linear inequalities. In this paper we refine the theory of abstract tubes (Naiman and Wynn, 1997) associated with $K$ when $K$ is perturbed. In particular, we focus on the perturbation that is lexicographic and in an outer direction. An algorithm for constructing the abstract tube by means of linear programming and its implementation are discussed. Using the abstract tube for perturbed $K$ combined with the recursive integration technique proposed by Miwa, Hayter and Kuriki (2003), we show that the multidimensional normal probability for a polyhedral region $K$ can be computed efficiently. In addition, abstract tubes and the distribution functions of studentized range statistics are exhibited as numerical examples.
• ### Distributions of the largest singular values of skew-symmetric random matrices and their applications to paired comparisons(1003.2711)
March 13, 2010 math.ST, stat.TH
Let $A$ be a real skew-symmetric Gaussian random matrix whose upper triangular elements are independently distributed according to the standard normal distribution. We provide the distribution of the largest singular value $\sigma_1$ of $A$. Moreover, by acknowledging the fact that the largest singular value can be regarded as the maximum of a Gaussian field, we deduce the distribution of the standardized largest singular value $\sigma_1/\sqrt{\mathrm{tr}(A'A)/2}$. These distributional results are utilized in Scheff\'{e}'s paired comparisons model. We propose tests for the hypothesis of subtractivity based on the largest singular value of the skew-symmetric residual matrix. Professional baseball league data are analyzed as an illustrative example.
• ### Graph presentations for moments of noncentral Wishart distributions and their applications(0912.0577)
Jan. 22, 2010 math.ST, stat.TH
We provide formulas for the moments of the real and complex noncentral Wishart distributions of general degrees. The obtained formulas for the real and complex cases are described in terms of the undirected and directed graphs, respectively. By considering degenerate cases, we give explicit formulas for the moments of bivariate chi-square distributions and $2\times 2$ Wishart distributions by enumerating the graphs. Noting that the Laguerre polynomials can be considered to be moments of a noncentral chi-square distributions formally, we demonstrate a combinatorial interpretation of the coefficients of the Laguerre polynomials.
• ### The tube method for the moment index in projection pursuit(0711.3931)
Nov. 25, 2007 math.ST, stat.TH
The projection pursuit index defined by a sum of squares of the third and the fourth sample cumulants is known as the moment index proposed by Jones and Sibson. Limiting distribution of the maximum of the moment index under the null hypothesis that the population is multivariate normal is shown to be the maximum of a Gaussian random field with a finite Karhunen-Loeve expansion. An approximate formula for tail probability of the maximum, which corresponds to the p-value, is given by virtue of the tube method through determining Weyl's invariants of all degrees and the critical radius of the index manifold of the Gaussian random field.
• ### Skewness and kurtosis as locally best invariant tests of normality(math/0608499)
Aug. 20, 2006 math.ST, stat.TH
Consider testing normality against a one-parameter family of univariate distributions containing the normal distribution as the boundary, e.g., the family of $t$-distributions or an infinitely divisible family with finite variance. We prove that under mild regularity conditions, the sample skewness is the locally best invariant (LBI) test of normality against a wide class of asymmetric families and the kurtosis is the LBI test against symmetric families. We also discuss non-regular cases such as testing normality against the stable family and some related results in the multivariate cases.
• ### Star-shaped distributions and their generalizations(math/0605600)
May 23, 2006 math.ST, stat.TH
Elliptically contoured distributions can be considered to be the distributions for which the contours of the density functions are proportional ellipsoids. We generalize elliptically contoured densities to star-shaped distributions'' with concentric star-shaped contours and show that many results in the former case continue to hold in the more general case. We develop a general theory in the framework of abstract group invariance so that the results can be applied to other cases as well, especially those involving random matrices. | 2020-03-30 16:27:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7389876246452332, "perplexity": 672.853912789024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497171.9/warc/CC-MAIN-20200330150913-20200330180913-00379.warc.gz"} |
http://tex.stackexchange.com/questions/160259/raggedleft-right-aligned-entries-in-pmatrix | # raggedleft (right-aligned) entries in pmatrix? [duplicate]
I'm using pmatrix from amsmath package and would like the output to align all entries in the same column to be align on the right hand side, as shown here in the code:
$\begin{pmatrix} -1 & 1 & -2\\ 0 & -1 & 4\\ 0 & 0 & 1 \end{pmatrix}$
by default, pmatrix will center.
Any way to do this?
-
from the amsmath users guide: "(If you need left or right alignment in a column or other special formats you must resort to array.)" (p.8) – barbara beeton Feb 13 at 14:20
## marked as duplicate by tohecz, Svend Tveskæg, Peter Jansson, Werner, cmhughesFeb 13 at 16:11
You can use the pmatrix* environment of the mathtools package:
\documentclass{article}
\usepackage{mathtools} % loads amsmath and some very useful complements
\begin{document}
$\begin{pmatrix*}[r] -1 & 1 & -2\\ 0 & -1 & 4\\ 0 & 0 & 1 \end{pmatrix*}$
\end{document}
-
The new tabstackengine package can also do this, including different column alignments, as in the 2nd example. One bug I need to still resolve in the package is that - characters are taken as binary minus signs, rather than unary negative signs. The workaround is to enclose them in braces, as I did here.
\documentclass{article}
\usepackage{tabstackengine}
\stackMath
\begin{document}
$\setstacktabbedgap{1ex}\parenMatrixstack[r]{% {-}1 & 1 & {-}2\\ 0 & {-}1 & 4\\ 0 & 0 & 1 }$
$\left(\tabularCenterstack{lcr}{% {-}1 & 1 & {-}2\\ 0 & {-}1 & 4\\ 0 & 0 & 1 }\right)$
\end{document}
- | 2014-03-15 23:29:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9261139631271362, "perplexity": 5489.605504539663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678700701/warc/CC-MAIN-20140313024500-00011-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://connection.ebscohost.com/c/articles/52616441/polarization-asymmetry-optical-modal-gain-saturation-via-carrier-photon-interaction-zno | TITLE
Polarization asymmetry and optical modal gain saturation via carrier–photon interaction in ZnO
AUTHOR(S)
Bumjin Kim; Heedae Kim; Sungkyun Park; Kwangseuk Kyhm; Chaeryong Cho
PUB. DATE
July 2010
SOURCE
Applied Physics Letters;7/26/2010, Vol. 97 Issue 4, p041115
SOURCE TYPE
DOC. TYPE
Article
ABSTRACT
The polarization dependence of modal gain was examined in ZnO using a variable stripe length method, where the transverse electric (TE) mode gain was dominant over the transverse magnetic (TM) mode gain due to the polarization asymmetry of the wurtzite structure. Modal gain saturation was also investigated using a modal gain contour map for the wavelength and stripe length. The TE modal gain and degree of polarization increased with increasing stripe length up to a threshold length (∼100 μm). At longer stripe lengths, rapid reduction in the carrier density along the stripe resulted in gain saturation and a spectral redshift.
ACCESSION #
52616441
Related Articles
• Nonreciprocal light transmission in silicon by Raman-induced asymmetry of the permittivity tensor. Krause, Michael // Journal of Applied Physics;May2012, Vol. 111 Issue 9, p093107
We consider the effect of Raman-induced nonreciprocal light transmission in silicon and show that it can be understood as the consequence of the asymmetry of an effective permittivity tensor at the Stokes wavelength. This viewpoint enables the derivation of a necessary condition for this effect:...
• Bias-Dependent Ultraviolet Photodetection by Au-Mg0.1Zn0.9O/ZnO-Ag Structure. Mridha, S.; Ghosh, R.; Basak, D. // Journal of Electronic Materials;Apr2007, Vol. 36 Issue 4, p524
We report on the bias-dependent photodetection of two different wavelengths of photons in the ultraviolet (UV) region by Au-Mg0.1Zn0.9O/ZnO-Ag structure deposited on a glass substrate by the sol-gel technique. Without the cap layer of Mg0.1Zn0.9O, the current-voltage (I-V) characteristic is...
• Multiple-Wavelength Focusing and Demultiplexing Plasmonic Lens Based on Asymmetric Nanoslit Arrays. Wang, Bo; Wu, Xue; Zhang, Yan // Plasmonics;Dec2013, Vol. 8 Issue 4, p1535
A multiple-wavelength focusing and demultiplexing plasmonic lens based on asymmetric nanoslit arrays is designed. The nanoslit arrays are perforated in a gold film and act as metal–insulator–metal plasmonic waveguides. By manipulating the widths of the slit arrays, the plasmonic...
• Asymmetric effect of (000 $\bar{1}$) and (0001) facets on surface and interface properties of CdS single crystal. Singaevsky, A.; Piryatinski, Y.; Grynko, D.; Dimitriev, O. // Applied Physics A: Materials Science & Processing;Jul2011, Vol. 104 Issue 1, p493
different effect of (0001) and (000 $\bar{1}$) crystal facets of the cadmium sulfide (CdS) wurtzite structure terminated with Cd and S atoms, respectively, was observed in respect to the properties of the crystal surface and interface with metal or organic semiconductor contacts. In addition to...
• Intersubband photoluminescence in InAs quantum wells. Kaspi, R.; Tilton, M. L.; Dente, G. C.; Barresi, R.; Yang, C.; Ongstad, A. P. // Applied Physics Letters;11/15/2010, Vol. 97 Issue 20, p201104
We conduct a study of photoluminescence in a series of InAs quantum wells with asymmetric barriers that are designed to generate emission from intersubband transitions near 4 μm wavelength. The results show that optical pumping of the barrier layers can be used to transfer carriers into the...
• Distance-Dependent Fluorescence Quenching Efficiency of Gold Nanodisk: Effect of Aspect Ratio-Dependent Plasmonic Absorption. Zhu, Jian; Li, Jian-Jun; Zhao, Jun-Wu // Plasmonics;Jun2012, Vol. 7 Issue 2, p201
The fluorescence quenching efficiency of an emitter close to a gold nanodisk is investigated by theoretical calculation based on the modified quasi-static approximation and fluorescence energy transfer under dipole-dipole coupling. The calculation results show that the surface plasmon resonance...
• The regime of aerosol asymmetry parameter over Europe, the Mediterranean and the Middle East based on MODIS satellite data: evaluation against surface AERONET measurements. Korras-Carraca, M. B.; Hatzianastassiou, N.; Matsoukas, C.; Gkikas, A.; Papadimas, C. D. // Atmospheric Chemistry & Physics;2015, Vol. 15 Issue 22, p13113
Atmospheric particulates are a significant forcing agent for the radiative energy budget of the Earth- atmosphere system. The particulates' interaction with radiation, which defines their climate effect, is strongly dependent on their optical properties. In the present work, we study one of the...
• Ratios of helicity amplitudes for exclusive $$\rho ^0$$ electroproduction on transversely polarized protons. Airapetian, A.; Akopov, N.; Akopov, Z.; Aschenauer, E.; Augustyniak, W.; Belostotski, S.; Blok, H.; Borissov, A.; Bryzgalov, V.; Capitani, G.; Ciullo, G.; Contalbrigo, M.; Deconinck, W.; Leo, R.; Sanctis, E.; Düren, M.; Elbakian, G.; Ellinghaus, F.; Felawka, L.; Frullani, S. // European Physical Journal C -- Particles & Fields;Jun2017, Vol. 77 Issue 6, p1
Exclusive $$\rho ^0$$ -meson electroproduction is studied by the HERMES experiment, using the 27.6 GeV longitudinally polarized electron/positron beam of HERA and a transversely polarized hydrogen target, in the kinematic region 1.0 GeV ^2
• inside: An echo from the telecom boom. Holton, Conard // Laser Focus World;Jul2006, Vol. 42 Issue 7, p59
The article presents information on a new technology called thermal imaging. Aegis Semiconductor Inc.'s spin-offcompany, RedShift Systems Inc. has taken advantage of the technology in the nontelecommunications arena of thermal imaging. The detection of long-wavelength IR (LWIR) radiation are...
Share
Courtesy of THE LIBRARY OF VIRGINIA
Sorry, but this item is not currently available from your library.
Try another library?
Sign out of this library | 2018-08-16 23:50:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7339785099029541, "perplexity": 10739.344987927865}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211316.39/warc/CC-MAIN-20180816230727-20180817010727-00149.warc.gz"} |
https://www.the-cryosphere.net/12/3589/2018/ | Journal cover Journal topic
The Cryosphere An interactive open-access journal of the European Geosciences Union
Journal topic
The Cryosphere, 12, 3589-3604, 2018
https://doi.org/10.5194/tc-12-3589-2018
The Cryosphere, 12, 3589-3604, 2018
https://doi.org/10.5194/tc-12-3589-2018
Research article 20 Nov 2018
Research article | 20 Nov 2018
An estimate of ice wedge volume for a High Arctic polar desert environment, Fosheim Peninsula, Ellesmere Island
Ice wedge volume estimate for a polar desert environment
Claire Bernard-Grand'Maison1 and Wayne Pollard2 Claire Bernard-Grand'Maison and Wayne Pollard
• 1Department of Geography, Environment and Geomatics, University of Ottawa, Ottawa, K1N 6N5, Canada
• 2Department of Geography, McGill University, Montreal, H3A 0G4, Canada
Abstract
Quantifying ground-ice volume on a regional scale is necessary to assess the vulnerability of permafrost landscapes to thaw-induced disturbance like terrain subsidence and to quantify potential carbon release. Ice wedges (IWs) are a ubiquitous ground-ice landform in the Arctic. Their high spatial variability makes generalizing their potential role in landscape change problematic. IWs form polygonal networks that are visible on satellite imagery from surface troughs. This study provides a first approximation of IW ice volume for the Fosheim Peninsula, Ellesmere Island, a continuous permafrost area characterized by polar desert conditions and extensive ground ice. We perform basic GIS analyses on high-resolution satellite imagery to delineate IW troughs and estimate the associated IW ice volume using a 3-D subsurface model. We demonstrate the potential of two semi-automated IW trough delineation methods, one newly developed and one marginally used in previous studies, to increase the time efficiency of this process compared to manual delineation. Our methods yield acceptable IW ice volume estimates, validating the value of GIS to estimate IW volume on much larger scales. We estimate that IWs are potentially present on 50 % of the Fosheim Peninsula (∼3000 km2), where 3.81 % of the top 5.9 m of permafrost could be IW ice.
1 Introduction
Arctic temperatures have increased twice as fast as the rest of the world over the past 50 years; a pattern that is expected to continue for the next century (IPCC, 2013; AMAP, 2017). Permafrost is perennially cryotic ground that is estimated to underlie up to 24 % of the Earth's land surface (French, 2018), including vast areas in the Arctic that are threatened by climate change. Potential feedbacks of thawing permafrost include widespread landscape instability, accelerated coastal erosion and a massive release of carbon into the atmosphere, thus adding to the forcing on Earth's climate through the greenhouse-gas effect (Schuur et al., 2015). The Canadian High Arctic permafrost is vulnerable to even a slight temperature increase because it lacks thermal protection from vegetation, a substantial surface organic soil layer, or thick snow cover. Subsequent melting of ground ice reinforces the disturbance on permafrost's thermal equilibrium. These effects are already seen in the form of increased subsidence and rapid melting events (Pollard et al., 2015). Ice wedges (IWs), wedge-shaped bodies of nearly pure ice, are a ground-ice type ubiquitous in the High Arctic and in areas of continuous permafrost in general. Investigating the response of IWs to climate change is a necessity to understand future permafrost degradation.
Figure 1Thermokarst processes in the Eureka Sound Lowlands. (a) Retrogressive thaw slump headwall with an exposed ice wedge (∼6 m depth) with no surface expression, Axel Heiberg Island, July 2016. Helicopter and person for scale. (b) Aerial view of an active melt out along ice wedge troughs and the resultant dissected landscape, Fosheim Peninsula, July 2015. (c) Example of backwasting of ice wedges melting out, Fosheim Peninsula, July 2013. (d) Rapid melt out of ice wedges where massive ice is present, Fosheim Peninsula, July 2017.
1.1 Ice wedges and thermokarst
The main factors controlling permafrost occurrence and depth of the active layer are air temperature, vegetation cover, soil type, snow cover and topography (French, 2018). Thermokarst refers to all the processes related to permafrost degradation involving subsidence, erosion, collapse and instability resulting from thawing of ice-rich permafrost or the melting of massive ice (van Everdingen, 1998). Ice-rich permafrost typically contains ice contents well in excess of the saturated moisture content of the sediments under natural unfrozen conditions (van Everdingen, 1998); the volume of ice in excess of saturation is called excess ice. Thermokarst is initiated when the thermal equilibrium of permafrost is disrupted as a result of increasing ground surface temperature, which causes warming in the upper part of the permafrost and increases the depth of the active layer. Thermokarst alters surface hydrology, favouring pond formation and gully formation, further causing permafrost erosion and thaw (Godin and Fortier, 2012). The magnitude of thermokarst-driven geomorphic change is highly dependent on ground-ice content (Couture and Pollard, 2007; Kokelj and Jorgenson, 2013; Pollard et al., 2015). Thermokarst features can be geomorphologically significant in landscapes containing massive ground-ice and IW ice. These permafrost features have the highest excess ice contents since they are usually composed of almost pure ice (Couture and Pollard, 1998) (Fig. 1).
Figure 2Ice wedge surface expression. (a) Representation of an epigenetic ice wedge. (b) Aerial view of ice wedge polygons on the Fosheim Peninsula, Ellesmere Island.
During the freezing season, rapid cooling of soil can lead to thermal contraction and result in the formation of cracks in frozen ground. At the beginning of the thaw season, contraction cracks are filled with meltwater that freezes to form a vertical vein of ice. Over hundreds of years, re-forming cracks create polygonal patterns and form large IWs that are widespread in the Arctic and for this reason have been the subject of abundant research (e.g. Leffingwell, 1915; Lachenbruch, 1962; Black, 1976; Mackay, 1990).
Three types of IWs are identified based on their growth direction relative to the ground surface: epigenetic, syngenetic and anti-syngenetic wedges (Mackay, 1990). Epigenetic IWs typically grow on stable surfaces in pre-existing permafrost (Fig. 2). Their V-shapes denote their tapered growth as cracks tend to form in the middle of the wedge. Syngenetic IWs grow upward as a response to surface aggradation of sediments. They are typically located in floodplains, where fluvial sedimentation occurs, in areas of eolian sedimentation and on lower segment of slopes as a result of mass wasting. They are often nested in a chevron pattern. Anti-syngenetic IWs are characterized by a gradual downward growth pattern because of an incremental removal of surface material, for example on upper slope segments affected by denudation from slow mass wasting. They penetrate deeper each year if thermal contraction cracking keeps pace with ice vein formation and their tops are truncated by thaw (Mackay, 1990). In the Canadian High Arctic polar desert, epigenetic IWs are most typical and reflect a dynamic balance between climate and geomorphology, whereas the other types are less common and occur in areas of geomorphic change (e.g. deposition and erosion). The distinct polygonal patterns produced by networks of IWs reflect the complex interaction between climate, materials and topography. In general, two polygonal morphotypes are recognized, namely high- and low-centred polygons depending on the microtopographic relationship between polygon edges and polygon centres (Mackay, 2000). In this study, our analysis is concerned with epigenetic IWs, most commonly expressed as high-centered polygons in this polar desert environment.
As IWs grow, a shallow trough often develops over the ice body reflecting the quasi-stable relationship between the active layer and the IW top. In some cases, ridges parallel to the IW trough develop, creating low-centred polygons with raised rims often higher than 50 cm (French, 2018). Preferential thaw of IW happens when snowmelt and run-off collect and flow in IW troughs creating a thermal perturbation. Changes in microtopography related to the evolution of high- and low-centred IW polygons play an important role in surface process by influencing drainage, snow distribution and vegetation. These changes generate feedbacks that accentuate polygon morphology and eventually their vulnerability to thawing (Liljedahl et al., 2016).
1.3 Ground-ice volume estimation
Analysis of permafrost and modelled disturbance due to an increase in ground temperature shows that the degree of response, specifically the magnitude of ground subsidence, is a direct function of the volume of excess ice (Couture and Pollard, 2007). Ground-ice content is a key property of permafrost terrain (Gogineni et al., 2014) and its estimation is necessary to predict the sensitivity of a particular area to disturbance (Gilbert et al., 2016), which is important for engineering and environmental evaluations. It is also crucial for quantifying carbon pools and potential fluxes in the atmosphere (Kuhry et al., 2013; Strauss et al., 2013; Ulrich et al., 2014). Understanding the cryostratigraphy and estimating ground-ice-type proportions also helps to reconstruct geomorphic history (Couture and Pollard, 2007; Gilbert et al., 2016). One of the first regional ground-ice approximation study was performed by Pollard and French (1980) for Richards Island in the Mackenzie Delta, Northwest Territories, Canada. Estimation of ground-ice in the upper 10 m of permafrost was completed using drill log data, aerial photographs and topographic maps. Intensive field studies that characterize IW terrain and estimate ground-ice content have recently been carried out in the Mackenzie Delta by Morse and Burn (2013) and on the Beaufort Sea coast in Alaska by Kanevskiy et al. (2013). The lack of detailed field data for large and basically unstudied areas in the Arctic explains why ground-ice distribution is often estimated on small regional scales.
Pore ice and segregated ice volumes can be determined from permafrost sample analysis but not large ice bodies like wedges and massive ice. To specifically conduct IW volume estimations, knowledge of IW morphology as well as IW polygons geometry are required. Point measurement data from exposed IWs, excavation and/or boreholes help to constrain IW type, shape and mean IW width and depth for a specific site (e.g. Pollard and French, 1980; Morse and Burn, 2013; Jorgenson et al., 2015). Geophysical techniques such as ground-penetrating radar (GPR) and electrical resistivity tomography (ERT) have also been used to investigate IW morphology (e.g. Munroe et al., 2007; Bode et al., 2008; De Pascale et al., 2008; Léger et al., 2017). IW are often approximated to be an inverted isosceles triangle and mean cross-sectional area of IWs is estimated with the use of width-to-depth ratios based on field observations. This works well for epigenetic and anti-syngenetic IWs but would underestimate syngenetic ice volume due to their chevron pattern (Morse and Burn, 2013). Many other assumptions are often made which lead to over-/underestimation of wedge ice. For example, Kanevskiy et al. (2013) assumed that IW polygons were square and did not take into account the active layer thickness. Pollard and French (1980) adapted their IW volume estimation for areas with less developed polygons to come up with an overall volume for Richard Island, Northwest Territories. Bearing in mind the assumptions made possibility yield large errors, these studies give some crucial information on the relative proportion of IW volume as a first approximation.
IW polygon geometry, mainly perimeter and total length of troughs for a defined surface area, can be manually calculated on a small scale from air photos (e.g. Pollard and French, 1980; Couture and Pollard, 1998). Improvements in remote-sensing capabilities for this task are needed to characterize various permafrost regions (Jorgenson and Grosse, 2016). In the review of recent advances in the study of ground ice by Gilbert et al. (2016), the use of satellite imagery has been recognized as the main contemporary method used to determine IW polygon geometry on larger scales. Access to remotely sensed high-resolution imagery encouraged the development of techniques to measure polygon geometries using Geographic Information Systems (GIS). Ulrich et al. (2014) proposed a method to estimate IW volume from high-resolution satellite images and limited ground data using 3-D GIS tools. Polygon networks are delineated from satellite imagery and converted into a triangular irregular network (TIN). Average IW width and depth from previous field surveys serve as inputs to a 3-D subsurface model of polygons, enabling wedge ice volume calculation. Their method was used in Yedoma deposits (Pleistocene-age ice-rich permafrost) and Holocene thermokarst basins in Siberia and Alaska. Two procedures were used to digitize the centre lines of troughs in their study: manual delineation and Thiessen polygon delineation. Compared to previous estimation methods, the volume accuracy is increased as the actual polygon dimensions are used. The study by Ulrich et al. (2014) establishes that GIS is an appropriate tool with which to conduct estimates of geometrically irregular features on a large scale but recognizes that precise field data are necessary.
Figure 3Sample locations of this study and potential coverage area of ice wedges on the Fosheim Peninsula in the Canadian High Arctic, shown in inset. Surficial geology data have been simplified and are from a map produced by Bell (1992). The 150 m contours (CanVec data, Natural Resources Canada, 2016) are a proxy for the Holocene sea level on the Peninsula (Bell, 1996). Coordinate system: NAD 1983 UTM 16N. Projection: Transverse Mercator.
IWs often have a distinct surface expression that can be mapped using high-resolution satellite imagery (Gilbert et al., 2016). Previous studies have shown that IWs occur literally everywhere in unconsolidated sediments in the continuous permafrost zone and that their subsurface geometry is relatively consistent and closely related to terrain conditions and surficial geology (Couture and Pollard, 2007). Given the predicted changes in Arctic climate and our current understanding about nature and distribution of IWs, it is safe to assume that IW melt will contribute greatly to High Arctic geomorphic change with feedbacks that will reinforce permafrost instability. Semi-automated techniques that delineate IW polygons on satellite images greatly improve the time efficiency and coverage area of wedge ice volume estimates compared to manual delineation (Ulrich et al., 2014). In this study, we build upon the methodology introduced by Ulrich et al. (2014) by testing GIS-based methods to delineate IW troughs and present a new semi-automated method based on watershed segmentation principles. We then provide a first approximation of IW ice volume in a High Arctic polar desert environment, the Fosheim Peninsula, to assess its sensitivity to thermokarst processes as a response to climate change.
Table 1Definition of the sample locations and satellite imagery used.
2 Study area: Fosheim Peninsula
Our study focusses on the Fosheim Peninsula of Ellesmere Island (Fig. 3), which lies within the Eureka Sound Lowlands (ESL). The ESL region roughly covers 750 km2 on central Ellesmere and Axel Heiberg Islands in the northernmost part of the Canadian Arctic Archipelago and is bordered by the Sawtooth Mountains to the east and the Mueller Ice cap to the west (Pollard et al., 2015). An Environment Canada weather station located at Eureka (8000 N, 8555 W) is in the centre of the ESL on the Fosheim Peninsula. The area is mostly flat to gently rolling with elevations below 300 m a.s.l., except for several ridges that rise to a maximum of 840 m a.s.l. where outcrops of intact bedrock occur (Bell, 1996). The surficial geology of the area is dominated (∼60 %) by unconsolidated ice-rich silty-clay marine sediments below ∼150 m a.s.l., but local fluvial, glacial and glaciofluvial deposits are present. The area is underlain by continuous permafrost ∼500 m deep. As a polar desert, it is one of the driest regions in Canada, with a mean annual precipitation of 68 mm recorded at Eureka for the period 1980–2015 (Pollard et al., 2015). The mean annual air temperature is −18.8C with the lowest mean monthly temperature in February of −37.4C and highest in July of 6.2 C for the same period. The thaw season varies between 3 and 6 weeks in length (Pollard et al., 2015). The mountains surrounding the ESL limit cold air masses from the ocean and create relatively warm July temperatures for this latitude, and a general warming trend has been noted for this month since 1980 (Pollard et al., 2015). Sparse vegetation (patchy low shrubs) and very low snowfall in this polar desert region suggests that the climate drivers are fairly consistent across the ESL. The mean active layer thickness is 60 cm and ranges between 30 and 100 cm. IW polygons are nearly continuous in unconsolidated sediments across the ESL and exposures of thick tabular massive ice bodies are numerous (Pollard et al., 2015). Couture and Pollard (1998) estimated that the volume of ground ice in the uppermost 5.9 m of permafrost comprises 1.8 %–3.5 % of wedge ice and up to 30.8 % with all ground-ice types combined. Average IW width is 1.46 m and depth is 3.23 m from a survey of 150 exposed IWs by Couture and Pollard (1998). Extreme polar latitudes often lack thermokarst features but with ice content often exceeding 60–70 % in the fine marine sediments, the ESL is an exception (Pollard et al., 2015; French, 2018). Due to a generally thin active layer, IWs are commonly close to the surface. This makes them vulnerable to an increase in the active layer thickness as a result of an increase in ground temperature. The response of High Arctic polar desert to projected climate change was modelled by Couture and Pollard (2007) with the climatic and geologic conditions of the ESL. They outlined two scenarios of +4.9 and +6.6 C mean annual air temperature increase in the 2040–2060 period compared to mean annual air temperature from the 1948–1997 period. These led to a lengthening of the thaw season by 26 days and increased thaw depths of 17–20 cm. Comparison with modelled and past disturbance values reveal that ground subsidence would be of the order of 1 m in the vicinity of IWs and greater than 1 m for massive ground-ice bodies.
Figure 4Original satellite image and delineation outputs with percentage of ice wedge volume in the top 5.9 m of permafrost for each method for each sample location.
3 Methodology
3.1 Data sources and sample locations
To assess the best techniques for IW trough delineation, we first identified a series of suitable sample locations from a detailed analysis of four high-resolution (0.5 m pixels) WorldView 2 and 3 satellite images. Like Ulrich et al. (2014) we defined the sample locations as squares of 250 × 250 m. Four sample locations, three on Ellesmere Island (EL1, 2 and 3) and one on Axel Heiberg Island (AH1), with different polygon size, morphology, density and width of troughs were selected (Table 1 and Fig. 3). All sample locations are characterized by random orthogonal polygons formed by epigenetic IWs on relatively flat surfaces (Fig. 4). The high-centred polygons on the Ellesmere Island sample locations have well-developed troughs (approximately 2–6 m wide). Wedge hierarchy reflected by variability in trough width is most visible in sample location EL1. EL2 was chosen due to its dominance of rectilinear polygons, while EL3 was chosen for the high number of polygons with small areas and their proximity to polygons with much larger areas. In contrast, AH1 was chosen because of polygons with large areas and narrow troughs where cracking is assumed to be less frequent. AH1 is the only sample location where IW cracks not forming closed polygons are visible.
3.2 Delineation of polygons
At each sample location, the following three delineating methods were performed once using built-in tools in ArcGIS (ESRI, version 10.3.1): (1) manual delineation, (2) Thiessen polygons and (3) watershed segmentation. In our method, we use and refer to specific ArcGIS tools but most GIS packages contain similar tools and functions that could be used to replicate our analysis.
3.2.1 Manual delineation
Following the Ulrich et al. (2014) methodology, we manually digitized polygons at each sample location by creating a line dataset of the troughs centre lines. Only lines enclosing complete polygons falling within the sample location were kept and visible IW cracks not enclosing any polygons were also mapped.
Figure 5Models developed with ArcGIS Model Builder for the watershed segmentation method. (a) Watershed creation: the inputs are the raster image as well as the maximum and minimum values needed to inverse the pixel values. The treatment of each site differed in the number of iterations run by the focal statistic mean tool for a satisfying basin segmentation output. The output is a basin raster, and every pixel has the value of its corresponding watershed. (b) Converting basin output to a raster of the watershed borders: watershed boundaries are classified as a raster where the value of 1 represent boundaries. The snap raster is the initial band image clipped to the sample location. (c) Combination of the bands: all the classified watershed boundaries of each band are converted to a line feature that can be manually edited. A detailed description of the tools can be found at http://desktop.arcgis.com/en/arcmap/10.3/main/tools/a-quick-tour-of-geoprocessing-tool-references.htm (last access: 14 November 2018).
3.2.2 Thiessen polygons
The second method involved the semi-automated delineation of polygons based on the creation of Thiessen (or Voronoi) polygons. This approach was used in Ulrich et al. (2014) to estimate the volume of a relict IW network in baydzherakhs landforms, where IWs had melted and only raised polygon centres remained, and was compared at other sites with manual delineation. Thiessen polygons are defined mathematically as the perpendicular bisectors of the lines between all input points. Then, the area inside one Thiessen polygon is closer to its associated input point than any other input point (Aurenhammer, 1991). The tool “create Thiessen polygon” was used to create Thiessen polygons from manually chosen centre points of IW polygons, hereafter called the “approximated” centre points. Following this creation, polygons near the outer boundaries of the sample squares were necessarily defined by having these boundaries as vertices. To avoid those edge effects, we created approximated centre points for polygons up to 30 m away from the sample locations before the creation of the Thiessen polygons. All resulting polygons that were not completely inside the sample locations were then deleted, and the remaining polygons were converted to a line dataset. Approximated centre points were created without the visible manual delineation lines to test the ability of the analyst to identify IW polygon centres.
3.2.3 Watershed segmentation
Delineating IW polygons has many similarities with detecting grain boundaries in thin sections for petrographic analysis because both involve detecting edges. In a study by Barraud (2006) grain boundaries are detected using a watershed segmentation algorithm in image segmentation software. The basic principles of this segmentation process were reproduced in this study with the Spatial Analyst Hydrology toolbox from ArcGIS.
Figure 6Example of a 3-D subsurface model. The TIN surface represents the entire EL3 sample location (250 m × 250 m). The elevation of zero corresponds to the base of the active layer. This model is based on the methodology developed in Ulrich et al. (2014).
This third delineation method is based on the interpretation of the value of each pixel as a height function, i.e. as if it was a digital elevation model (DEM). If the IW troughs have higher pixel values (brighter) than the polygon centres they will act as “mountains” and polygon centres as “valleys”. If this topography was to be flooded, the water would accumulate in each polygon centre valley delineated by the trough boundary mountains. In the WorldView images, the polygon centres have higher value pixels and the troughs lower value; therefore we inverted the pixel values before using the hydrology tools.
Watersheds were first obtained using the flow direction tool to calculate the flow direction of each pixel in the image, and then the basin tool was used to delineate the smallest possible watersheds where water could accumulate. We converted the multiple steps of this method into a semi-automated process by implementing them in ArcGIS Model Builder (Fig. 5), which increased time efficiency and required few interventions from the analyst. Filtering and smoothing of the image is required before the flow direction and basin tool outputs can provide watershed outlines that are representative of the IW polygons (Fig. 5a). To enhance troughs pattern, we used the focal statistic maximum tool followed by the focal statistic mean tool to reduce noise in the polygon centres. The later had to be performed multiple times to generate watersheds that were not oversegmented, i.e. too many watersheds representing one actual IW polygon. Watersheds were created after each focal mean iteration and evaluated against the manually digitized polygons lines. The iterations were stopped when some watersheds started to merge and to include two or more manually digitized polygons. At this point, approximately one to eight watersheds represented each actual IW polygon.
The hydrology tools were used on all of the available bands in WorldView imagery at each sample location (Table 1) before being combined into a single-line dataset per sample location. The same number of focal mean iterations were performed on all the bands and they were then combined to create the final IW polygon delineation lines. For each band, watershed outlines were extracted as lines and were converted into a raster format (Fig. 5b). The three raster datasets of each band were summed, and pixels which were classified as boundaries (IW troughs) in two or more of the bands were kept (Fig. 5c). For these steps, the pixel size was increased from 0.5 to 1 m to get a better chance of the watershed outlines overlapping. To convert the boundary pixel raster into a clean-line dataset, the output raster was thinned, watershed boundary pixels were converted to polylines, and the extend line tool was used with a maximum extension distance of 5 m (10 original pixels) to obtain a maximum number of closed polygons.
The clean trough centre-line datasets representing IW polygon outlines were visually assessed and edited to improve their accuracy. With the initial WorldView image visible, lines oversegmenting the IW polygons were deleted and lines were added where polygons when some boundaries were not closed. All lines outside the sample locations were also deleted. Manually delineated polygons were included in these edited datasets to be consistent with the initial choice of the analyst. The remaining dangling lines were erased using the trim line tool. Finally, the simplify line tool with the point remove option and a tolerance of 1.5 m (3 pixels) was used to smooth any lines which had sharp edges due to the contouring of pixels form the conversion of raster to polyline format.
3.3 Three-dimension subsurface model for ice wedge and sediment volume calculation
Similarly to Ulrich et al. (2014), field data of mean IW depth and width were used to estimate IW volume from Couture and Pollard (1998). A buffer was created around the delineated lines of half the mean width of an IW (0.73 m) and then the buffer extent was cut out of the polygons with the erase tool. The resulting polygons therefore did not include the IW troughs. All polygons, even the edge ones, are then considered to be surrounded by half an IW.
Figure 7Delineation method for comparison metrics. (a) Difference in mean area and perimeter of delineated polygons for the semi-automated methods with the manual delineation method. Data from Table 2. (b) Mean distance between the centres of polygons created by the semi-automated delineation methods compared to the manual delineation method. The near tool was used with a search radius of 10 m (20 pixels). Two sets of centre points were considered for the Thiessen polygons method: the approximated centre point to create the polygons initially and the resulting centre points of the created Thiessen polygons.
Volume calculations were performed in a 3-D subsurface model by creating a Triangulated Irregular Network (TIN) dataset, which is a network of mass points representing a surface terrain. Assuming that the IWs are inverted isosceles triangles, the elevation of −3.23 m (mean IW depth) was assigned to the trough centre lines and an elevation of 0 m was assigned to the polygons corresponding to the base of the active layer (see legend for elevation in Fig. 6). An elevation of −3.23 m was also assigned to a dissolved polygon extent. The TIN was created from those three datasets with the Delaunay triangulation constrained for each segment (lines and polygon vertices) to be added as an edge in the TIN.
The IW volume and sediment volume were calculated using the surface volume tool. The IW volume was calculated from a plane above the TIN at 0 m. In order to compare the results of this study with the values found in Couture and Pollard (1998), the thickness of frozen soil considered in their study (5.9 m) was used to calculate sediment volume. It was calculated from a plane at this depth below the TIN (−5.9 m). The percent volume of IWs at each sample location was calculated by dividing the IW volume by the total frozen material (sediment and IWs) volume.
3.4 Fosheim Peninsula ice wedge volume estimation
We estimated the cumulative coverage area of IW polygons for the Fosheim Peninsula based on the surficial geology map from Bell (1992), which differentiates between surficial sediments of marine, fluvial, glaciofluvial and glacial origin and indicates weathered bedrock and residuum areas. The map was digitized with reference to the shoreline and contour datasets of CanVec Series dataset from Natural Resource Canada (2016). As it is rare for IW polygons to occur in bedrock (French, 2018), it was assumed that they can be located in all the unconsolidated surficial sediment classes (marine, fluvial, glaciofluvial and glacial). The potential area occupied by IWs was determined by subtracting the area of the large lakes and areas identified as bedrock from the total area of the peninsula. The 150 m CanVec contour was isolated, as this provides a proxy for the Holocene marine limit on the Fosheim Peninsula because IWs are ubiquitous below this elevation (Bell, 1996; Couture and Pollard, 1998). We assumed that the mean of the IW percent volume of our sample locations was representative of the geomorphological settings where IWs are present on Fosheim Peninsula and used it to calculate the equivalent IW ice volume over the entire peninsula.
4 Results
4.1 Delineation of polygons
To compare the accuracy of the two semi-automated delineation methods against the manual method, the mean perimeter and area of polygons (Fig. 7a) as well as the total length of delineated troughs (Table 2) were calculated. It is expected that a more efficient method would have a mean perimeter and mean centre point distance close to the manual delineation method values. While we recognize the importance of assessing variance in polygon size at the sample locations, it could not be treated quantitatively herein, but can be assessed qualitatively by examining Fig. 4.
Table 2Summary of the delineation results for each method at each sample location and corresponding proportion of ice wedge volume.
All the delineation methods provided polygon outlines for the four sample locations, although the accuracy of the outlines is variable. We assume that the manual method provides the best outlines because troughs can be detected by the analyst regardless of their width and intersection with other troughs. Manually delineating the troughs was most difficult at EL3 where polygons were small and contrast was very low, especially on the left side of the sample location (Fig. 4). The presence of IW troughs that do not form closed polygons in AH1 was also difficult to detect as the troughs themselves were thin. When compared with the manual trough centre lines, the Thiessen polygons do not agree very well as they simplify the actual polygon shapes. Some edge effects remain on the Thiessen polygon boundaries, mostly caused by the proximity of polygons with large area difference (Fig. 4). The number of polygons were slightly increased at EL2 (+3.57 %) and EL3 (+0.04 %) and equal at EL1 and AH1 for the Thiessen polygons method (Table 2). The edited trough centre lines from the watershed segmentation method are generally in good agreement with the manually digitized trough centre lines (Fig. 4). The watershed segmentation technique overestimates the number of polygons, by as much as 5 times in the case of AH1 but around two times for the three Ellesmere Island sample locations (Table 2). This result is anticipated, as watershed oversegmentation is preferred to undersegmentation before editing, because outlines of smaller polygons would disappear when watersheds start to merge.
The mean distance between the centre points of the polygons for the different methods was calculated as an indicator of the similarity between the delineated polygons, particularly for the Thiessen approximated centre points versus the manual centre points (Fig. 7b). At all sample locations, this mean distance is <4 m, equivalent to <8 pixels. The maximum distance encountered was 9.5 m for a polygon on the edge of EL2 for the Thiessen polygons method. Two patterns in the mean distance between the centre points seem to emerge: (1) the grouping of manual and Thiessen-approximated methods with manual and watershed-segmentation methods and of Thiessen–Thiessen-approximated methods with manual–Thiessen methods due to their value closeness, and (2) the fact that the last group has higher values. The only exception to this observation is the manual–watershed-segmentation mean distance being the highest value for EL3.
The majority of mean perimeter and polygon areas of the Thiessen and watershed segmentation methods have a difference of <5 % with the manual method at a given sample location (Fig. 7a). Exceptions occur at larger polygon sample locations with the Thiessen polygons method, where polygon area is overestimated by 11.6 % and 15.5 % for EL2 and AH1, respectively (Fig. 6a). Another exception occurs for the mean perimeter of polygons at AH1, which is underestimated by >10 % for each method (Fig. 7a). The Thiessen method overestimates the mean polygon area at each sample location, with proportionally greater overestimations for sample locations with larger polygons. The watershed segmentation area estimation is more precise, being <1 % different than the mean area for the manually delineated polygons at all sample locations.
4.2 Ice wedge volume
An example of the TIN output for the IW volume calculation can be found in Fig. 6. The percent volume of IWs in the top 5.9 m of frozen material ranges from 1.41 % for the lowest polygon density AH1 to 5.88 % for the highest polygon density sample location EL3, all delineation methods included (Table 2). IW volumes for the watershed segmentation and Thiessen methods are slightly lower or equal to the manual method estimate. The largest difference occurs at AH1 where there is a difference of −0.31 in the percent IW volume, which is equivalent at this sample location to 7.23 m3 of IW ice. At each sample location, the IW volume estimate from the Thiessen polygons method is the lowest, except in the case of EL1, where it is equal to the watershed segmentation method but still lower than the manual method estimates (Table 2).
Based on digitization of the Bell (1992) map, approximately half of the Fosheim Peninsula surface area contains IWs, corresponding to an area of ∼3000 km2 (Fig. 3). Considering only the top 5.9 m of permafrost, this is equivalent to a volume of frozen material of 17.7 km3. The total IW ice volume is 6.7×108 m3 when assuming an IW volume of 3.81 % by averaging the results from the manual delineation method at the four sample locations in Table 2. Slightly lower estimates are obtained when averaging the IW volume of the two other semi-automated methods: 6.4×108 m3 with 3.61 % for the Thiessen method and 6.6×108 m3 with 3.74 % for the watershed segmentation method.
5 Discussion
5.1 Semi-automated delineation methods
The use of the Thiessen method on four sample locations with various polygon morphologies reveals its strength for volume estimation but not for trough identification. The main problem with this method is that curved troughs could not be delineated properly because the create Thiessen polygon tool can only output straight lines. This is reflected by the overestimation of polygon area (Fig. 7a). It is anticipated that better results would be obtained for hexagonal or rectangular polygonal patterns, rather than the orthogonal polygons tested in this study. This method is the least time consuming, but overall underestimates IW volume by differing between 0.1 % and 0.3 % from the percent IW volumes from manual delineation (Table 2). The Thiessen method was judged by Ulrich et al. (2014) to be “visually similar” to manual digitization. However, their study areas had a majority of rectilinear polygons, for which the approximation of centre point is easier than for more complex shapes found in the sample locations of this study.
The watershed segmentation method developed for this study with ArcGIS Hydrology tools was the most accurate in terms of locating trough centre lines and IW volume for every sample location. The poor agreement along the margins of sample location EL3 can be attributed to the lack of contrast in this part of the image (Fig. 4) and explains why the mean distance between the automatically vs. manually derived centre points is the highest at this sample location (Fig. 7a). With minimal editing, the results of IW volume calculations using the watershed segmentation were equal to the manual method values for two sample locations (Table 2). The method accuracy can be improved by editing the sharp angles at boundaries that are not completely smoothed with the simplify line tool. These are most prevalent at sample location AH1 where the XY tolerance for simplifying the lines (1.5 m) was the smallest compared to the length of the polygon vertices. The accuracy of the trough centre-line positions was reduced when increasing the pixel resolution to 1 m in the polyline-to-raster conversion but does not make a large difference in IW volume as the troughs are overwhelmingly larger than 1 m (2 pixels) at every sample location.
The effect of larger polygon size is visible in the results of sample location AH1 with the greatest differences in mean perimeter and area of polygons compared to manual delineation, and this was independent of the method used. This can also be attributed to the thin troughs at this sample location and to the difficulty of differentiating what seems like dry run-off channels from IW troughs (Fig. 4).
This study focussed on the development of two methodologies to delineate IW polygons based on only four sample locations. However, we are confident that both methods are applicable to delineating polygons for much larger areas. Thiessen polygons can readily be generated for larger areas and are manually edited along the boundaries to reduce edge effects. The watershed segmentation method can also be used for larger areas by choosing a number of focal mean iterations that will preserve the boundary details of the smallest polygons present. Even if oversegmented, this method preserves the largest polygon outlines corresponding to the darker zones of the images, interpreted as higher elevation when creating watersheds. The applicability of this methodology to terrain with complex contrast patterns in satellite imagery has not yet been tested. We suspect that presence of water bodies within IW polygons or in troughs, or of prominent vegetation at lower Arctic latitudes, would impact the proper detection of IW troughs. Hence, we suggest that it could be applied to high-resolution DEMs (≤0.5 m pixels with centimetre scale horizontal and vertical accuracy) instead of high-resolution satellite images. Then, the watershed segmentation method could be applied with more confidence on a wider range of Arctic terrain. The need for higher-resolution DEMs has been identified for the study of permafrost degradation in general, specifically to monitor surface subsidence and thermokarst processes (Jorgenson and Grosse, 2016). Promising remote sensing methods to detect topographic and subsurface change and to map ground-ice distribution include airborne light detection and ranging (lidar), interferometric radar (InSAR), airborne ground-penetrating radar and structure-from-motion technology (Gogineni et al., 2014; Jorgenson and Grosse, 2016). High resolution terrain models can be derived from these methods that are needed to monitor surface subsidence on a smaller scale and to estimate ground-ice distribution over large areas (Gogineni et al., 2014). These data could be acquired from unmanned aerial vehicles (UAVs) or other airborne platforms and would require fieldwork. This highlights the strength of our relatively simple methodology, relying on high-resolution satellite images and minimal field data, which can be applied on remote locations without the need of extensive fieldwork to create DEMs.
There are other semi-automated delineation methods that could be used to improve the delineation process of IW polygons on satellite images. One potential technique is the method described by Li et al. (2008), which was used to delineate grain boundaries in thin sections. In this method, edge detection is based on the abrupt change in pixel values, representing brightness, at the boundary between two grains. However, this method was deemed unsuitable for delineation of IW polygons since it requires considerable manual editing, and image classification algorithms could also not be applied due to the lack of contrast in some of our satellite images. Another approach would be to build on the methodology of Skurikhin et al. (2013), who classified Arctic tundra drainage network components including IW troughs with image segmentation and shape-based classification.
5.2 Ice wedge volume calculations
IW volume at the four sample locations of this study, derived from manual delineation and both of the semi-automated methods, is similar to the results of Couture and Pollard (1998) on the Fosheim Peninsula. Their study concluded that, for low-density polygonal terrain, IW ice comprised 1.8 % of the top 5.9 m of permafrost and high-density polygonal terrain 3.5 %. Their low density sample is very close to sample location AH1 (1.73 %), which confirms that the sample location on Axel Heiberg Island is representative of parts of the Fosheim Peninsula. Even if our values from EL2 are very similar to the high-density values in their study, EL1 and EL3 have a much higher IW ice volume percentage, redefining high-density polygonal terrain on the Fosheim Peninsula. This may be due to the choice of sample locations of 250 × 250 m for estimating IW volume. This surface area was found by Ulrich et al. (2014) to provide a representative scale where polygon diameter showed only small variations. Although the polygon density and shapes of sites in Siberia used by Ulrich et al. (2014) may not be comparable to the sites tested here, this size was chosen in our study as a manageable area for manual delineation and development of methodologies to delineate polygons. In future studies, the effect of the scale of the extent considered on the IW perimeter, area and IW volume should be assessed to refine the IW volume estimation.
Multiple necessary assumptions are made when calculating IW volume with TINs and here we consider their potential effect in estimating IW volume on large scales. The most critical is probably the assumption that IW width and depth do not vary significantly between polygonal terrains. A lack of subsurface data meant that using mean IW width and depth was the best approximation we could use for our calculations. The small variability in estimations of IW volume for the entire peninsula from the three delineation methods suggests that more error might be introduced in our estimate from the assumption of a fixed IW geometry than by the technique used to derive IW length in a specific area. Differences in IW width at our sample locations are obvious in Fig. 4, where multiple troughs are greater than the 1.46 m average used (e.g. EL2) and likely relate to sub-regional variation in geological history. Using these visible differences in surface expression as information on IW width would require another assumption that cannot be validated with the limited field data available: trough width is approximately IW width. Multiple sample locations in each surficial geology class presented by Bell (1992) would have permitted the calculation of IW volume on a sub-regional basis and the definition of IW parameters such as apparent width and polygon density for each surficial geology class. A specific example in which assuming a fixed IW geometry is not valid on the Fosheim Peninsula is in the surficial geology unit of the thin veneer of glacial sediments defined by Bell (1992). The thickness of this geological unit over bedrock is defined as 2 m, which is less than the 3.23 m mean IW depth used here. It is important to mention that the IW width and depth used in our calculations are minimal estimates because only exposed IWs were measured by Couture and Pollard (1998). As with this earlier study, we used the depth of 5.9 m below the active layer to calculate the IW volume because no IWs were observed below this depth.
The fixed geometry of IWs, assumed to be isosceles triangles, also impacts our IW volume calculation. It is the general shape of epigenetic IWs recognized by Mackay (1990) and the shape used in previous IW volume estimates (e.g. Pollard and French, 1980; Couture and Pollard, 1998; Bode et al., 2008; Ulrich et al., 2014). Even though IWs can be irregularly shaped in cross section, an inverted isosceles triangle with its base corresponding to the IW width is the best approximation of the shape for a calculation of this nature. Based on nearly 20 years of fieldwork on the Fosheim Peninsula, we have found syngenetic IWs to be relatively uncommon and limited to areas of active sedimentation like glacial forelands (floodplains), alluvial fans and deltas. Thus, we assumed that most IWs in the Fosheim Peninsula are epigenetic and that our sample locations are representative of the ESL and should therefore not greatly affect our IW volume calculation.
IWs may contain gas inclusions, small amounts of sediment as disseminated grains and discontinuous veins of silt and fine sand (French, 2018). The inclusion of this factor in our volume calculation is not realistic for this first-approximation study so it was assumed that all IWs were composed of pure ice. This has also been assumed by Ulrich et al. (2014) and most previous studies (e.g. Pollard and French, 1980; Couture and Pollard, 1998; Bode et al., 2008). Delineating IW polygons on satellite images implicitly assumes that all IWs have a visible surface expression (i.e. a trough structure). Field observations in the ESL show that this is not always the case because many of the factors leading to trough development (e.g. vegetation coverage and surface hydrology) do not always apply in very cold and relatively dry polar desert environments. Commonly, no trough structure is visible when the top of an IW is in equilibrium with the thin active layer depth (Fig. 1a) (Pollard et al., 2015). This assumption would lead to an underestimation of IW volumes on the Fosheim Peninsula. The opposite is also true as there might not be an IW below every crack and trough, but our estimates can only be based on what is detectable in the satellite imagery.
Given the potential errors discussed above associated with assumptions of width, depth and surface expression of IWs on the Fosheim Peninsula, we refer to our estimate of total IW volume as a first approximation and are confident it is a reasonable minimum estimate for this regional scale.
5.3 Impacts of melting ice wedges
IWs are the most widespread ground-ice phenomenon in areas of continuous permafrost. By virtue of their formative processes, the top of the active IW in many cases corresponds with the base of the active layer. Since they are in a quasi-stable relationship with maximum seasonal thaw depth, then any increase in the active layer depth will result in subsidence of the ground surface over the top of the IW. Under stable permafrost conditions, networks of shallow IW troughs will interact with snow distribution, surface vegetation and surface hydrology, in some cases contributing locally to additional deepening and surface ponding (Jorgenson et al., 2006, 2015). Over time, however, warm summers may produce small amounts of thaw at the top of the wedge leading to deepening of the trough. Thus, the localized degradation of IWs may be part of the normal evolution of permafrost landscapes. However, the widespread deepening of the active layer under projected Arctic climate change scenarios is expected to lead to dramatic regional changes in landscapes marked by increased local topography and changes in surface hydrology (Liljedahl et al., 2016). As IWs melt out, the sides of the IW polygons collapse into the open trough, producing a highly dissected landscape characterized by mounds at the former polygon centres and networks of deep channels and shallow ponds along the former IW troughs (Fig. 1; Couture and Pollard, 2007).
There is evidence that our local observations relate directly to widespread permafrost thaw and development of thermokarst terrain on the Fosheim Peninsula. An increase in thermokarst processes and retrogressive thaw slump retreat in the ESL over the past 25 years has been documented by Pollard et al. (2015). Unlike IW degradation associated mainly with thermal erosion by running water (e.g. Fortier et al., 2007), the instability of IWs in this region is related initially to thaw-induced surface collapse, but undoubtedly running water will play a role at some point. The active melt out seen in Fig. 1b gives an indication of how rapidly these changes may occur once the system becomes unstable. There is not only subsidence in the IW troughs, but also widespread backwasting of exposed IWs, similar to the headwall retreat in a retrogressive thaw slump (Fig. 1a). There is also evidence of shallow active layer detachments along IW troughs in the ESL (Fig. 1b). In some cases, on the Fosheim Peninsula we have observed rapid melt out of IWs contributing to the formation of much larger retrogressive thaw slumps in areas where massive ground ice is present (Fig. 3a, d). The net result will be a period of landscape instability amplified by feedbacks associated with run-off (surface hydrology), snow accumulation, changing vegetation, thermokarst from massive ground ice, mass wasting and microclimate. In principle, the new landscape will develop a deeper active layer consistent with the summer thaw conditions, though it may take a long time for the new active layer depth to stabilize, prolonging the period of thermokarst activity and subsidence. The new landscape will be quite different, and depending on the topographic and geologic setting not unlike badlands. For other areas, the new landscape will reflect a geomorphic system affected not only by IW degradation but other changes to the permafrost system and surface hydrology.
6 Conclusion
IWs are the most common form of ground ice in areas underlain by continuous permafrost. The occurrence of IWs increases the biophysical complexity of permafrost landscapes. Their widespread nature will contribute to significant permafrost instability once thermokarsts processes are initiated. IWs exist in a quasi-stable equilibrium with seasonal thaw as defined by the depth of the active layer. Accordingly, a climate-change-driven increase in active layer depths will likely produce widespread instability of landscapes associated with melting IWs. To better understand the potential impact of widespread destabilization of polygonal terrains there is a need to assess the volume and extent of IW ice. In the absence of detailed field observations, the analysis of IW polygons using high-resolution satellite imagery and GIS-based tools is the most logical solution. Based on our analysis of IW polygons for the Fosheim Peninsula, we present three main conclusions. Firstly, compared to manual delineation, two GIS-based semi-automated techniques – the Thiessen polygons methodology presented in Ulrich et al. (2014) and the watershed segmentation methodology, newly developed in this study – permit an acceptable approximation of IW volume in remote Arctic locations. Implementation of these methods in a coded process accelerated the polygon delineation and demonstrates their potential to be applied to much larger areas in an efficient manner. Time constraint and required level of precision in the estimation of IW volume are two criteria to be considered when choosing one of these methods for future application in other sample locations. Secondly, IWs potentially cover an area of ∼3000 km2 on the Fosheim Peninsula where 3.81 % of the upper 5.9 m of permafrost could be IW ice. This first approximation is based on limited field validation data and sample locations that constrain it to the Fosheim Peninsula; however we are confident that our results are applicable to the entire ESL. Thirdly, further study in the ESL should focus on estimating IW volume for other sample locations using one of the semi-automated methods to increase the statistical significance of the results. Fieldwork in the ESL region could improve the IW volume estimates by linking surficial geology and physiographic units with IW characteristics. Associated with estimations of other ground ice type and carbon content, IW volume estimates will help to assess the vulnerability of High Arctic permafrost to climate change.
Data availability
Data availability.
The data used are listed in the references and tables.
Author contributions
Author contributions.
The research questions, objectives and study sites were identified collaboratively by both authors. CB developed the original elements of the methodology and carried out the analysis. WP supervised the project, arranged and coordinated field logistics, and provided funding. CB and WP wrote the paper.
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements
Acknowledgements.
The authors acknowledge the support of ESRI Canada by awarding financial and technical support to Claire Bernard-Grand'Maison to complete this project through the McGill University ESRI Canada Award 2016. This research was funded by the Canadian National Science and Engineering Research Council Discovery, Accelerator and Northern Supplement grants awarded to Wayne Pollard and a Polar Knowledge Canada Northern Scientific Training programme grant awarded to Claire Bernard-Grand'Maison. The writers would like to acknowledge the logistical support provided by the Polar Continental Shelf Program, Natural Resources Canada. The authors wish to acknowledge discussions with Michael Templeton.
Edited by: Peter Morse
Reviewed by: Mikhail Kanevskiy and one anonymous referee
References
AMAP: Snow, Water, Ice and Permafrost in the Arctic (SWIPA) 2017, Arctic Monitoring and Assessment Programme (AMAP), Oslo, Norway, xiv + 269 pp., 2017.
Aurenhammer, F.: Voronoi diagrams – a survey of a fundamental geometric data structure, ACM Computing Surveys (CSUR), 23, 345–405, https://doi.org/10.1145/116873.116880, 1991.
Barraud, J.: The use of watershed segmentation and GIS software for textural analysis of thin sections, J. Volcanol. Geoth. Res., 154, 17–33, https://doi.org/10.1016/j.jvolgeores.2005.09.017, 2006.
Bell, T.: Glacial and sea level history of western Fosheim Peninsula, Ellesmere Island, Arctic Canada, PhD thesis, University of Alberta, Edmonton, Alberta, 1992.
Bell, T.: The last glaciation and sea level history of Fosheim Peninsula, Ellesmere Island, Canadian High Arctic, Can. J. Earth Sci., 33, 1075–1086, https://doi.org/10.1139/e96-082, 1996.
Black, R. F.: Periglacial features indicative of permafrost: ice and soil wedges, Quaternary Res., 6, 3–26, https://doi.org/10.1016/0033-5894(76)90037-5, 1976.
Bode, J. A., Moorman, B. J., Stevens, C. W., and Solomon, S. M.: Estimation of ice wedge volume in the Big Lake area, Mackenzie Delta, NWT, Canada, in: Proceedings of the 9th International Conference on Permafrost, Fairbanks, Alaska, 28 June–3 July 2008, 131–136, 2008.
Brummell, M. E., Farrell, R. E., Hardy, S. P., and Siciliano, S. D.: Greenhouse gas production and consumption in High Arctic deserts, Soil Biol. Biochem., 68, 158–165, https://doi.org/10.1016/j.soilbio.2013.09.034, 2014.
Couture, N. J. and Pollard, W. H.: An assessment of ground ice volume near Eureka, Northwest Territories, in: Proceedings of the 7th International Conference on Permafrost, Yellowknife, NWT, Canada, 23–27 June 1998, 195–200, 1998.
Couture, N. J. and Pollard, W. H.: Modelling geomorphic response to climatic change, Climatic Change, 85, 407–431, https://doi.org/10.1007/s10584-007-9309-5, 2007.
De Pascale, G. P., Pollard, W. H., and Williams, K. K.: Geophysical mapping of ground ice using a combination of capacitive coupled resistivity and ground-penetrating radar, Northwest Territories, Canada, J. Geophys. Res., 113, F02S90, https://doi.org/10.1029/2006JF000585, 2008.
Fortier, D., Allard, M., and Shur, Y.: Observation of rapid drainage system development by thermal erosion of ice wedges on Bylot Island, Canadian Arctic Archipelago, Permafrost Periglac., 18, 229–243, https://doi.org/10.1002/ppp.595, 2007.
French, H. M.: The periglacial environment (4th 3rd edn.), John Wiley and Sons, Chichester, England, 2018.
Gilbert, G. L., Kanevskiy, M., and Murton, J. B.: Recent advances (2008–2015) in the study of ground ice and cryostratigraphy, Permafrost Periglac., 27, 377–389, https://doi.org/10.1002/ppp.1912, 2016.
Godin, E. and Fortier, D.: Fine scale spatio-temporal monitoring of multiple thermo-erosion gullies development on Bylot Island, Eastern Canadian Archipelago, in: Proceedings of the 10th International Conference on Permafrost, Salekhard, Russia, 25–29 June 2012, 125–130, 2012.
Gogineni, P., Romanovsky, V. E., Cherry, J., Duguay, C., Goetz, S., Jorgenson M. T., and Moghaddam, M.: Opportunities to use remote sensing in understanding permafrost and related ecological characteristics: National Research Council workshop, National Academies Press, Washington, DC, 2014.
IPCC: Climate Change 2013: The Physical Science Basis, Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp., 2013.
Jorgenson, M. T. and Grosse, G.: Remote sensing of landscape change in permafrost regions, Permafrost Periglac., 27, 324–338, https://doi.org/10.1002/ppp.1914, 2016.
Jorgenson, M. T., Shur, Y. L., and Pullman, E. R.: Abrupt increase in permafrost degradation in Arctic Alaska, Geophys. Res. Lett., 33, L02503, https://doi.org/10.1029/2005GL024960, 2006.
Jorgenson, M. T., Kanevskiy, M., Shur, Y., Moskalenko, N., Brown, D. R. N., Wickland, K., Striegl, R., and Koch, J.: Role of ground ice dynamics and ecological feedbacks in recent ice wedge degradation and stabilization, J. Geophys. Res.-Earth, 120, 2280–2297, https://doi.org/10.1002/2015JF003602, 2015.
Kanevskiy, M., Shur, Y., Jorgenson, M. T., Ping, C. L., Michaelson, G. J., Fortier, D., Stephani, E., Dillon, M., and Tumskoy, V.: Ground ice in the upper permafrost of the Beaufort Sea coast of Alaska, Cold Reg. Sci. Technol., 85, 56–70, https://doi.org/10.1016/j.coldregions.2012.08.002, 2013.
Kokelj, S. V. and Jorgenson, M. T.: Advances in thermokarst research, Permafrost Periglac., 24, 108–119, https://doi.org/10.1002/ppp.1779, 2013.
Kuhry, P., Grosse, G., Harden, J. W., Hugelius, G., Koven, C. D., Ping, C. L., Schirrmeister, L., and Tarnocai, C.: Characterisation of the permafrost carbon pool, Permafrost Periglac., 24, 146–155, https://doi.org/10.1002/ppp.1782, 2013.
Lachenbruch, A. H.: Mechanics of thermal contraction cracks and ice-wedge polygons in permafrost, Geol. S. Am. S., 70, 1–66, https://doi.org/10.1130/SPE70-p1, 1962.
Leffingwell, E. de K.: Ground-ice wedges the dominant form of ground-ice on the north coast of Alaska, J. Geol., 23, 635–654, https://doi.org/10.1086/622281, 1915.
Léger, E., Dafflon, B., Soom, F., Peterson, J., Ulrich, C., and Hubbard, S.: Quantification of arctic soil and permafrost properties using ground-penetrating radar and electrical resistivity tomography datasets, IEEE J. Sel. Top. Appl., 10, 4348–4359, https://doi.org/10.1109/JSTARS.2017.2694447, 2017.
Li, Y., Onasch, C. M., and Guo, Y.: GIS-based detection of grain boundaries, J. Struct. Geol., 30, 431–443, https://doi.org/10.1016/j.jsg.2007.12.007, 2008.
Liljedahl, A. K., Boike, J., Daanen, R. P., Fedorov, A. N., Frost, G. V., Grosse, G., Hinzman, L. D., Iijma, Y., Jorgenson, J. C., Matveyeva, N., and Necsoiu, M.: Pan-Arctic ice-wedge degradation in warming permafrost and its influence on tundra hydrology, Nat. Geosci., 9, 312–318, https://doi.org/10.1038/ngeo2674, 2016.
Mackay, J. R.: Some observations on the growth and deformation of epigenetic, syngenetic and anti-syngenetic ice wedges, Permafrost Periglac., 1, 15–29, https://doi.org/10.1002/ppp.3430010104, 1990.
Mackay, J. R.: Thermally induced movements in ice-wedge polygons, western arctic coast: a long-term study, Géogr. Phys. Quatern., 54, 41–68, https://doi.org/10.7202/004846ar, 2000.
Morse, P. D. and Burn, C. R.: Field observations of syngenetic ice wedge polygons, outer Mackenzie Delta, western Arctic coast, Canada, J. Geophys. Res.-Earth., 118, 1320–1332, https://doi.org/10.1002/jgrf.20086, 2013.
Munroe, J. S., Doolittle, J. A., Kanevskiy, M. Z., Hinkel, K. M., Nelson, F. E., Jones, B. M., Shur, Y., and Kimble, J. M.: Application of ground-penetrating radar imagery for three-dimensional visualisation of near-surface structures in ice-rich permafrost, Barrow, Alaska, Permafrost Periglac., 18, 309–321, https://doi.org/10.1002/ppp.594, 2007.
Pollard, W. H. and French, H. M.: A first approximation of the volume of ground ice, Richards Island, Pleistocene Mackenzie Delta, Northwest Territories, Canada, Can. Geotech. J., 17, 509–516, https://doi.org/10.1139/t80-059, 1980.
Pollard, W. H., Ward, M. K., and Becker, M. S.: The Eureka Sound Lowlands: an ice-rich permafrost landscape in transition, in: Proceedings of GeoQuebec 2015, 68th Canadian Geothechnical Conference and 7th Canadian Permafrost Conference, Quebec, QC, 21–23 September 2015.
Schuur, E. A. G., McGuire, A. D., Schädel, C., Grosse, G., Harden, J. W., Hayes, D. J., Hugelius, G., Koven, C. D., Kuhry, P., Lawrence, D. M., and Natali, S. M.: Climate change and the permafrost carbon feedback, Nature, 520, 171–179, https://doi.org/10.1038/nature14338, 2015.
Skurikhin, A. N., Gangodagamage, C., Rowland, J. C., and Wilson, C. J.: Arctic tundra ice-wedge landscape characterization by active contours without edges and structural analysis using high-resolution satellite imagery, Remote Sens. Lett., 4, 1077–1086, https://doi.org/10.1080/2150704X.2013.840404, 2013.
Smith, S. L., Throop, J., and Lewkowicz, A. G.: Recent changes in climate and permafrost temperatures at forested and polar desert sites in northern Canada, Can. J. Earth Sci., 49, 914–924, https://doi.org/10.1139/e2012-019, 2012.
Strauss, J., Schirrmeister, L., Grosse, G., Wetterich, S., Ulrich, M., Herzschuh, U., and Hubberten, H. W.: The deep permafrost carbon pool of the Yedoma region in Siberia and Alaska, Geophys. Res. Lett., 40, 6165–6170, https://doi.org/10.1002/2013GL058088, 2013.
Ulrich, M., Grosse, G., Strauss, J., and Schirrmeister, L.: Quantifying wedge-ice volumes in Yedoma and thermokarst basin deposits, Permafrost Periglac., 25, 151–161, https://doi.org/10.1002/ppp.1810, 2014.
van Everdingen, R. (Eds.): Multi-language glossary of permafrost and related ground-ice terms, National Snow and Ice Data Centre, Boulder, CO, 1998.
Walker, D. A., Gould, W. A., Maier, H. A., and Raynolds, M. K.: The Circumpolar Arctic Vegetation Map: AVHRR-derived base maps, environmental controls, and integrated mapping procedures, Int. J. Remote Sens., 23, 4551–4570, https://doi.org/10.1080/01431160110113854, 2002. | 2018-12-09 19:45:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6008991003036499, "perplexity": 5144.552893224034}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823009.19/warc/CC-MAIN-20181209185547-20181209211547-00353.warc.gz"} |
https://www.physicsforums.com/threads/what-o-what-shall-i-do.128608/ | # What o' what shall I do ?
1. Aug 10, 2006
Hi I'll be freshman starting on the 21st of August, at our local univeristy. MSSU Missouri. I have not declared a major yet. IM interested in both biology and physics. I would rather major in physics, but people tell me that it is hard to find employment after your bachelors or further education compared to biology. So I was thinking about majoring in biology and take my chances , but I know I'll regret it since I love physics so much more. Please any help ?
OK enough of that , ofcourse if I go into physics , I have the first 4 years of school to learn about it and of course Im sure I will be presented various employment opportunities there will be. I still look forward to getting a higher education after receiving my Bachelors of Science(Physics). I figure by then I would at least need good employment to help my family and I to survive.
I know I can only make up my own mind , but talk me into majoring in physics(lol)
peace
Damien
2. Aug 10, 2006
### d_leet
Well if physics is what you really love then do physics, especially since you're saying that you'll regret it if you do biology since you love physics so much more.
3. Aug 10, 2006
### 0rthodontist
At least in terms of immediate employment, biology majors tend to make less than physics majors.
4. Aug 10, 2006
### franznietzsche
Median in 2004 for biologists was $68,950. Median for physicists was$87,450. Of course, there were 77,000 biologists employed in the US in and 16,000 physicists.
5. Aug 10, 2006
wooohooo
Physics!
Thanks!
I know what I need to do, just asking for someone to shove me over the bridge! I think I should ignore the numbers really because then I might make a mistake and not do what I love
Peace
Damien
6. Aug 10, 2006
### 0rthodontist
Last edited by a moderator: May 2, 2017
7. Aug 10, 2006
### interested_learner
Physicists work as engineers
Many phyiscists work as engineers or programmers. There is plenty of work, especially in the defense industry. I wouldn't worry about it. If you like physics best, study physics. You are going to be most successful in the field you are most interested. | 2018-10-23 07:03:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33374613523483276, "perplexity": 1566.1243702752897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516117.80/warc/CC-MAIN-20181023065305-20181023090805-00076.warc.gz"} |
http://mymathforum.com/number-theory/339077-i-x.html | My Math Forum π(x)
Number Theory Number Theory Math Forum
February 14th, 2017, 04:51 PM #1 Newbie Joined: Jul 2014 From: Taiwan Posts: 7 Thanks: 0 π(x) Sorry, my math and English are poor, but I still want to ask a question about π(x). If we let x/ln(x) be substituted into x, and so that becomes to be [x/ln(x)]/ln[x/ln(x)], then repeat this step over and over again; thus, is it correct that finally equals π(x)? Would someone like to explain to me if that is correct?
February 14th, 2017, 07:00 PM #2 Senior Member Joined: Sep 2015 From: CA Posts: 1,303 Thanks: 666 repeated iteration of $\dfrac{x}{\ln(x)}$ results in $e,~\forall x > 1$ $\dfrac{x}{\ln(x)}=x$ $1 = \ln(x)$ $x = e$ $e \neq \pi(x)$
Tags hypothesis, πx, number theory, prime, prime number
Thread Tools Display Modes Linear Mode
Contact - Home - Forums - Cryptocurrency Forum - Top | 2017-08-23 06:07:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5677968263626099, "perplexity": 4297.801769489341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117874.26/warc/CC-MAIN-20170823055231-20170823075231-00085.warc.gz"} |
https://learn.careers360.com/ncert/question-which-of-the-following-numbers-are-prime-23-51-37-26/ | ## Filters
Q&A - Ask Doubts and Get Answers
Q
# Which of the following numbers are prime 23 51 37 26
8. Which of the following numbers are prime?
(a) 23 (b) 51 (c) 37 (d) 26
Views
(a) 23,23=1$\times$ 23,23=23 $\times$ 123,23=1 $\times$ 23,23=23 $\times$1 23 has only two factors, 1 and 23. Therefore, it is a prime number.
(b) 51,51=1 $\times$ 51,51=3 $\times$ 1751,51=1 $\times$ 51,51=3 $\times$ 17 51 has four factors, 1, 3, 17, 51. Therefore, it is not a prime number. It is a composite number.
(c) 37 It has only two factors, 1 and 37. Therefore, it is a prime number.
(d) 26 26 has four factors (1, 2, 13, 26). Therefore, it is not a prime number. It is a composite number.
Exams
Articles
Questions | 2020-02-19 04:19:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34376436471939087, "perplexity": 2250.1625047095304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144027.33/warc/CC-MAIN-20200219030731-20200219060731-00255.warc.gz"} |
https://tutorbin.com/questions-and-answers/subject/statistical-methods-in-management | ### Question 41191
Verified
Statistical Methods In Management
Suppose that $2200 is invested at 3% interest, compounded semiannually. Find the function for the amount of money after tyears. A=2200(1.03)^{2:} C_{1} A=2200(1.015)^{t} A=2200(1.015)^{2 t} A=2200(1.0125)^{21} ### Question 41190 Verified Statistical Methods In Management Jennifer invested$5,000 in her savings account for 5 years. When she withdrew it, she had $6,749.29. Interest was compounded continuously. What was the interest rate on the account? Round to the nearest tenth of a percent. 5.9%6.0%6.1%6.2% ### Question 41189 Verified Statistical Methods In Management Jack sets up a new hat store. His initial start-up costs are$13,000 plus he pays $3 for every hat he stocks. Jack then decides to sell each hat for$1 in the hopes ofbringing a lot of customers with his low prices. What do you know for sure about Jack's business?
A. He will never make money because he is charging his customers less than it costs him to stock his store.
O B. He will be hugely successful because he is offering high-quality products at a low price.
### Question 41185
Verified
Statistical Methods In Management
Sammy borrowed $10,000 to purchase a new car at an annual interest rate of 11%. She is to pay it back in equal monthly payments over a 5-year period. How much total interest will be paid over the period of the loan? Round to the nearest dollar. ### Question 41184 Verified Statistical Methods In Management Investing. How many years will it take$200 to grow to $3,500 if it is invested at 6% (A) compounded quarterly? (B) compounded continuously? ### Question 41182 Verified Statistical Methods In Management In order to purchase a home, a family borrows$70,000a 12% for 15 years. What is the monthly house payment to amortze the lan? Round the answer to the nearest cent.
$902.99$46.67$700.00$840.12
### Question 41181
Verified
Statistical Methods In Management
After a few months, Marco sits down with his accountant to discuss the progress of his bakery. His accountant goes into great detail about the cost, revenue and profit or the bakery. While Marco understands clearly what the cost of his business is, he has trouble distinguishing between revenue and profit. Which of the following best explains the difference?
A. The profit is how much the bakery earns over a period of time, while the profit is the costs that Marco has to spend minus the revenue.
OB. The revenue is how much the bakery earns over a period of time, while the profit is the revenue minus any costs that Marco has to spend.
C. The revenue is how much the bakery earns over a period of time, while the profit is the costs that Marco has to spend minus the revenue.
1. The profit is how much the bakery earns over a period of time, while the revenue is the profit minus any costs that Marco has to spend.
### Question 41180
Verified
Statistical Methods In Management
A sallbaat costs \$27,570. You pay 25% down and amortize the rest with equal monthly payments over a 9-year period. If you must pay 6.1% compounded monthly, what's your monthly payment? How much interest will you pay? | 2022-08-18 01:31:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34781140089035034, "perplexity": 1928.017982628177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573145.32/warc/CC-MAIN-20220818003501-20220818033501-00336.warc.gz"} |
https://www.physicsforums.com/threads/non-exact-integral.303137/ | # Non exact integral
#### orthovector
and exact integral is listed this way in LATEX symbols
$$\int dU$$
how do you write an inexact integral using LATEX symbols such as the integral of dW????
Related MATLAB, Maple, Mathematica, LaTeX, Etc News on Phys.org
uhhh,, hello????
#### MATLABdude
If you mean a path integral (the one denoted with either a subscript C on the integral or a circle in the middle of the integral) you might want to look here:
http://latex.wikia.com/wiki/Integral
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving | 2019-08-21 22:41:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216598272323608, "perplexity": 7291.220766429001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316549.78/warc/CC-MAIN-20190821220456-20190822002456-00476.warc.gz"} |
http://math.stackexchange.com/questions/35618/identity-for-solving-trig-equation | # Identity for solving trig equation
I have the following type of equation which I wish to solve for $t$:
$$\frac{x}{\cos(t)} - \frac{y}{\sin(t)} = z$$
I have used $c^2 + s^2 = 1$ to get it into the following form:
$$x\sqrt{1-\cos^2(t)} - y \cos(t) = z \cos(t)\sqrt{1-\cos^2(t)}$$
But now I am a little stuck as to how to continue. Is there another identity, e.g. double angle formulae that I should use?
-
First thing, a warning: $\sin(t)$ is not necessarily equal to $\sqrt{1-\cos(t)^2}$, you need $t \in [0, \pi] \pmod {2\pi}$.
As for your problem, I would suggest putting all the $\sqrt{1-\cos(t)^2}$ on the same side of the equation and the other term on the other, factor then square the whole thing. But remember, this only gives you necessary conditions (it's a $\Rightarrow$, not a $\Leftrightarrow$), therefore you need to check all the answers you may find at the end to see if they are in the right range.
-
I think this is the right way to do it, but I think the difficulties @Hans points out means I shall have to tackle this a different angle. Incidentally, it is for finding minimum distance points on an ellipse. Thanks for your help. – Bill Cheatham Apr 28 '11 at 15:27
@Bill: Ha, that's funny. My office roommate back when I was a PhD student once tried to find the point on an ellipse closest to a given point, and was surprised to discover that he couldn't get further than reducing the problem to solving a quartic equation... – Hans Lundmark Apr 28 '11 at 18:52
I suspect it will be hard to find a nice expression for the solution. If you do as zulon suggests, you will get an equation of degree 4 in $C=\cos t$. Alternatively, with $T=\tan(t/2)$ you get an equation of degree 4 in $T$ (using $\cos t = (1-T^2)/(1+T^2)$ and $\sin t=2T/(1+T^2)$).
-
Thanks. I think this suggests I've formualted the problem wrong. Possibly a numerical solution will do. – Bill Cheatham Apr 28 '11 at 15:25
Here you can find some details on using $\tan(t/2$ substitution: math.stackexchange.com/questions/9138/… (And also on simplifying something of the form $a\sin t+b\sin t$, which might be related to your question, but I do not see a way of simplifying your expression using this trick.) – Martin Sleziak Apr 28 '11 at 15:26
$$xsec(t)-ycosec(t)=z$$ $sec(t)=\sqrt{1+T^2}$ and $cosec(t)=\frac {\sqrt {1+T^2}}{T}$ where $T=tan(t)$ . Therefore $$xT\sqrt{1+T^2}+y\sqrt{1+T^2}=z\\=>\sqrt{1+T^2}=\frac{z}{xT+y}\\=>1+T^2=\frac{z^2}{x^2T^2+y^2+2xyT}\\=>x^2T^4+2xyT^3+(x^2+y^2)T^2+2xyT+y^2-z^2=0$$Now I am afraid you will have to solve the equation keeping in mind that the expression $\frac z{xT+y}$ is positive.
- | 2015-01-31 06:21:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8829779028892517, "perplexity": 203.22356142706582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869320.77/warc/CC-MAIN-20150124161109-00048-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/2395157/primitive-polynomial-vs-irreducible-polynomial-for-construction-field-gf2x | Primitive polynomial vs irreducible polynomial for construction field $GF(2)[x]/\langle p(x)\rangle$
Question 1: Can the field $GF(2)[x]/\langle p(x)\rangle$ be constructed using $p(x)$ irreducible but NOT primitive? what are the consequences?
In my readings, the math books talk about irreducible polynomial to construct a field, but in other literature for error correction codes they say that we use primitive polynomial instead.
For example, to construct the field $GF(2^4)$, there are 3 irreducible polynomials of degree 4 in $GF(2)[x]$, but one of them is not primitive:
• $p_1(x)=x^4 + x + 1$ (irreducible AND primitive)
• $p_2(x)=x^4 + x^3 + 1$ (irreducible AND primitive)
• $p_3(x)=x^4 + x^3 + x^2 + x + 1$ (irreducible but NOT primitive)
Matlab, for example, refuses to "create" an element of such a field using the above $p_3(x)$ polynomial:
>> a=gf(15, 4, 'x^4 + x^3 + x^2 + x + 1')
Error using gf (line 96)
PRIM_POLY must be a primitive polynomial.
EDIT:
I've found that using $p_3(x)=x^4+x^3+x^2+x+1$ (irreducible but NOT primitive) into $GF(2)[x]/\langle p_3(x) \rangle$ results in a field, with various elements that are generators: $x+1$, $x^2+1$, $x^2+x$, $x^2+x+1$, etc . For example $g=x+1$ is a generator of the multiplicative group, because the sequence $g^0, g^1, ..., g^{14}$ generates every element of this field, and multiplication using sum of exponents works. One curious point is that $x$ is not a generator of this field using $p_3(x)$, and $x^0=x^5=x^{10}=1$ (there are repetitions).
Then I've learned that irreducible poly is mandatory to make this a field, but primitive is an extra just to make exponentials prettier. For example, the sequence of powers for the generator $x+1$: $\{(x+1)^1, (x+1)^2, \dots\}$ is cumbersome when compared with $\{x^1, x^2, \dots\}$, then it is conveninent to have $x$ as a generator element.
• What is the definition of primitive here? – carmichael561 Aug 16 '17 at 5:06
• @carmichael561, primitive poly in $GF(2)[x]$ are those irreducible polynomials of degree $m$ that divides $a(x)=x^n+1$, where $n=2^m-1$, but not divides any such $a(x)$ with smaller $n$. – Berk7871 Aug 16 '17 at 5:11
• Use something better than MATLAB! – Lord Shark the Unknown Aug 16 '17 at 5:14
• Here we explained some points that you may find interesting. My semi-educated guess is that Matlab's restriction is related to either the 1st point in my answer or some consequences related to it. Getting to use efficient algorithms based on the discrete Fourier transformation for some purpose? Or some such gadgets? Those are frequently needed in coding theoretical applications. Telcomm industry is one of the heavy users of Matlab, so... – Jyrki Lahtonen Aug 16 '17 at 6:07
• @carmichael561 I added an explanation to the tag wiki because the question/confusion comes up frequently enough. – Jyrki Lahtonen Aug 16 '17 at 6:17
The only thing selecting a primitive polynomial does is force (the congruence class of) $x$ to be a generator of the multiplicative group. In fact, this is the definition of "primitive". | 2019-10-18 04:56:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667572140693665, "perplexity": 370.5272583259484}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677884.28/warc/CC-MAIN-20191018032611-20191018060111-00043.warc.gz"} |
https://www.shaalaa.com/question-bank-solutions/the-volume-cube-2744-cm2-its-surface-area-concept-of-surface-area-volume-and-capacity_75669 | # The Volume of a Cube is 2744 Cm2. Its Surface Area is - Mathematics
MCQ
The volume of a cube is 2744 cm2. Its surface area is
• 196 cm2
• 1176 cm2
• 784 cm2
• 588 cm2
#### Solution
1176 cm2
Let the edge of the cube be a cm.
Then, volume of the cube = a3
Or,
a = (2× 73
a = 2 × 7
a = 14 cm
Therefore, surface area of the cube = 6"a"^2
= (6 × 14 × 14) cm2
= 1176 cm
Is there an error in this question or solution?
#### APPEARS IN
RS Aggarwal Secondary School Class 10 Maths
Chapter 19 Volume and Surface Area of Solids
Multiple Choice Questions | Q 36 | Page 921 | 2021-04-10 11:01:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38156574964523315, "perplexity": 3489.7936792610885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056869.3/warc/CC-MAIN-20210410105831-20210410135831-00150.warc.gz"} |
http://mathhelpforum.com/algebra/133898-proof-mathematical-induction-inequality-print.html | # Proof of mathematical induction inequality
• Mar 15th 2010, 07:30 AM
blackhug
Proof of mathematical induction inequality
Hi,
Im having trouble trying to prove the following:
n^2 > 2n; n 3
• Mar 15th 2010, 07:55 AM
MarNie
Make first an equation out of it and try to understand how these numbers relate
• Mar 15th 2010, 08:43 AM
Henryt999
Or you can, do this
In order to prove it by induction u have to prove that it is true for n=3
and then assume that it is true for some n and that it then follows for n+1
So
Step one:
Is it true for 3?
3^2>2*3......True!
Good not we assume that it is true for n and prove that it then follows for n+1.
$n^2>2*n$
(n+1)^2>2*(n+1)
n^2+2n+1>2n+1
and almost done now =).
n^2>2n by our first assumtion.
so we can subtract that from the inequality.
then we are left with 2n+1>1
and since n=positive number clearly this is true...
Hope I was clear enough can you take it from here
?
Regards Henry
• Mar 15th 2010, 08:54 AM
Raoh
hi :)
let $S=\left \{ n\in \mathbb{N}\mid n\geq 3,\text{and},n^2> 2n \right \}$
Clearly $3\in S$ since $9> 6$,now assume $m\in S$ to prove $m+1\in S$ we must prove $(m+1)^2> 2(m+1)$...
i'll let you do that :).
• Mar 15th 2010, 01:10 PM
blackhug
Thanks for the replies.
when I tried to do it I got as far as:
$
k^2 + 2k + 1 > 2k + 2
$
I wasn't aware I could subtract $k^2 > 2k$ from the inequality.
• Mar 15th 2010, 03:34 PM
Quote:
Originally Posted by blackhug
Thanks for the replies.
when I tried to do it I got as far as:
$
k^2 + 2k + 1 > 2k + 2
$
I wasn't aware I could subtract $k^2 > 2k$ from the inequality.
In that case you can simply write
$k^2+(2k+1)>(2k+1)+1$ ?
2k+1 is common to both sides so it is redundant,
hence we can eliminate it.
$k^2>1$ ?
This is true for k>1 | 2017-06-24 21:31:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9506753087043762, "perplexity": 1289.2690874650132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320338.89/warc/CC-MAIN-20170624203022-20170624223022-00589.warc.gz"} |
https://www.vedantu.com/question-answer/prove-the-following-trigonometric-identity-left-class-10-maths-cbse-5efbda2ca261932d30209651 | QUESTION
# Prove the following trigonometric identity:$\left( 1+\cot A+\tan A \right)\left( \sin A-\cos A \right)=\sin A\tan A-\cot A\cos A$
Hint: Take the left hand side or the given expression. Put $\cot A=\dfrac{\cos A}{\sin A},\tan A=\dfrac{\sin A}{\cos A}$.
Then do the multiplication and arrange the terms to get the right hand side.
We have to prove the following identity:
$\left( 1+\cot A+\tan A \right)\left( \sin A-\cos A \right)=\sin A\tan A-\cot A\cos A$
Let us first take the left hand side of the above expression.
$\left( 1+\cot A+\tan A \right)\left( \sin A-\cos A \right)$
We know that $\cot A=\dfrac{\cos A}{\sin A},\tan A=\dfrac{\sin A}{\cos A}$. Put these values in the above expression.
$=\left( 1+\dfrac{\cos A}{\sin A}+\dfrac{\sin A}{\cos A} \right)\left( \sin A-\cos A \right)$
By multiplying both the terms we will get,
$=\sin A\left( 1+\dfrac{\cos A}{\sin A}+\dfrac{\sin A}{\cos A} \right)-\cos A\left( 1+\dfrac{\cos A}{\sin A}+\dfrac{\sin A}{\cos A} \right)$
$=\sin A+\cos A+\dfrac{{{\sin }^{2}}A}{\cos A}-\left( \cos A+\dfrac{{{\cos }^{2}}A}{\sin A}+\sin A \right)$
Now we will adjust terms in such a way so that we can get the right hand side.
$=\sin A+\cos A+\dfrac{\sin A}{\cos A}\times \sin A-\cos A-\sin A-\dfrac{\cos A}{\sin A}\times \cos A$
Now we can cancel out the opposite terms from the above expression and we will put $\cot A=\dfrac{\cos A}{\sin A},\tan A=\dfrac{\sin A}{\cos A}$
Therefore,
$=\tan A\sin A-\cot A\cos A$, this is our right hand side expression
Hence,
$\left( 1+\cot A+\tan A \right)\left( \sin A-\cos A \right)=\sin A\tan A-\cot A\cos A$
Note: Alternatively we can start the proof with the right hand side. That is:
$\sin A\tan A-\cot A\cos A$
Put $\cot A=\dfrac{\cos A}{\sin A},\tan A=\dfrac{\sin A}{\cos A}$.
$=\sin A\dfrac{\sin A}{\cos A}-\cos A\dfrac{\cos A}{\sin A}$
$=\dfrac{{{\sin }^{2}}A}{\cos A}-\dfrac{{{\cos }^{2}}A}{\sin A}$
Now take the left hand side. By putting $\cot A=\dfrac{\cos A}{\sin A},\tan A=\dfrac{\sin A}{\cos A}$ and multiplying both the terms we will get:
$=\dfrac{{{\sin }^{2}}A}{\cos A}-\dfrac{{{\cos }^{2}}A}{\sin A}$
Therefore,
Left hand side = right hand side. | 2020-07-09 08:43:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9124367833137512, "perplexity": 207.41441964450394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899209.48/warc/CC-MAIN-20200709065456-20200709095456-00466.warc.gz"} |
https://www.sparrho.com/item/process-for-decoding-alamouti-block-code-in-an-ofdm-system-and-receiver-for-the-same/df67d1/ | # Process for decoding ALAMOUTI block code in an OFDM system, and receiver for the same
Imported: 17 Feb '17 | Published: 23 Sep '14
USPTO - Utility Patents
## Abstract
A process for decoding a signal being representative of a Space Time or Frequency Block coding during two signaling periods (STBC) or two parallel channels (SFBC) is provided. The process receives an OFDM signal received from at least one antenna. The process also performs an OFDM demodulation in order to generate N frequency domain representations of the received signal. Then the process performs a decoding process on said OFDM demodulated signal and groups the received signal in word code, Y=(y1, y2), to represent the signal that was received during two signaling periods (STBC) or two parallel channels (SFBC). The word code is then decoded into a matrix after which a lattice reduction algorithm is applied to the matrix in order to transform the matrix into a reduced matrix having a near orthogonal vector. Then it performs a detection process on the reduced matrix to improve noise and interference immunity.
## Description
### CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a U.S. National Phase application submitted under 35 U.S.C. §371 of Patent Cooperation Treaty application serial no. PCT/EP2010/006508, filed Oct. 25, 2010, and entitled PROCESS FOR DECODING ALAMOUTI BLOCK CODE IN AN OFDM SYSTEM, AND RECEIVER FOR THE SAME, which application claims priority to European patent application serial no. 09368041.1, filed Oct. 26, 2009, and entitled PROCESS FOR DECODING ALAMOUTI BLOCK CODE IN AN OFDM SYSTEM, AND RECEIVER FOR THE SAME.
Patent Cooperation Treaty application serial no. PCT/EP2010/006508, published as WO 2011/050935, and European patent application serial no. EP 09368041.1, are incorporated herein by reference.
### TECHNICAL FIELD
The invention relates to the field of wireless communication and more particularly to a process for decoding ALAMOUTI block code in an OFDM system, and a receiver for doing the same.
### BACKGROUND
Digital wireless communications are being widely used throughout the world particularly with the latest development of the Orthogonal Frequency Division Multiplex (OFDM systems) and the latest evolution, namely the so-called Long Term Evolution (LTE), DVB-H, WiFi 802.11 and WiMax 802.16 systems.
OFDM is a frequency-division multiplexing (OFDM) scheme utilized as a digital multi-carrier modulation method. As it is well known to one skilled in the art, OFDM systems demonstrate significant advantages in comparison to single-carrier schemes, particularly in their ability to cope with severe channel conditions (i.e., channel attenuation, narrowband interference, frequency-selective fading).
The combination of OFDM and multiple antennas in either the transmitter or receiver is attractive to increase a diversity gain.
In that respect, the well known ALAMOUTI scheme, as disclosed in document “A simple transmit diversity technique for wireless communications”, by in S. M. ALAMOUTI, IEEE J. Selected Areas of Communications, vol. 16, pp. 1451-1458, October 1998, has revealed to be extremely efficient in allowing wireless and cellular systems to increase link reliability. Its efficiency proves because of the extremely simple encoding technique at the transmitter and more importantly in the low complexity linear and optimal decoding which can also easily be extended to multiple receiving antenna case.
With respect to FIG. 1, there is recalled the general principle of the transmission scheme in accordance with the ALAMOUTI Space Time Block coding.
Considering, as shown in the figure that the following sequence of complex symbols should be transmitted: x1, x2, x3, x4.
In normal transmission, a first time slot would be allocated for the transmission of x1, a second time slot would be allocated for x2 etc.
Now, considering the ALAMOUTI scheme and more particularly the Space-Time Block Code (STBC), those symbols are now grouped in two.
During the first time slot, x1 and x2 are respectively transmitted by the first and second antenna while, in the second time slot, −x2* and x1* are respectively sent through the first and second antenna. In the third time slot, x3 and x4 are transmitted by the first and second antenna while, in the fourth time slot, the two antennas transmit −x4* and x3*, respectively, and so on.
It can be noticed that such block coding has no effect on the data rate since two time slots are still required for the transmission of two symbols.
y1=h1x1+h2x2+n1
In the second time slot, the received signal is,
y2=−h1x*2+h2x*1+n2
where
y1, y2 is the received symbol on the first and second time slot respectively,
h1 is the channel from 1st transmit antenna to receive antenna,
h2 is the channel from 2nd transmit antenna to receive antenna,
x1, x2 are the transmitted symbols and
n1 n2, is the noise on 1st and 2nd time slots.
What can be expressed as follows:
$[ y 1 y 2 * ] = [ h 1 h 2 h ~ 2 * - h ~ 1 * ] [ x 1 x 2 ] + [ n 1 n 2 * ] ( 1 )$
Let us define
$H = [ h 1 h 2 h 2 * - h 1 * ] .$
And H+ being the pseudo inverse defined as: H+=(HH H)−1HH
Solving the equation Y=Ax above, leads to the following
$[ x 1 x 2 * ] ^ = ( H H H ) - 1 H H [ y 1 y 2 * ] ( 2 )$
Applied to the frequency domain rather than in the time domain, the ALAMOUTI block coding results, in the so-called Space-Frequency Block Code (SFBC), two consecutive and neighboring subcarriers within the same OFDM symbol, instead of two consecutive time slots.
The use of such space block significantly increases the link reliability of wireless and cellular systems without requiring a significant increase in the complexity of the receiver.
It is particularly effective because of the extremely simple encoding technique at the transmitter and more importantly in the low complexity linear and optimal decoding which can also easily be extended to multiple receiving antenna case.
However, such benefit strongly relies on the assumption that the channel remains constant over two time slots or, in OFDM, between two neighboring subcarriers or resources.
Such assumption of static conditions over the two periods or channel uses spanning its transmission is actually never verified in practice and remains ideal.
In OFDM, the channel is selective because of the time-varying or frequency selective nature of the terminal mobility and the rich scattering of the wireless environment:
long channel delay spread, e.g., hilly terrain propagation
low channel coherence bandwidth, i.e., high relative speed between the base station (BS) and the wireless mobile receiver.
When the static assumption is not verified, the demodulation process tends to become much more complicated.
Indeed, the conventional low complexity methods, such as the very basic matched filter, and even the more sophisticated linear processings (Zero-forcing, MMSE equalization) shows little efficiency and remain sub-optimal.
The well known Maximum Likelihood would be optimal but becomes highly complex as the size of the modulation increases (exponential complexity of the order of 2M), where M is the order of the modulation used, i.e., M=2 for QPSK, M=4 for 16-QAM and M=6 for 64-QAM.
On the other hand, the Near-ML detection based on Sphere Decoding: optimal (slight decrease in coding gain) could be another solution, but still shows high level of complexity (polynomial complexity in function of modulation order M3 in average).
Therefore, there is a desire for a new method which allows decoding of the ALAMUTI code, with low complexity, even in the case where the channel shows variation between two neighboring subcarriers or OFDM blocks.
### SUMMARY
Embodiments of the present invention provide a process for decoding ALAMOUTI block code in an OFDM system which requires little complexity.
Embodiments of the present invention provide a new process for performing improved signal detection in of ALAMOUTI block-codes (SFBC or STBC) in presence of highly selective channels (long delay spread or high Doppler, respectively).
These and other embodiments of the invention are achieved by means of the process for decoding a signal being representative of a STBC or SFBC, comprising the transmission, firstly, of a pair of finite-alphabet complex symbols x1 and x2 with, secondly, the symbols −x2* and x1* (*being the conjugate operation) during two signaling periods (STBC) or two parallel channels (SFBC), which comprises the steps of:
receiving an OFDM signal received from at least one antenna;
performing an OFDM demodulation in order to generate N frequency domain representations of said received signal, each associated to one carrier;
performing a decoding process applied on said OFDM demodulated signal, in order to group the received signal in word code Y=(y1, y2) being representative of the received signal received during two signaling periods (STBC) or two parallel channels (SFBC),
decoding said word code Y=(y1, y2) in order to compute the transmitted symbols x1 and x2 in accordance with the following formulation:
$[ y 1 y 2 * ] = [ h 1 h 2 h ~ 2 * - h ~ 1 * ] [ x 1 x 2 ] + [ n 1 n 2 * ]$ $Or$ $y = Hx + n$
where h1 and h2 being representative of the channel applicable to the transmission of x1 and x2 while {tilde over (h)}1′ and {tilde over (h)}2 are representative of the channel applicable to the transmission of −x2* and x1*;
n1 n2, being the noise.
The process further involves a lattice reduction on matrix H=(b1, b2) in order to transform said matrix in a reduced matrix Hred=(b′1, b′2) having vector being close to orthogonal, then followed by a detection process applied on said reduced matrix in order to improve immunity with respect to noise and interference.
Such process is a non linear method—which efficiency is close to that of the Maximum Likelihood (Near ML) but which offers a complexity which is in average polynomial (cube) in the channel matrix size but remains independent on the size of the constellation.
Therefore, the complexity can be greatly reduced exploiting channel coherence
Furthermore, in the particular 2×2 ALAMOUTI case, the use of the Lattice Reduction method defined above proves very efficient.
The process is applicable for both STBC coding, wherein the transmission of symbol x1 (resp. x2) and symbol −x2* (resp. x1*) are performed during two consecutive OFDM frames, or for SFBC wherein the same are transmitted through two consecutive carriers within the same OFDM frame.
In one embodiment, the lattice reduction applied on each carrier k is based on an iterative algorithm with an initialization performed with the values of the reduced channel processed for carrier k−1. This has for result of significant lowering the complexity of the receiver.
In one embodiment, the lattice reduction comprises the steps of:
checking the correlation by testing whether
I Re {<b1, b2>}l≦½ Il b1Il2 and
I Im {<b1, b2>}l≦½ Il b1Il2, and, if not,
replace b2 with
$b 2 - ⌊ ( b 1 , b 2 ) b 1 2 ⌉ b 1$
and repeat again.
Optionally, the process further comprises testing the modulus of b1 and b2, i.e., square root of the sum of the squared value of each element of the vector.
The invention embodiments also provide a receiver for an OFDM communication system which can decode ALAMOUTI SFBC or STBC decoding, which comprises:
means for receiving an OFDM signal received from at least one antenna;
means for performing an OFDM demodulation in order to generate N frequency domain representations of said received signal, each associated to one carrier;
means for performing a decoding process applied on said OFDM demodulated signal, in order to group the received signal in word code Y=(y1, y2) being representative of the received signal received during two signaling periods (STBC) or two parallel channels (SFBC),
means for decoding said word code Y=(y1, y2) in order to compute the transmitted symbols x1 and x2 in accordance with the following formulation:
$[ y 1 y 2 * ] = [ h 1 h 2 h ~ 2 * - h ~ 1 * ] [ x 1 x 2 ] + [ n 1 n 2 * ]$
where h1 and h2 being representative of the channel applicable to the transmission of x1 and x2 while {tilde over (h)}1′ and {tilde over (h)}2 are representative of the channel applicable to the transmission of −x2* and x1*;
n1 n2, being the noise.
means for applying a lattice reduction algorithm on said matrix H=(b1, b2) in order to transform said matrix in a reduced matrix Hred=(b′1, b′2) having vector being close to orthogonal;
means for performing a detection process on said reduced matrix in order to improve immunity with respect to noise and interference.
In one embodiment, the lattice reduction can be combined with Zero Forcing (ZF), Matched Filter (MF) or Decision Feedback (DF) detection.
The invention is particularly suitable for embodying an US equipment for the Long Term Evolution (LTE), such as a mobile telephone.
### DETAILED DESCRIPTION
There will now be described one particular embodiment of a process which is applicable to any OFDM communication systems, such as Long Term Evolution (LTE), Digital Video Broadcasting Handheld (DVB-H), IEEE 802.11b Direct Sequence Spread Spectrum or Wifi, Wimax etc.
Clearly, the process is more general and could be applied to any other form of OFDM system.
More generally, while the invention shows high efficiency in the simple 2×2 ALAMUTI case, it can be applied to more than two antennas.
With respect to FIG. 2, there is illustrated a block diagram of a Space-Time block coding communication system complying with the ALAMOUTI scheme and which can advantageously use the process described below.
A transmitter 20 comprises a block 21 consisting of a source of information symbol, which are forwarded to a Space Time Black encoder 22 complying with the ALAMOUTI space time coding. The symbols are grouped in blocks of two symbols and are then passed to an OFDM modulation 23, and then to the Transmit Radio Frequency front-end circuits 24 supplying two transmit antennas.
A receiver 30 comprises, in addition to a Receive Radio Frequency front-end circuits 31 and an OFDM demodulator 32, a ST Block decoder 33 achieving the reverse ALAMOUTI decoding for the purpose of regenerating the original sequence of symbols which are forwarded to the decoder 34.
In order to significant simplify the structure of the demodulation, the decoding applied by block 34 on the sequence of symbols resulting from the ALAMOUTI decoding is now based on a non linear process involving an iterative process based carriers of the OFDM symbol.
In addition, for each carrier, the iterative process is initialized with the result of the lattice process computation performed on the preceding carrier.
I. Signal Model Definition
First consider a communication system between a transmitter with two antennas and a receiver with one receiving antenna. Despite this simplification, the results presented are general and can be extended to multiple receiving antennas case. The transmitter employing ALAMOUTI transmit-diversity scheme requires two signaling periods or two parallel channels to convey a pair of finite-alphabet complex symbols x1 and x2: during the first symbol period, the first antenna sends x1 and the second antenna sends x2; in the second period, the symbols −x2* and x1* are respectively transmitted by first and second antenna.
Denote h1 and h2 the complex flat-fading channel coefficients between the two transmit antennas and the receiving antenna during the first period while {tilde over (h)}1′ and {tilde over (h)}2 are the channel coefficients of the second symbol period. It is easy to show that the received symbol vector can be conveniently written in matrix form as
$[ y 1 y 2 * ] = [ h 1 h 2 h ~ 2 * - h ~ 1 * ] [ x 1 x 2 ] + [ n 1 n 2 * ] ( 3 )$
With * being the complex conjugate. The same expression can be written in a more compact way as:
y=Hx+n (4)
where
• n is the zero-mean circularly symmetric complex Gaussian noise vector whose covariance matrix is equal to I.
• Rayleigh fading channel coefficients such that h1, h2, {tilde over (h)}1′ and {tilde over (h)}2 are zero-mean circularly symmetric complex Gaussian random variables each with variance equal to σh2, i.e. E[|h1|2]=E[|h2|2]=σh2 with E[•] denoting the expectation operator;
• uncorrelated transmitting antennas such that h1 and h2 are independent, i.e. E[h1, h2*]=0.
• Correlated channel coefficients between the two symbol such that
E[h1{tilde over (h)}1*]=E[h2{tilde over (h)}2*]=ρ
where ρ is the complex correlation factor with |ρ|2≦1. We stress the fact about ρ being complex as this is the general case. The correlated processes are generated using a first-order auto-regressive model as
{tilde over (h)}i=ρhi+√{square root over (1=ρ2)}wi
with wi being again a zero-mean circularly symmetric complex Gaussian random variable with variance equal to ρh2.
• x being a vector such, for example, a vector of Binary Phase-Shift Keying (BPSK) symbols with xiε{±1}.
II. Lattice Reduction for Reducing the H Matrix in the Receiver
As mentioned in the Background section, the ALAMOUTI decoding applied on the equation (4) above leads to a complex demodulation because of the non static condition of the channel during the two consecutive time interval (in Space-Time Block Coding) or between the two consecutive carriers in the OFDM symbol (in Space-Frequency Block Coding).
Such complexity in the demodulation entails the need of non linear decoding method to be applied in the receiver 30 of FIG. 2.
The inventors have discovered that one particular non linear method, based on an iterative process using lattice reduction, can provide advantageous decoding without a high level of complexity.
Lattice reduction is a non linear method whose complexity is in average polynomial (cube) in the channel matrix size but independent on constellation size, its complexity can be greatly reduced exploiting channel coherence.
It is a near ML method, hence provides quasi ML performance (at the expense of some coding gain loss). While feasible Lattice Reduction algorithms are sub-optimal compared to theoretical one (exponential complexity), it has been discovered for a 2×2 matrix case, an optimal formulation of Lattice Reduction does exist (i.e., Lattice Reduction algorithm is exactly as Korkine-Zolotarev).
Considering that H in formula (4) can be written as two vectors b1 and b2, such as:
H=(b1,b2),
the following algorithm below can be used for generating a reduced matrix X which can be used in the detection process of the Receiver.
Step 1: check the correlation
If I Re {<b1, b2>}l≦½ Il b1Il2 and I Im {<b1, b2>}l≦½ Il b1Il2,
Then stop.
Where <b1, b2> being the inner product defined as equal to b1H·b2. The inner product is representative of the projection of b2 on b1.
Otherwise, replace b2 with
$b 2 - ⌊ ( b 1 , b 2 ) b 1 2 ⌉ b 1$
and go to step 2
Step 2 (optional): check the modulus (or relative power):
If lb2Il≧lb1Il, then stop. Otherwise, swap b1 and b2 and go to step 1.
Step 2 is optional in the case of only 2 transmitting antennas.
Such algorithm achieves, even with highly correlated values of b1 and b2, finding a more orthogonal basis by generating a reduced matrix Hred which improves the performance of the linear detection of the receiver (be it a ZF, MF etc.) by providing decision regions more robust against noise and interference.
FIG. 3 illustrates the change of basis resulting from the Lattice Reduction (LR) algorithm which is described above, in comparison with the so-called Maximum Likelihood Detector (MLD).
It has been discovered that such algorithm finds regions which are slightly smaller (representative of a coding gain loss) compared to the Maximum Likelihood (ML) method, but which still shows to be optimal. Therefore, there is achieved optimal diversity gain retrieval.
In OFDM the process is carried out for all the subcarriers k coded with ALAMOUTI scheme, and the overall complexity should scale as the complexity for one sub-carrier times the number of subcarriers.
In one embodiment, the lattice reduction algorithm is applied for each carrier k in the OFDM symbol, and the algorithm processing Hk uses initialization values for parameters b1k and b2k, which are set to be equal to the values of b1k-1 and b2k-2 b1 computed at the preceding iteration.
It has been shown such initialization causes the iterative LR algorithm to converge on its own very rapidly, thus decreasing the complexity of the whole mechanism. For instance, it has been noticed that complexity can be decreased by a factor 10 (one Lattice reduction is computed for 10 neighboring sub-carriers) for low-to-medium delay spread channels.
With FIG. 4, there is illustrated an example of one embodiment of the algorithm, based on a MAT LAB formulation, allowing iterative computation of the reduction matrix Hred, required for carrying out the optimal Lattice reduction (optLR).
With respect to FIG. 5, there is now described the different steps which are used in new receiver achieving ALAMOUTI decoding with the use of Lattice Reduction algorithm.
In a step 51, the process receives an OFDM signal received from at least one antenna.
In a step 52, the process performs an OFDM demodulation in order to generate N frequency domain representations of said received signal, each associated to one carrier.
It should be noticed that such steps 51 and 52 are conventional in the technical field of OFDM communication and well known to a skilled man.
Then, in a step 53, the process proceeds with a decoding process applied on the OFDM demodulated signal, in accordance with the particular coding being utilized, namely the Space Time Block Code (STBC) or the Space-Frequency Block Code (SFBC).
Such ALAMOUTI decoding results in the generation of a word code Y=(y1, y2).
Then, the process proceeds with the decoding of the word code Y=(y1, y2) in order to compute the transmitted symbols x1 and x2 in accordance with the formula (3) above.
This is achieved, as shown in a step 54, by applying a lattice reduction algorithm on said matrix H=(b1, b2) in order to transform said matrix in a reduced matrix Hred=(b′1, b′2) having vector being close to orthogonal.
In one particular embodiment, the lattice reduction algorithm is applied with an initialization step which includes the values of the reducted matrix which were computed by the preceding iteration, thus taking a great advantage of the channel coherence.
Then, in a step 55, the process proceeds with the decoding of the received symbols using the reduced Hred matrix, and then proceed again with step 51 for the purpose of processing new samples.
With respect to FIG. 6, there is illustrates some simulation results showing the evolution of the Block Error Rate (BER) as a function of the Signal to Noise ration, for different combinations of the proposed method with conventional methods:
ML: Maximum Likelihood;
ZF: Zero-forcing,
LR-ZF Lattice Reduction-Zero Forcing
MF: Matched Filter
DF: Decision Feedback
## Claims
1. A process for decoding a signal being representative of a Space Time Block coding (STBC) or Space Frequency Block coding (SFBC) based on a transmission, firstly, of a pair of finite-alphabet complex symbols x1 and x2, and secondly, of symbols −x2* and x1*, wherein * is a conjugate operation, during two parallel channels (SFBC), wherein the process comprises:
receiving an OFDM signal, from at least one antenna, by a receiver comprising one single antenna;
performing an OFDM demodulation in order to generate N frequency domain representations of the received OFDM signal, each N frequency domain representation being associated to one carrier;
performing a decoding process applied on the demodulated OFDM signal, in order to group the received OFDM signal in word code, Y=(y1, y2), being representative of the received OFDM signal received during two signaling periods (STBC) or two parallel channels (SFBC);
decoding the word code, Y=(y1, y2), in order to compute transmitted symbols x1 and x2 in accordance with a formulation,
$[ y 1 y 2 * ] = [ h 1 h 2 h ~ 2 * - h ~ 1 * ] [ x 1 x 2 ] + [ n 1 n 2 * ]$ $Or$ $y = Hx + n$
wherein H is a matrix, h1 and h2 are representative of the channel applicable to the transmission of x1 and x2 while {tilde over (h)}1′ and {tilde over (h)}2 are representative of the channel applicable to the transmission of −x2* and x1*, and wherein n1 and n2 are noise,
wherein decoding the word code comprises:
applying a lattice reduction algorithm on the matrix H, H=(b1, b2), in order to transform the matrix H into a reduced matrix Hred=(b′1, b′2) comprising vectors that are substantially orthogonal, wherein the lattice reduction algorithm on each carrier k is based on an iterative algorithm that was initialized based on values (b1k-1, b2k-2) computed at a preceding iteration; wherein the lattice reduction algorithm comprises:
checking the correlation by testing whether
I Re {b1, b2>}1≦½Il b1Il2 and
I Im {<b1, b2>}1≦½Il b1Il2, and, if not,
replacing b2 with
$b 2 - ⌊ ( b 1 , b 2 ) b 1 2 ⌉ b 1$
and repeating the checking step again; and
performing a detection process on the reduced matrix Hred in order to minimize noise and interference immunity.
receiving an OFDM signal, from at least one antenna, by a receiver comprising one single antenna;
performing an OFDM demodulation in order to generate N frequency domain representations of the received OFDM signal, each N frequency domain representation being associated to one carrier;
performing a decoding process applied on the demodulated OFDM signal, in order to group the received OFDM signal in word code, Y=(y1, y2), being representative of the received OFDM signal received during two signaling periods (STBC) or two parallel channels (SFBC);
decoding the word code, Y=(y1, y2), in order to compute transmitted symbols x1 and x2 in accordance with a formulation,
wherein H is a matrix, h1 and h2 are representative of the channel applicable to the transmission of x1 and x2 while {tilde over (h)}1′ and {tilde over (h)}2 are representative of the channel applicable to the transmission of −x2* and x1*, and wherein n1 and n2 are noise,
wherein decoding the word code comprises:
applying a lattice reduction algorithm on the matrix H, H=(b1, b2), in order to transform the matrix H into a reduced matrix Hred=(b′1, b′2) comprising vectors that are substantially orthogonal, wherein the lattice reduction algorithm on each carrier k is based on an iterative algorithm that was initialized based on values (b1k-1, b2k-2) computed at a preceding iteration; wherein the lattice reduction algorithm comprises:
checking the correlation by testing whether
I Re {b1, b2>}1≦½Il b1Il2 and
I Im {<b1, b2>}1≦½Il b1Il2, and, if not,
replacing b2 with
applying a lattice reduction algorithm on the matrix H, H=(b1, b2), in order to transform the matrix H into a reduced matrix Hred=(b′1, b′2) comprising vectors that are substantially orthogonal, wherein the lattice reduction algorithm on each carrier k is based on an iterative algorithm that was initialized based on values (b1k-1, b2k-2) computed at a preceding iteration; wherein the lattice reduction algorithm comprises:
checking the correlation by testing whether
I Re {b1, b2>}1≦½Il b1Il2 and
I Im {<b1, b2>}1≦½Il b1Il2, and, if not,
replacing b2 with
checking the correlation by testing whether
I Re {b1, b2>}1≦½Il b1Il2 and
I Im {<b1, b2>}1≦½Il b1Il2, and, if not,
replacing b2 with
and repeating the checking step again; and
performing a detection process on the reduced matrix Hred in order to minimize noise and interference immunity.
and repeating the checking step again; and
performing a detection process on the reduced matrix Hred in order to minimize noise and interference immunity.
and repeating the checking step again; and
and repeating the checking step again; and
2. The process according to claim 1, wherein the coding is STBC coding such that the transmission of symbol x1 (resp. x2) and symbol −x2*(resp. x1*) occur during two consecutive OFDM frames.
3. The process according to claim 1, wherein the coding is a SFBC coding such that the transmission of symbol x1 (resp. x2) and symbol −x2*(resp. x1*) occur through two consecutive carriers in one OFDM frame.
4. The process according to claim 1, wherein the lattice reduction algorithm further comprises an optional step of testing the length of b1 and b2.
5. A receiver for decoding a signal being representative of a Space Time or Frequency Block coding based on a transmission, firstly, of a pair of finite-alphabet complex symbols x1 and x2 with, secondly, the symbols −x2* and x1* (* being the conjugate operation) during two signaling periods (STBC) or two parallel channels (SFBC), the receiver comprising:
one antenna;
an OFDM demodulator adapted to generate a demodulated OFDM signal comprising N frequency domain representations of the received OFDM signal, wherein each N frequency domain representation is associated to one carrier;
a decoder adapted to perform a decoding process on the demodulated OFDM signal, wherein the decoding process groups the received demodulated OFDM signal in word code, Y=(y1, y2), being representative of the received OFDM signal during two signaling periods (STBC) or two parallel channels (SFBC);
wherein the decoder is further adapted to decode the word code, Y=(y1, y2), in order to compute the transmitted symbols x1 and x2 in accordance with a formulation,
$[ y 1 y 2 * ] = [ h 1 h 2 h ~ 2 * - h ~ 1 * ] [ x 1 x 2 ] + [ n 1 n 2 * ]$ $Or$ $y = Hx + n$
where H is a matrix, h1 and h2 are representative of the channel applicable to the transmission of x1 and x2 while {tilde over (h)}1′ and {tilde over (h)}2 are representative of the channel applicable to the transmission of −x2* and x1*, and wherein n1 and n2 are noise;
wherein the decoder is further adapted to apply a lattice reduction algorithm on said matrix H, H=(b1, b2), in order to transform said matrix H in a reduced matrix Hred, Hred=(b′1, b′2), comprising vectors that are substantially orthogonal, wherein the lattice reduction algorithm on each carrier k is based on an iterative algorithm that was initialized based on values (b1k-1, b2k-2) computed at a preceding iteration; and
wherein when the decoder applies the lattice reduction algorithm, the decoder is further configured to check a correlation by determining whether:
|Re {<b1, b2>}|≦½∥b12 and
|Im {<b1, b2>}|≦½∥b12., and, if the correlation is not true, then
replace b2 with
$b 2 - ⌊ ( b 1 , b 2 ) b 1 2 ⌉ b 1$
and check the correlation again; and
a detector adapted to detect received symbols using the reduced matrix Hred.
one antenna;
an OFDM demodulator adapted to generate a demodulated OFDM signal comprising N frequency domain representations of the received OFDM signal, wherein each N frequency domain representation is associated to one carrier;
a decoder adapted to perform a decoding process on the demodulated OFDM signal, wherein the decoding process groups the received demodulated OFDM signal in word code, Y=(y1, y2), being representative of the received OFDM signal during two signaling periods (STBC) or two parallel channels (SFBC);
wherein the decoder is further adapted to decode the word code, Y=(y1, y2), in order to compute the transmitted symbols x1 and x2 in accordance with a formulation,
where H is a matrix, h1 and h2 are representative of the channel applicable to the transmission of x1 and x2 while {tilde over (h)}1′ and {tilde over (h)}2 are representative of the channel applicable to the transmission of −x2* and x1*, and wherein n1 and n2 are noise;
wherein the decoder is further adapted to apply a lattice reduction algorithm on said matrix H, H=(b1, b2), in order to transform said matrix H in a reduced matrix Hred, Hred=(b′1, b′2), comprising vectors that are substantially orthogonal, wherein the lattice reduction algorithm on each carrier k is based on an iterative algorithm that was initialized based on values (b1k-1, b2k-2) computed at a preceding iteration; and
wherein when the decoder applies the lattice reduction algorithm, the decoder is further configured to check a correlation by determining whether:
|Re {<b1, b2>}|≦½∥b12 and
|Im {<b1, b2>}|≦½∥b12., and, if the correlation is not true, then
replace b2 with
|Re {<b1, b2>}|≦½∥b12 and
|Im {<b1, b2>}|≦½∥b12., and, if the correlation is not true, then
replace b2 with
and check the correlation again; and
a detector adapted to detect received symbols using the reduced matrix Hred.
and check the correlation again; and
a detector adapted to detect received symbols using the reduced matrix Hred.
and check the correlation again; and
6. The receiver of claim 5, wherein the coding is a STBC coding such that the transmission of symbol x1 (resp. x2) and symbol −x2*(resp. x1*) occur during two consecutive OFDM frames.
7. The receiver of claim 5, wherein the coding is a SFBC coding such that the transmission of symbol x1 (resp. x2) and symbol −x2*(resp. x1*) occur through two consecutive carriers in one OFDM frame.
8. The receiver of claim 5, wherein the decoder is further adapted to test the length of b1 and b2.
9. The receiver of claim 8, wherein the decoder, when applying the lattice reduction algorithm, is applying the lattice reduction algorithm in combination with a ZF, MF, DF or PDF detector.
10. The receiver of claim 5, wherein the receiver is comprised within a mobile communication device adapted for use in an OFDM communication network. | 2020-09-26 01:49:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2676248252391815, "perplexity": 1694.2842146552632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400232211.54/warc/CC-MAIN-20200926004805-20200926034805-00788.warc.gz"} |
https://cs.stackexchange.com/questions/69050/find-if-a-string-is-periodic-using-a-suffix-tree-and-prove-it | # find if a string is periodic using a suffix tree, and prove it
A string s of length n is periodic if there is a string $u$ of length <= n/2 such that $s = u^ku'$, where $k$ is an integer >=2, $u^k$ is the concatenation of k copies of $u$ , and $u'$ is a prefix of u.
The smallest period of $s$ is the shortest $u$ (largest $k$) for which this holds.
For example, if s=ACACACACA, then k=4, u=AC, u'=A.
I want a linear-time algorithm for determining if s is periodic.
This SO answer gives a nice lead for a suffix tree solution. I want to use it, but I can't prove that it will work if and only if s is periodic.
Can you sketch it for me?
It's wrong. Consider $aaaabb$, so you'll find that $aaa$ repeat twice and consider it as an answer, while it is obviously not.
Check each suffix with length $n>l\ge\frac{n}{2}$. If it's also a prefix of $s$, $s$ is periodic and $u$ is the prefix with length $n-l$. If you can't find such suffix $s$ is not periodic. The proof is straight forward: $s[i]=s[i+n-l]$ for $1\le i \le l$. So you can easily prove that $s[n-l+1..2(n-l)]=u$, and so on. Since $n-l\le\frac{n}{2}$, $u$ repeat at least twice. The converse is similar, by proving $s[i]=s[i+len(u)]$.
Check each node on the path from root to the leaf which represents the whole string. If a node on this path represents a suffix with length $\ge\frac{n}{2}$ (except the whole string), we find the answer. (Note that each node on this path represents a prefix.)
• @ihadanny Actually the simplest way to define "periodic" is $s[i+P]=s[i]$ where $P$ is the period. In other word, $s$ keeps the same if we "shift" it by $P$. Now it's not hard to see we're matching a suffix with a prefix. – aaaaajack Jan 21 '17 at 18:26 | 2021-02-28 17:01:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8946191072463989, "perplexity": 239.34240934960178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361510.12/warc/CC-MAIN-20210228145113-20210228175113-00629.warc.gz"} |
http://quantwolf.com/calculators/binomialdistcalc.html | Binomial Distribution Calculator
This is used to calculate coin toss probabilities.
Enter probability of heads p = 0 < p < 1 Enter number of tosses n =
b(k) = $$\binom{n}{k}p^k(1-p)^{n-k}$$ Binomial Distribution b() Normal approx:
F(k) = $$\sum_{i=0}^{k}b(i)$$ Cumulative Distribution F() Normal approx:
1 - F(k) = $$\sum_{i=k+1}^{n}b(i)$$ 1 - F() Normal approx: | 2018-09-24 18:10:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5774255990982056, "perplexity": 2859.113914583082}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160620.68/warc/CC-MAIN-20180924165426-20180924185826-00009.warc.gz"} |
http://www.gravitydrift.com/category/mathematics/ | ## Fractal – Wikipedia, the free encyclopedia
Fractal – Wikipedia, the free encyclopedia
fractal is a mathematical set that has a fractal dimension that usually exceeds its topological dimension[1] and may fall between the integers.[2]
Fractals are typically self-similar patterns, where self-similar means they are “the same from near as from far”[3] Fractals may be exactly the same at every scale, or as illustrated in Figure 1, they may be nearly the same at different scales.[2][4][5][6] The definition of fractalgoes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself.[2]:166; 18[4][7]
.
## Dewey Decimal Classification – Wikipedia, the free encyclopedia
The DDC attempts to organize all knowledge into ten main classes. The ten main classes are each further subdivided into ten divisions, and each division into ten sections, giving ten main classes, 100 divisions and 1000 sections. | 2020-03-29 03:48:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8548470139503479, "perplexity": 2817.8440151514847}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493684.2/warc/CC-MAIN-20200329015008-20200329045008-00497.warc.gz"} |
http://docsrv.sco.com/cgi-bin/pod2html/Module::Build | DOC HOME SITE MAP MAN PAGES GNU INFO SEARCH
NAME
Module::Build - Build and install Perl modules
SYNOPSIS
Standard process for building & installing modules:
perl Build.PL
./Build
./Build test
./Build install
Or, if you're on a platform (like DOS or Windows) that doesn't require the ./'' notation, you can do this:
perl Build.PL
Build
Build test
Build install
DESCRIPTION
Module::Build is a system for building, testing, and installing Perl modules. It is meant to be an alternative to ExtUtils::MakeMaker. Developers may alter the behavior of the module through subclassing in a much more straightforward way than with MakeMaker. It also does not require a make on your system - most of the Module::Build code is pure-perl and written in a very cross-platform way. In fact, you don't even need a shell, so even platforms like MacOS (traditional) can use it fairly easily. Its only prerequisites are modules that are included with perl 5.6.0, and it works fine on perl 5.005 if you can install a few additional modules.
See MOTIVATIONS for more comparisons between ExtUtils::MakeMaker and Module::Build.
To install Module::Build, and any other module that uses Module::Build for its installation process, do the following:
perl Build.PL # 'Build.PL' script creates the 'Build' script
./Build # Need ./ to ensure we're using this "Build" script
./Build test # and not another one that happens to be in the PATH
./Build install
This illustrates initial configuration and the running of three 'actions'. In this case the actions run are 'build' (the default action), 'test', and 'install'. Other actions defined so far include:
build manifest
clean manpages
code pardist
config_data ppd
diff ppmdist
dist prereq_report
distcheck pure_install
distclean realclean
distdir retest
distmeta skipcheck
distsign test
disttest testall
docs testcover
fakeinstall testdb
help testpod
html testpodcoverage
install versioninstall
You can run the 'help' action for a complete list of actions.
GUIDE TO DOCUMENTATION
The documentation for Module::Build is broken up into three sections:
General Usage (the Module::Build manpage)
This is the document you are currently reading. It describes basic usage and background information. Its main purpose is to assist the user who wants to learn how to invoke and control Module::Build scripts at the command line.
Authoring Reference (the Module::Build::Authoring manpage)
This document describes the structure and organization of Module::Build, and the relevant concepts needed by authors who are writing Build.PL scripts for a distribution or controlling Module::Build processes programmatically.
API Reference (the Module::Build::API manpage)
This is a reference to the Module::Build API.
Cookbook (the Module::Build::Cookbook manpage)
This document demonstrates how to accomplish many common tasks. It covers general command line usage and authoring of Build.PL scripts. Includes working examples.
ACTIONS
There are some general principles at work here. First, each task when building a module is called an action''. These actions are listed above; they correspond to the building, testing, installing, packaging, etc., tasks.
Second, arguments are processed in a very systematic way. Arguments are always key=value pairs. They may be specified at perl Build.PL time (i.e. perl Build.PL destdir=/my/secret/place), in which case their values last for the lifetime of the Build script. They may also be specified when executing a particular action (i.e. Build test verbose=1), in which case their values last only for the lifetime of that command. Per-action command line parameters take precedence over parameters specified at perl Build.PL time.
The build process also relies heavily on the Config.pm module, and all the key=value pairs in Config.pm are available in
$self->{config}. If the user wishes to override any of the values in Config.pm, she may specify them like so: perl Build.PL --config cc=gcc --config ld=gcc The following build actions are provided by default. build [version 0.01] If you run the Build script without any arguments, it runs the build action, which in turn runs the code and docs actions. This is analogous to the MakeMaker 'make all' target. clean [version 0.01] This action will clean up any files that the build process may have created, including the blib/ directory (but not including the _build/ directory and the Build script itself). code [version 0.20] This action builds your codebase. By default it just creates a blib/ directory and copies any .pm and .pod files from your lib/ directory into the blib/ directory. It also compiles any .xs files from lib/ and places them in blib/. Of course, you need a working C compiler (probably the same one that built perl itself) for the compilation to work properly. The code action also runs any .PL files in your lib/ directory. Typically these create other files, named the same but without the .PL ending. For example, a file lib/Foo/Bar.pm.PL could create the file lib/Foo/Bar.pm. The .PL files are processed first, so any .pm files (or other kinds that we deal with) will get copied correctly. config_data [version 0.26] ... diff [version 0.14] This action will compare the files about to be installed with their installed counterparts. For .pm and .pod files, a diff will be shown (this currently requires a 'diff' program to be in your PATH). For other files like compiled binary files, we simply report whether they differ. A flags parameter may be passed to the action, which will be passed to the 'diff' program. Consult your 'diff' documentation for the parameters it will accept - a good one is -u: ./Build diff flags=-u dist [version 0.02] This action is helpful for module authors who want to package up their module for source distribution through a medium like CPAN. It will create a tarball of the files listed in MANIFEST and compress the tarball using GZIP compression. By default, this action will use the external tar and gzip executables on Unix-like platforms, and the Archive::Tar module elsewhere. However, you can force it to use whatever executable you want by supplying an explicit tar (and optional gzip) parameter: ./Build dist --tar C:\path\to\tar.exe --gzip C:\path\to\zip.exe distcheck [version 0.05] Reports which files are in the build directory but not in the MANIFEST file, and vice versa. (See manifest for details.) distclean [version 0.05] Performs the 'realclean' action and then the 'distcheck' action. distdir [version 0.05] Creates a distribution directory'' named $dist_name-$dist_version (if that directory already exists, it will be removed first), then copies all the files listed in the MANIFEST file to that directory. This directory is what the distribution tarball is created from. distmeta [version 0.21] Creates the META.yml file that describes the distribution. META.yml is a file containing various bits of metadata'' about the distribution. The metadata includes the distribution name, version, abstract, prerequisites, license, and various other data about the distribution. This file is created as META.yml in YAML format. It is recommended that the YAML module be installed to create it. If the YAML module is not installed, an internal module supplied with Module::Build will be used to write the META.yml file, and this will most likely be fine. META.yml file must also be listed in MANIFEST - if it's not, a warning will be issued. The current version of the META.yml specification can be found at http://module-build.sourceforge.net/META-spec-current.html distsign [version 0.16] Uses Module::Signature to create a SIGNATURE file for your distribution, and adds the SIGNATURE file to the distribution's MANIFEST. disttest [version 0.05] Performs the 'distdir' action, then switches into that directory and runs a perl Build.PL, followed by the 'build' and 'test' actions in that directory. docs [version 0.20] This will generate documentation (e.g. Unix man pages and html documents) for any installable items under blib/ that contain POD. If there are no bindoc or libdoc installation targets defined (as will be the case on systems that don't support Unix manpages) no action is taken for manpages. If there are no binhtml or libhtml installation targets defined no action is taken for html documents. fakeinstall [version 0.02] This is just like the install action, but it won't actually do anything, it will just report what it would have done if you had actually run the install action. help [version 0.03] This action will simply print out a message that is meant to help you use the build process. It will show you a list of available build actions too. With an optional argument specifying an action name (e.g. Build help test), the 'help' action will show you any POD documentation it can find for that action. html [version 0.26] This will generate HTML documentation for any binary or library files under blib/ that contain POD. The HTML documentation will only be installed if the install paths can be determined from values in Config.pm. You can also supply or override install paths on the command line by specifying install_path values for the binhtml and/or libhtml installation targets. install [version 0.01] This action will use ExtUtils::Install to install the files from blib/ into the system. See INSTALL PATHS for details about how Module::Build determines where to install things, and how to influence this process. If you want the installation process to look around in @INC for other versions of the stuff you're installing and try to delete it, you can use the uninst parameter, which tells ExtUtils::Install to do so: ./Build install uninst=1 This can be a good idea, as it helps prevent multiple versions of a module from being present on your system, which can be a confusing situation indeed. manifest [version 0.05] This is an action intended for use by module authors, not people installing modules. It will bring the MANIFEST up to date with the files currently present in the distribution. You may use a MANIFEST.SKIP file to exclude certain files or directories from inclusion in the MANIFEST. MANIFEST.SKIP should contain a bunch of regular expressions, one per line. If a file in the distribution directory matches any of the regular expressions, it won't be included in the MANIFEST. The following is a reasonable MANIFEST.SKIP starting point, you can add your own stuff to it: ^_build ^Build$
^blib
~$\.bak$
^MANIFEST\.SKIP$CVS See the distcheck and skipcheck actions if you want to find out what the manifest action would do, without actually doing anything. manpages [version 0.28] This will generate man pages for any binary or library files under blib/ that contain POD. The man pages will only be installed if the install paths can be determined from values in Config.pm. You can also supply or override install paths by specifying there values on the command line with the bindoc and libdoc installation targets. pardist [version 0.2806] Generates a PAR binary distribution for use with the PAR manpage or the PAR::Dist manpage. It requires that the PAR::Dist module (version 0.17 and up) is installed on your system. ppd [version 0.20] Build a PPD file for your distribution. This action takes an optional argument codebase which is used in the generated ppd file to specify the (usually relative) URL of the distribution. By default, this value is the distribution name without any path information. Example: ./Build ppd --codebase "MSWin32-x86-multi-thread/Module-Build-0.21.tar.gz" ppmdist [version 0.23] Generates a PPM binary distribution and a PPD description file. This action also invokes the 'ppd' action, so it can accept the same codebase argument described under that action. This uses the same mechanism as the dist action to tar & zip its output, so you can supply tar and/or gzip parameters to affect the result. prereq_report [version 0.28] This action prints out a list of all prerequisites, the versions required, and the versions actually installed. This can be useful for reviewing the configuration of your system prior to a build, or when compiling data to send for a bug report. pure_install [version 0.28] This action is identical to the install action. In the future, though, if install starts writing to the file file$(INSTALLARCHLIB)/perllocal.pod, pure_install won't, and that will be the only difference between them.
realclean
[version 0.01]
This action is just like the clean action, but also removes the _build directory and the Build script. If you run the realclean action, you are essentially starting over, so you will have to re-create the Build script again.
retest
[version 0.2806]
This is just like the test action, but doesn't actually build the distribution first, and doesn't add blib/ to the load path, and therefore will test against a previously installed version of the distribution. This can be used to verify that a certain installed distribution still works, or to see whether newer versions of a distribution still pass the old regression tests, and so on.
skipcheck
[version 0.05]
Reports which files are skipped due to the entries in the MANIFEST.SKIP file (See manifest for details)
test
[version 0.01]
This will use Test::Harness to run any regression tests and report their results. Tests can be defined in the standard places: a file called test.pl in the top-level directory, or several files ending with .t in a t/ directory.
If you want tests to be 'verbose', i.e. show details of test execution rather than just summary information, pass the argument verbose=1.
If you want to run tests under the perl debugger, pass the argument debugger=1.
In addition, if a file called visual.pl exists in the top-level directory, this file will be executed as a Perl script and its output will be shown to the user. This is a good place to put speed tests or other tests that don't use the Test::Harness format for output.
To override the choice of tests to run, you may pass a test_files argument whose value is a whitespace-separated list of test scripts to run. This is especially useful in development, when you only want to run a single test to see whether you've squashed a certain bug yet:
./Build test --test_files t/something_failing.t
You may also pass several test_files arguments separately:
./Build test --test_files t/one.t --test_files t/two.t
or use a glob()-style pattern:
./Build test --test_files 't/01-*.t'
testall
[verion 0.2807]
[Note: the 'testall' action and the code snippets below are currently in alpha stage, see http://www.nntp.perl.org/group/perl.module.build/2007/03/msg584.html ]
Runs the test action plus each of the test$type actions defined by the keys of the test_types parameter. Currently, you need to define the ACTION_test$type method yourself and enumerate them in the test_types parameter.
my $mb = Module::Build->subclass( code => q( sub ACTION_testspecial { shift->generic_test(type => 'special'); } sub ACTION_testauthor { shift->generic_test(type => 'author'); } ) )->new( ... test_types => { special => '.st', author => '.at', }, ... testcover [version 0.26] Runs the test action using Devel::Cover, generating a code-coverage report showing which parts of the code were actually exercised during the tests. To pass options to Devel::Cover, set the $DEVEL_COVER_OPTIONS environment variable:
DEVEL_COVER_OPTIONS=-ignore,Build ./Build testcover
testdb
[version 0.05]
This is a synonym for the 'test' action with the debugger=1 argument.
testpod
[version 0.25]
This checks all the files described in the docs action and produces Test::Harness-style output. If you are a module author, this is useful to run before creating a new release.
testpodcoverage
[version 0.28]
This checks the pod coverage of the distribution and produces Test::Harness-style output. If you are a module author, this is useful to run before creating a new release.
versioninstall
[version 0.16]
** Note: since only.pm is so new, and since we just recently added support for it here too, this feature is to be considered experimental. **
If you have the only.pm module installed on your system, you can use this action to install a module into the version-specific library trees. This means that you can have several versions of the same module installed and use a specific one like this:
use only MyModule => 0.55;
To override the default installation libraries in only::config, specify the versionlib parameter when you run the Build.PL script:
perl Build.PL --versionlib /my/version/place/
To override which version the module is installed as, specify the versionlib parameter when you run the Build.PL script:
perl Build.PL --version 0.50
See the only.pm documentation for more information on version-specific installs.
OPTIONS
Command Line Options
The following options can be used during any invocation of Build.PL or the Build script, during any action. For information on other options specific to an action, see the documentation for the respective action.
NOTE: There is some preliminary support for options to use the more familiar long option style. Most options can be preceded with the -- long option prefix, and the underscores changed to dashes (e.g. --use-rcfile). Additionally, the argument to boolean options is optional, and boolean options can be negated by prefixing them with 'no' or 'no-' (e.g. --noverbose or --no-verbose).
quiet
Suppress informative messages on output.
use_rcfile
Load the ~/.modulebuildrc option file. This option can be set to false to prevent the custom resource file from being loaded.
verbose
Display extra information about the Build on output.
allow_mb_mismatch
Suppresses the check upon startup that the version of Module::Build we're now running under is the same version that was initially invoked when building the distribution (i.e. when the Build.PL script was first run). Use with caution.
Default Options File (.modulebuildrc)
[version 0.28]
When Module::Build starts up, it will look first for a file, $ENV{HOME}/.modulebuildrc. If it's not found there, it will look in the the .modulebuildrc file in the directories referred to by the environment variables HOMEDRIVE + HOMEDIR, USERPROFILE, APPDATA, WINDIR, SYS$LOGIN. If the file exists, the options specified there will be used as defaults, as if they were typed on the command line. The defaults can be overridden by specifying new values on the command line.
The action name must come at the beginning of the line, followed by any amount of whitespace and then the options. Options are given the same as they would be on the command line. They can be separated by any amount of whitespace, including newlines, as long there is whitespace at the beginning of each continued line. Anything following a hash mark (#) is considered a comment, and is stripped before parsing. If more than one line begins with the same action name, those lines are merged into one set of options.
Besides the regular actions, there are two special pseudo-actions: the key * (asterisk) denotes any global options that should be applied to all actions, and the key 'Build_PL' specifies options to be applied when you invoke perl Build.PL.
* verbose=1 # global options
diff flags=-u
install --install_base /home/ken
--install_path html=/home/ken/docs/html
If you wish to locate your resource file in a different location, you can set the environment variable 'MODULEBUILDRC' to the complete absolute path of the file containing your options.
INSTALL PATHS
[version 0.19]
When you invoke Module::Build's build action, it needs to figure out where to install things. The nutshell version of how this works is that default installation locations are determined from Config.pm, and they may be overridden by using the install_path parameter. An install_base parameter lets you specify an alternative installation root like /home/foo, and a destdir lets you specify a temporary installation directory like /tmp/install in case you want to create bundled-up installable packages.
Natively, Module::Build provides default installation locations for the following types of installable items:
lib
Usually pure-Perl module files ending in .pm.
arch
Architecture-dependent'' module files, usually produced by compiling XS, Inline, or similar code.
script
Programs written in pure Perl. In order to improve reuse, try to make these as small as possible - put the code into modules whenever possible.
bin
Architecture-dependent'' executable programs, i.e. compiled C code or something. Pretty rare to see this in a perl distribution, but it happens.
bindoc
Documentation for the stuff in script and bin. Usually generated from the POD in those files. Under Unix, these are manual pages belonging to the 'man1' category.
libdoc
Documentation for the stuff in lib and arch. This is usually generated from the POD in .pm files. Under Unix, these are manual pages belonging to the 'man3' category.
binhtml
This is the same as bindoc above, but applies to html documents.
libhtml
This is the same as bindoc above, but applies to html documents.
Four other parameters let you control various aspects of how installation paths are determined:
installdirs
The default destinations for these installable things come from entries in your system's Config.pm. You can select from three different sets of default locations by setting the installdirs parameter as follows:
'installdirs' set to:
core site vendor
uses the following defaults from Config.pm:
lib => installprivlib installsitelib installvendorlib
arch => installarchlib installsitearch installvendorarch
script => installscript installsitebin installvendorbin
bin => installbin installsitebin installvendorbin
bindoc => installman1dir installsiteman1dir installvendorman1dir
libdoc => installman3dir installsiteman3dir installvendorman3dir
binhtml => installhtml1dir installsitehtml1dir installvendorhtml1dir [*]
libhtml => installhtml3dir installsitehtml3dir installvendorhtml3dir [*]
* Under some OS (eg. MSWin32) the destination for html documents is
determined by the C<Config.pm> entry C<installhtmldir>.
The default value of installdirs is site''. If you're creating vendor distributions of module packages, you may want to do something like this:
perl Build.PL --installdirs vendor
or
./Build install --installdirs vendor
If you're installing an updated version of a module that was included with perl itself (i.e. a core module''), then you may set installdirs to core'' to overwrite the module in its present location.
(Note that the 'script' line is different from MakeMaker - unfortunately there's no such thing as installsitescript'' or installvendorscript'' entry in Config.pm, so we use the installsitebin'' and installvendorbin'' entries to at least get the general location right. In the future, if Config.pm adds some more appropriate entries, we'll start using those.)
install_path
Once the defaults have been set, you can override them.
On the command line, that would look like this:
perl Build.PL --install_path lib=/foo/lib --install_path arch=/foo/lib/arch
or this:
./Build install --install_path lib=/foo/lib --install_path arch=/foo/lib/arch
install_base
You can also set the whole bunch of installation paths by supplying the install_base parameter to point to a directory on your system. For instance, if you set install_base to /home/ken'' on a Linux system, you'll install as follows:
lib => /home/ken/lib/perl5
arch => /home/ken/lib/perl5/i386-linux
script => /home/ken/bin
bin => /home/ken/bin
bindoc => /home/ken/man/man1
libdoc => /home/ken/man/man3
binhtml => /home/ken/html
libhtml => /home/ken/html
Note that this is different from how MakeMaker's PREFIX parameter works. install_base just gives you a default layout under the directory you specify, which may have little to do with the installdirs=site layout.
The exact layout under the directory you specify may vary by system - we try to do the sensible'' thing on each platform.
destdir
If you want to install everything into a temporary directory first (for instance, if you want to create a directory tree that a package manager like rpm or dpkg could create a package from), you can use the destdir parameter:
perl Build.PL --destdir /tmp/foo
or
./Build install --destdir /tmp/foo
This will effectively install to /tmp/foo/$sitelib'', /tmp/foo/$sitearch'', and the like, except that it will use File::Spec to make the pathnames work correctly on whatever platform you're installing on.
prefix
Provided for compatibility with ExtUtils::MakeMaker's PREFIX argument. prefix should be used when you wish Module::Build to install your modules, documentation and scripts in the same place ExtUtils::MakeMaker does.
The following are equivalent.
perl Build.PL --prefix /tmp/foo
perl Makefile.PL PREFIX=/tmp/foo
Because of the very complex nature of the prefixification logic, the behavior of PREFIX in MakeMaker has changed subtly over time. Module::Build's --prefix logic is equivalent to the PREFIX logic found in ExtUtils::MakeMaker 6.30.
If you do not need to retain compatibility with ExtUtils::MakeMaker or are starting a fresh Perl installation we recommand you use install_base instead (and INSTALL_BASE in ExtUtils::MakeMaker). See Instaling in the same location as ExtUtils::MakeMaker in the Module::Build::Cookbook manpage for further information.
MOTIVATIONS
There are several reasons I wanted to start over, and not just fix what I didn't like about MakeMaker:
• I don't like the core idea of MakeMaker, namely that make should be involved in the build process. Here are my reasons:
+
When a person is installing a Perl module, what can you assume about their environment? Can you assume they have make? No, but you can assume they have some version of Perl.
• +
When a person is writing a Perl module for intended distribution, can you assume that they know how to build a Makefile, so they can customize their build process? No, but you can assume they know Perl, and could customize that way.
For years, these things have been a barrier to people getting the build/install process to do what they want.
• There are several architectural decisions in MakeMaker that make it very difficult to customize its behavior. For instance, when using MakeMaker you do use ExtUtils::MakeMaker, but the object created in WriteMakefile() is actually blessed into a package name that's created on the fly, so you can't simply subclass ExtUtils::MakeMaker. There is a workaround MY package that lets you override certain MakeMaker methods, but only certain explicitly preselected (by MakeMaker) methods can be overridden. Also, the method of customization is very crude: you have to modify a string containing the Makefile text for the particular target. Since these strings aren't documented, and can't be documented (they take on different values depending on the platform, version of perl, version of MakeMaker, etc.), you have no guarantee that your modifications will work on someone else's machine or after an upgrade of MakeMaker or perl.
• It is risky to make major changes to MakeMaker, since it does so many things, is so important, and generally works. Module::Build is an entirely separate package so that I can work on it all I want, without worrying about backward compatibility.
• Finally, Perl is said to be a language for system administration. Could it really be the case that Perl isn't up to the task of building and installing software? Even if that software is a bunch of stupid little .pm files that just need to be copied from one place to another? My sense was that we could design a system to accomplish this in a flexible, extensible, and friendly manner. Or die trying.
TO DO
The current method of relying on time stamps to determine whether a derived file is out of date isn't likely to scale well, since it requires tracing all dependencies backward, it runs into problems on NFS, and it's just generally flimsy. It would be better to use an MD5 signature or the like, if available. See cons for an example.
- append to perllocal.pod
- add a 'plugin' functionality
AUTHOR
Ken Williams <kwilliams@cpan.org>
Development questions, bug reports, and patches should be sent to the Module-Build mailing list at <module-build@perl.org>.
Bug reports are also welcome at <http://rt.cpan.org/NoAuth/Bugs.html?Dist=Module-Build>.
The latest development version is available from the Subversion repository at <https://svn.perl.org/modules/Module-Build/trunk/> | 2020-01-18 20:13:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5178208351135254, "perplexity": 3886.53127379005}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593937.27/warc/CC-MAIN-20200118193018-20200118221018-00294.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/proc.2013.2013.217 | # American Institute of Mathematical Sciences
2013, 2013(special): 217-226. doi: 10.3934/proc.2013.2013.217
## The role of lower and upper solutions in the generalization of Lidstone problems
1 Centro de Investigação em Matemática e Aplicações da U.E. (CIMA-CE), Rua Romão Ramalho 59, 7000-671 Évora 2 School of Sciences and Technology. Department of Mathematics, University of Évora, Research Center in Mathematics and Applications of the University of Évora, (CIMA-UE), Rua Romão Ramalho, 59, 7000-671 Évora, Portugal
Received September 2012 Revised February 2013 Published November 2013
In this the authors consider the nonlinear fully equation
\begin{equation*} u^{(iv)} (x) + f( x,u(x) ,u^{\prime}(x) ,u^{\prime \prime}(x) ,u^{\prime \prime \prime}(x) ) = 0 \end{equation*} for $x\in [ 0,1] ,$ where $f:[ 0,1] \times \mathbb{R} ^{4} \to \mathbb{R}$ is a continuous functions, coupled with the Lidstone boundary conditions, \begin{equation*} u(0) = u(1) = u^{\prime \prime}(0) = u^{\prime \prime }(1) = 0. \end{equation*}
They discuss how different definitions of lower and upper solutions can generalize existence and location results for boundary value problems with Lidstone boundary data. In addition, they replace the usual bilateral Nagumo condition by a one-sided condition, allowing the nonlinearity to be unbounded$.$ An example will show that this unilateral condition generalizes the usual one and stress the potentialities of the new definitions.
Citation: João Fialho, Feliz Minhós. The role of lower and upper solutions in the generalization of Lidstone problems. Conference Publications, 2013, 2013 (special) : 217-226. doi: 10.3934/proc.2013.2013.217
##### References:
show all references
##### References:
[1] Rubén Figueroa, Rodrigo López Pouso, Jorge Rodríguez–López. Existence and multiplicity results for second-order discontinuous problems via non-ordered lower and upper solutions. Discrete & Continuous Dynamical Systems - B, 2020, 25 (2) : 617-633. doi: 10.3934/dcdsb.2019257 [2] Alberto Cabada, João Fialho, Feliz Minhós. Non ordered lower and upper solutions to fourth order problems with functional boundary conditions. Conference Publications, 2011, 2011 (Special) : 209-218. doi: 10.3934/proc.2011.2011.209 [3] Luis Barreira, Davor Dragičević, Claudia Valls. From one-sided dichotomies to two-sided dichotomies. Discrete & Continuous Dynamical Systems, 2015, 35 (7) : 2817-2844. doi: 10.3934/dcds.2015.35.2817 [4] Wolf-Jüergen Beyn, Janosch Rieger. The implicit Euler scheme for one-sided Lipschitz differential inclusions. Discrete & Continuous Dynamical Systems - B, 2010, 14 (2) : 409-428. doi: 10.3934/dcdsb.2010.14.409 [5] Ana Maria Bertone, J.V. Goncalves. Discontinuous elliptic problems in $R^N$: Lower and upper solutions and variational principles. Discrete & Continuous Dynamical Systems, 2000, 6 (2) : 315-328. doi: 10.3934/dcds.2000.6.315 [6] Charles Fulton, David Pearson, Steven Pruess. Characterization of the spectral density function for a one-sided tridiagonal Jacobi matrix operator. Conference Publications, 2013, 2013 (special) : 247-257. doi: 10.3934/proc.2013.2013.247 [7] Piermarco Cannarsa, Vilmos Komornik, Paola Loreti. One-sided and internal controllability of semilinear wave equations with infinitely iterated logarithms. Discrete & Continuous Dynamical Systems, 2002, 8 (3) : 745-756. doi: 10.3934/dcds.2002.8.747 [8] Kengo Matsumoto. K-groups of the full group actions on one-sided topological Markov shifts. Discrete & Continuous Dynamical Systems, 2013, 33 (8) : 3753-3765. doi: 10.3934/dcds.2013.33.3753 [9] Chunyan Ji, Yang Xue, Yong Li. Periodic solutions for SDEs through upper and lower solutions. Discrete & Continuous Dynamical Systems - B, 2020, 25 (12) : 4737-4754. doi: 10.3934/dcdsb.2020122 [10] Luisa Malaguti, Cristina Marcelli. Existence of bounded trajectories via upper and lower solutions. Discrete & Continuous Dynamical Systems, 2000, 6 (3) : 575-590. doi: 10.3934/dcds.2000.6.575 [11] Massimo Tarallo, Zhe Zhou. Limit periodic upper and lower solutions in a generic sense. Discrete & Continuous Dynamical Systems, 2018, 38 (1) : 293-309. doi: 10.3934/dcds.2018014 [12] Alberto Boscaggin, Fabio Zanolin. Subharmonic solutions for nonlinear second order equations in presence of lower and upper solutions. Discrete & Continuous Dynamical Systems, 2013, 33 (1) : 89-110. doi: 10.3934/dcds.2013.33.89 [13] Rim Bourguiba, Rosana Rodríguez-López. Existence results for fractional differential equations in presence of upper and lower solutions. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1723-1747. doi: 10.3934/dcdsb.2020180 [14] Alessandro Fonda, Rodica Toader. A dynamical approach to lower and upper solutions for planar systems "To the memory of Massimo Tarallo". Discrete & Continuous Dynamical Systems, 2021, 41 (8) : 3683-3708. doi: 10.3934/dcds.2021012 [15] Juntang Ding, Xuhui Shen. Upper and lower bounds for the blow-up time in quasilinear reaction diffusion problems. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4243-4254. doi: 10.3934/dcdsb.2018135 [16] Anne Mund, Christina Kuttler, Judith Pérez-Velázquez. Existence and uniqueness of solutions to a family of semi-linear parabolic systems using coupled upper-lower solutions. Discrete & Continuous Dynamical Systems - B, 2019, 24 (10) : 5695-5707. doi: 10.3934/dcdsb.2019102 [17] Christoph Kawan. Upper and lower estimates for invariance entropy. Discrete & Continuous Dynamical Systems, 2011, 30 (1) : 169-186. doi: 10.3934/dcds.2011.30.169 [18] Armengol Gasull, Hector Giacomini, Joan Torregrosa. Explicit upper and lower bounds for the traveling wave solutions of Fisher-Kolmogorov type equations. Discrete & Continuous Dynamical Systems, 2013, 33 (8) : 3567-3582. doi: 10.3934/dcds.2013.33.3567 [19] Nakao Hayashi, Chunhua Li, Pavel I. Naumkin. Upper and lower time decay bounds for solutions of dissipative nonlinear Schrödinger equations. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2089-2104. doi: 10.3934/cpaa.2017103 [20] Dongfeng Zhang, Junxiang Xu. On elliptic lower dimensional tori for Gevrey-smooth Hamiltonian systems under Rüssmann's non-degeneracy condition. Discrete & Continuous Dynamical Systems, 2006, 16 (3) : 635-655. doi: 10.3934/dcds.2006.16.635
Impact Factor: | 2021-09-27 16:16:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5470649600028992, "perplexity": 6214.927199986527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058456.86/warc/CC-MAIN-20210927151238-20210927181238-00719.warc.gz"} |
https://uta-ir.tdl.org/uta-ir/handle/10106/25174/recent-submissions | Now showing items 1-20 of 83
• #### The Art Of Microchannel Molding In Microscope Glass Slides
This research is an effort to develop a new process by combining engineering principles with the history of glass art molding to develop clear micro-level test features in glass, for bio-medical and drug experiments dealing ...
• #### IMPLEMENTATION PROCESS FOR AUTOMATED DATA ANALYSIS IN MINERAL EXTRACTION COMPANIES
(2015-11-24)
The need to determine knowledge from increasing amounts of information and raw data is a current and ongoing goal [1]. As technology continues to advance in the ability to collect and save more and more data, companies ...
• #### Evaluating The Economic Impacts Of Pipeline Useage On The Indian Oil and Gas Supply Chain.
The objective of this dissertation is to find the minimum supply chain cost for the Indian oil and gas industry. The problem is solved by introducing a mixed – integer programming model which will avail in taking the ...
• #### Evaluating The Economic Impacts Of Pipeline Usage On Texas Oil & Gas Supply Chain
The objective of this dissertation is to find the minimum supply chain cost for the Texas oil and gas industry, when pipeline is used as the major mode of transporting oil. The problem is solved, by introducing a mixed – ...
• #### Embedding Integrated RFID Sensors Into Fiber Reinforced Plastics During The Manufacturing Process
This research evaluates the impact of embedding a Radio Frequency Identification (RFID) tag into the structure of a fiber reinforced polymer (FRP). The first portion of the research evaluates the mechanical impacts of ...
• #### Build-to-order Supply Chain Efficiency Using Stochastic Frontier Analysis
The Build-to-Order supply chain (BTO-SC) is one agile supply chain that has received great attention in research and industry. Flexibility and responsiveness in mass customization has become a major objective of many ...
• #### Two-stage Stochastic Programming In Adaptive Interdisciplinary Pain Management
Chronic pain is a global health problem. About 100 million adult Americans suffering from chronic pain, which costs $560 -$635 billion per year. The Eugene McDermott Center for Pain Management at the University of Texas ...
• #### Evaluate The Impact Of Sustainability And Pipeline Quality On Global Crude Oil Supply Chain
United states is one of the largest crude oil producer in the world but the consumption rate is higher than production hence united states imports oil from varies part of world depend on different criteria. The objective ...
• #### High-dimensional Adaptive Dynamic Programming With Mixed Integer Linear Programming
Dynamic programming (DP, Bellman 1957) is a classic mathematical programming approach to solve multistage decision problems. The “Bellman equation” uses a recursive concept that includes both the current contribution and ...
• #### Stationary And Non-stationary Time Series Prediction Using State Space Model And Pattern-based Approach
The motion-adaptive radiotherapy techniques are promising to deliver ablative radiation doses to tumor with minimal normal tissue exposure by accounting for real-time tumor movement. However, a major challenge of successful ...
• #### An Exploration And Exploitation Pareto Approach To Surrogate Optimization
The experiments or simulations conducted by computers can be a tedious task,requiring substantial computational time. To find a global solution using a computerexperiments process, we usually need to perform many function ...
• #### Evaluating The Impact Of Auto Id Technologies On Oil And Gas ERP System Data Accuracy And Reliability
Most of Enterprise Resource Planning (ERP) systems currently use manual entry or mass upload methods to enter data collected from warehouse operations. These methods are highly unreliable and prone to manual error affecting ...
• #### A Systems Thinking Approach To Apply Water Sustainability In Hospitals
One of the most critical challenges the world is facing is to ensure everyone has access to an adequate and quality supply of water. As the population increases, the demand for water rises. Water needs to be sustained for ...
• #### Unmanned Aerial Vehicle Routing In The Presence Of Threats
The use of Unmanned Aerial Vehicles (UAVs) and the importance of its role have evolved and increased recently in both civilian and military operations. In this research, we study the routing of Unmanned Aerial Vehicles ...
• #### Joint Kinematics, Muscle Activity And Postural Strain For Finger- Intensive Operation Of Small Hand- Held Devices
Background: Extensive movement of fingers while playing video games, and while browsing or texting in a smartphone often result in medically recognized repetitive strain injuries such as "PlayStation thumb" or "Blackberry ...
• #### Evaluating The Impact Of A Shared Pharmaceutical Supply Chain Model To Minimize Counterfeit Drugs, Diverted Drugs, And Drug Shortages
The pharmaceutical supply chain in the United States of America (USA) is getting complicated and is often not controllable due to a globally open market, increasing online market, and many illegal activities. Consumers who ...
• #### Facility Capital Equipment And Labor Decision Support System Using A Discrete-event Simulation And Bottleneck Detection Approach
(Industrial & Manufacturing Engineering, 2014-07-14)
Market demand is constantly changing. Therefore, it is critical for companies to be flexible and willing to adapt in order to remain competitive. This study will evaluate bottleneck detection techniques that have been ...
• #### Inventory Pooling In Petroleum Upstream Logistics Network
(Industrial & Manufacturing Engineering, 2014-07-14)
The petroleum industry is heading toward the era of efficiency and cost reduction. Oilfield service companies have to raise their efficiency to stay competitive. This dissertation explores the efficiency issues facing an ...
• #### A Design And Analysis Of Computer Experiments-based Approach To Approximate Infinite Horizon Dynamic Programming With Continuous State Spaces
(Industrial & Manufacturing Engineering, 2014-07-14)
Dynamic programming (DP) is an optimization approach that transforms a complex problem into a sequence of simpler sub-problems at different points in stage. The original DP approach used Bellman's equation to compute the ...
• #### Phase I Monitoring With Applications In Manufacturing And Healthcare
(Industrial & Manufacturing Engineering, 2014-07-14)
This research develops statistical methods for quality monitoring in complex systems. Quality monitoring typically consists of two phases called Phase I analysis (or offline monitoring) and Phase II analysis (or online ... | 2018-01-19 15:02:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2259969860315323, "perplexity": 3544.7666082849914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888041.33/warc/CC-MAIN-20180119144931-20180119164931-00163.warc.gz"} |
http://mathoverflow.net/questions/66656/c-0-semigroups-applications?sort=votes | $C_0$-semigroups applications
My graduation thesis was about stability theorems for $C_0$-semigroups (see the Wikipedia article for the definitions: http://en.wikipedia.org/wiki/C0-semigroup). I would like to know if there is some aplicability of the stability theorems I know in this field. The only applications I found for my thesis were about the Hille-Yosida theorem and some of its applications to existence and uniqueness of solutions of partial differential equations.
I will not put any names to my theorems, since maybe they are not known to the world as my teachers name them. Here are some of them:
The $C_0$-semigroup $\{T(t)\}_{t \geq 0}$ is exponentially stable if and only if there exists $p \geq 1$ such that $\int_0^\infty \|T(t)\|^pdt <\infty$.
The $C_0$-semigroup $\{T(t)\}_{t \geq 0}$ is exponentially stable if and only if it satisfies the following condition: For any $f \in \mathcal{C}$ it follows that $x_f \in \mathcal{C}$ where $x_f: \Bbb{R}_+ \to X,\ x_f(t)=\int_0^t T(t-s)f(s)ds$, and $\mathcal{C} = \{ f : \Bbb{R}_+ \to X,\ f \text{ continuous and bounded } \}$.
The last theorem can be formulated and proved in some cases for $(L^p,L^q)$ spaces with $(p,q) \neq (1,\infty)$. A more general concept, dichotomy can be formulated (the space splits into two spaces, on one of them there is stability, and on the other one there is instability.
All these sound very nice, and have quite beautiful proofs, but are they applicable to some branches of applied math, such as ordinary or partial differential equations, or they are just pure math, and thats it?
-
I don't know if I have the right tags. – Beni Bogosel Jun 1 '11 at 12:16 | 2015-11-28 10:04:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183973670005798, "perplexity": 172.9935791589801}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451872.11/warc/CC-MAIN-20151124205411-00136-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://cs.stackexchange.com/questions/66144/group-points-by-given-shapes | Group points by given shapes
I am using a sensing board able to detect magnetic signals between the board and a display.
I have a set of objects that are represented (each of them) by a unique set of points (magnets) with a particular shape. For example: object #1 is made by three points that form an equilateral triangle with side length 1cm; object #2 is made by three points that form a right triangle with sides 3cm, 4cm, 5cm; object #3 is made by three aligned points with distance 2cm; and so on. I can have a multiplicity of objects with unique patterns.
Now I have a list of points with the coordinates w.r.t. the Cartesian plane, and I need to match them referring to the patterns I got from the objects. I also know that every point must be matched, therefore I can minimize the overlapping errors. In practice, every point in the set can belong to maximum one object, and at the same time also it must belong to an object of the initial set.
Any idea on how to do that in an efficient way?
• Does your representation have accurate scale (can you tell 3 cm distance from 4 cm, or are we talking about detecting ratios 3:4:5)? How efficient does this need to be (how big is the number of points N, and would a $N^3$ algorithm be too slow)? – Karolis Juodelė Nov 17 '16 at 12:41
• The representation is accurate enough to distinguish between 3cm and 4cm, so we do not need to guess the 3:4:5 ratios independently from the distances. I am using this in Processing (processing.org) therefore it needs to be fast as this calculations need to be done tens of times per second. $N$ should not be greater than 20, I think that $N^3$ is enough. Thank you! – Alessio Palmero Aprosio Nov 17 '16 at 13:17
1 Answer
In the general case a problem like this is NP, however in the vast majority of real cases it should be easy.
20 points make 1140 triangles so it shouldn't be hard to pick out the triangles most similar to your basic shapes (unless the shapes can be more complicated). A little ugly backtracking may be needed when the top scoring triangles overlap.
Also, if most magnets move continuously, you can easily map old points to new points and old triangles to new triangles.
What I'm talking about are fairly obvious methods. There may be smarter ways to do this, but you don't necessarily need them. | 2020-01-29 22:07:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4636048972606659, "perplexity": 466.38597398965254}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251802249.87/warc/CC-MAIN-20200129194333-20200129223333-00235.warc.gz"} |
https://math.stackexchange.com/questions/4198476/do-matrices-of-this-type-form-a-group | # Do matrices of this type form a group? [closed]
I have a set of matrices of the form $$A = \begin{pmatrix} a & b\\ -b & a \end{pmatrix}$$ where $$a,b \in \mathbb{C}$$. They also have the property $$A A^{T} = (a^{2} + b^{2})\mathbb{I}$$ We do know that they form a group with two conditions:
• $$a^{2} + b^{2} \neq 0$$
• $${|a| }^{2} + {|b|}^{2} \neq 0$$
What group do these matrices form?
• If $a,b \in \mathbb{R}$, then the matrix $A$ represents the complex number $a+bi$. Jul 14, 2021 at 18:11
• why are there two reopen votes? Jul 15, 2021 at 1:14
The general bicomplex number can be represented by the matrix $${w\: iz \choose iz \:w}$$ which has determinant $$w^2+z^2$$.
Use a different basis and the matrix becomes $${a\:\: b \choose -b \:a}$$ which has determinant $$a^2+b^2$$ which form a complex algebra which is a ring because they are closed under addition and multiplication with basis $$\,\{I,J\}\,$$ where $$I^2=I,\,IJ=JI=J,\,J^2=-I.$$ One matrix representation is $$I={1\:0 \choose 0\:1},\quad J={0\:1 \choose -1\:0}$$ and another is $$I={1\:0 \choose 0\:1},\quad J={0\:i \choose i\:0} .$$
The bicomplex numbers form a commutative algebra over $$\,\mathbb{C}\,$$ of dimension two, which is isomorphic to the direct sum of algebras $$\,\mathbb{C}\oplus\mathbb{C}.$$
The isomorphism is similar to the case of split-complex numbers $$aI+bJ\;\;\leftrightarrow\;\;(a+b\,i,a-b\,i)$$ where addition and multiplication are defined componentwise which implies that the group of units is $$\,\mathbb C^{\times}\times\mathbb C^{\times}$$ with all numbers of the form $$\,a\,I\pm a\,i\,J\,$$ being the zero divisors. | 2022-08-18 02:00:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040812849998474, "perplexity": 103.36549159882652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573145.32/warc/CC-MAIN-20220818003501-20220818033501-00786.warc.gz"} |
https://en.wikipedia.org/wiki/Talk:Furby | # Talk:Furby
## Banning
Can anyone explain the phenomenon of a school 'banning' a certain fad? What is it about the popularity of a toy/game that makes it more disruptive than any other object brought to school? You can't do anything in school other than pay attention to lectures without getting into trouble, so I'm just not understanding this policy.
The nosies it makes can be annoying and disrupt a class.
## Furbies dying?
At the time I remember a lot of people saying that their furbies would die if they communicated with other furbies in a certain way. Was there any truth in this or was it yet another Furby myth?
The rumor is 100% false.${\displaystyle Insertformulahere}$
## Furby stories
This article mentions how it's big with hacker communities and other people who like to modify the furbys. Maybe we should find a link to that kind of thing. I've heard so many hilarious stories about modded furbys - like a guy who painted his with glow in the dark paint and gave it glowing red eyes. The guy who got his to say swear words. Stories of feral furbies - etc.
## Motivation of parents violates impariality
In several places its stated parents had a greater motivation to get a furby then the child, and that parents were forced to buy them for 300 or more dollars at auction "just to make their child happy". The motivation of the parent's actions violates the concept of impariality, in addition to being unvarifiable and therefore having no place in the article. I'd ask for an editor's permission to alter these points or for them to do it themselves.
## Myth?
I happen to know it is not true that they only utter prerecorded phrases, we had to murder mine after it completely accurately imitated our telephone.
Hello fellow Wikipedians,
I have just modified one external link on Furby. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
You may set the `|checked=`, on this template, to true or failed to let other editors know you reviewed the change. If you find any errors, please use the tools below to fix them or call an editor by setting `|needhelp=` to your help request.
• If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
• If you found an error with any archives or the URLs themselves, you can fix them with this tool.
If you are unable to use these tools, you may set `|needhelp=<your help request>` on this template to request help from an experienced user. Please include details about your problem, to help other editors.
Cheers.—InternetArchiveBot 00:29, 9 October 2017 (UTC)
## Furby connect
We need to edit it to reflect the release of the Furby Connect. DPS2004 (talk) 13:41, 15 February 2018 (UTC) | 2018-02-22 11:55:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1959877461194992, "perplexity": 1973.7557301630204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814101.27/warc/CC-MAIN-20180222101209-20180222121209-00212.warc.gz"} |
https://www.scienceopen.com/document?vid=6f001eb8-8392-4415-8e1b-f688d692f58d | 14
views
0
recommends
+1 Recommend
0 collections
0
shares
• Record: found
• Abstract: found
• Article: found
Is Open Access
# Stochastic Hysteresis and Resonance in a Kinetic Ising System
Preprint
, ,
Bookmark
There is no author summary for this article yet. Authors can add summaries to their articles on ScienceOpen to make them more accessible to a non-specialist audience.
### Abstract
We study hysteresis for a two-dimensional, spin-1/2, nearest-neighbor, kinetic Ising ferromagnet in an oscillating field, using Monte Carlo simulations and analytical theory. Attention is focused on small systems and weak field amplitudes at a temperature below $$T_{c}$$. For these restricted parameters, the magnetization switches through random nucleation of a single droplet of spins aligned with the applied field. We analyze the stochastic hysteresis observed in this parameter regime, using time-dependent nucleation theory and the theory of variable-rate Markov processes. The theory enables us to accurately predict the results of extensive Monte Carlo simulations, without the use of any adjustable parameters. The stochastic response is qualitatively different from what is observed, either in mean-field models or in simulations of larger spatially extended systems. We consider the frequency dependence of the probability density for the hysteresis-loop area and show that its average slowly crosses over to a logarithmic decay with frequency and amplitude for asymptotically low frequencies. Both the average loop area and the residence-time distributions for the magnetization show evidence of stochastic resonance. We also demonstrate a connection between the residence-time distributions and the power spectral densities of the magnetization time series. In addition to their significance for the interpretation of recent experiments in condensed-matter physics, including studies of switching in ferromagnetic and ferroelectric nanoparticles and ultrathin films, our results are relevant to the general theory of periodically driven arrays of coupled, bistable systems with stochastic noise.
### Most cited references80
• Record: found
### Stochastic resonance
(1998)
Bookmark
• Record: found
### Time‐Dependent Statistics of the Ising Model
(1963)
Bookmark
• Record: found
### A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening
(1979)
Bookmark
### Author and article information
###### Journal
02 December 1997
1998-02-25
###### Article
10.1103/PhysRevE.57.6512
cond-mat/9712021
Phys. Rev. E 57 6512-6533 (1998)
35 pages. Submitted to Phys. Rev. E Minor revisions to the text and updated references
cond-mat.mtrl-sci chao-dyn cond-mat.stat-mech nlin.CD
Condensed matter, Nonlinear & Complex systems | 2020-10-01 04:39:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4805506765842438, "perplexity": 2437.0784060108504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130615.94/warc/CC-MAIN-20201001030529-20201001060529-00125.warc.gz"} |
http://bib-pubdb1.desy.de/collection/PUB_T-20120731?ln=en | # T
2018-09-2716:23 [PUBDB-2018-03667] Report/Journal Article et al Lyman-$\alpha$ forest constraints on interacting dark sectors [DESY-18-081; TUM-HEP-1141-18; arXiv:1805.12203] The Lyman-$α$ forest is a valuable probe of dark matter models featuring a scale-dependent suppression of the power spectrum as compared to $Λ$CDM . In this work, we present a new estimator of the Lyman-$α$ flux power spectrum that does not rely on hydrodynamical simulations [...] Restricted: PDF PDF (PDFA); 2018-09-1715:06 [PUBDB-2018-03521] Preprint/Report Konstandin, T. Bubble wall velocities in the Standard Model and beyond [DESY-18-162; arXiv:1809.04907] We present results for the bubble wall velocity and bubble wall thickness during a cosmological first-order phase transition in a condensed form. Our results are for minimal extensions of the Standard Model but in principle are applicable to a much broader class of settings. [...] OpenAccess: PDF PDF (PDFA); 2018-09-1309:59 [PUBDB-2018-03486] Journal Article/Report/Contribution to a conference proceedings Servant, G. The serendipity of electroweak baryogenesis [DESY-18-131] Higgs cosmology : Theo Murphy meeting, BuckinghamshireBuckinghamshire, UK, 27 Mar 2017 - 28 Mar 2017 Philosophical transactions of the Royal Society of London / A 376(2114), 20170124 - (2018) [10.1098/rsta.2017.0124] The origin of the matter–antimatter asymmetry of the universe remains unexplained in the Standard Model (SM) of particle physics. The origin of the flavour structure is another major puzzle of the theory. [...] 2018-09-1111:32 [PUBDB-2018-03438] Journal Article Ringwald, A. Search for WISPs gains momentum CERN courier 58(7), 25 - 33 (2018) External link: Fulltext 2018-09-0512:07 [PUBDB-2018-03389] Preprint/Report et al Strong constraints on clustered primordial black holes as dark matter [DESY-18-132; arXiv:1808.05910] The idea of dark matter in the form of primordial black holes has seen a recent revival triggered by the LIGO detection of gravitational waves from binary black hole mergers. In this context, it has been argued that a large initial clustering of primordial black holes can help alleviate the strong constraints on this scenario. [...] OpenAccess: PDF PDF (PDFA); 2018-09-0512:02 [PUBDB-2018-03388] Preprint/Report Wild, S. BBN constraints on MeV-scale dark sectors. Part II. Electromagnetic decays [DESY-18-133; arXiv:1808.09324] Meta-stable dark sector particles decaying into electrons or photons may non-trivially change the Hubble rate, lead to entropy injection into the thermal bath of Standard Model particles and may also photodisintegrate light nuclei formed in the early universe. We study generic constraints from Big Bang Nucleosynthesis on such a setup, with a particular emphasis on MeV-scale particles which are neither fully relativistic nor non-relativistic during all times relevant for Big Bang Nucleosynthesis. [...] OpenAccess: PDF PDF (PDFA); 2018-09-0511:55 [PUBDB-2018-03387] Preprint/Report et al $e^+ e^-$ angularity disributions at NNLL$^\prime$ accuracy [DESY-18-083; SI-HEP-2018-19; LA-UR-18-24071; arXiv:1808.07867] We present predictions for the $e^{+}e^{-}$ event shape angularities at NNLL$^{\prime}$ resummed and $\mathcal{O}(\alpha_s^{2})$ matched accuracy and compare them to LEP data at center-of-mass energies $Q=91.2$ GeV and $Q=197$ GeV. We perform the resummation within the framework of Soft-Collinear Effective Theory, and make use of recent results for the two-loop angularity soft function. [...] OpenAccess: PDF PDF (PDFA); 2018-09-0510:09 [PUBDB-2018-03375] Preprint/Report et al MSSM Higgs Boson Searches at the LHC: Benchmark Scenarios for Run 2 and Beyond [DESY-18-140; MPP-2018-211; KA-TP-25-2018; IFT-UAM/CSIC-18-017; EFI-18-12; arXiv:1808.07542] We propose six new benchmark scenarios for Higgs boson searches in the Minimal Supersymmetric Standard Model. Our calculations follow the recommendations of the LHC Higgs Cross Section Working Group, and benefit from recent developments in the predictions for the Higgs-boson masses and mixing. [...] OpenAccess: PDF PDF (PDFA); 2018-09-0510:04 [PUBDB-2018-03374] Preprint/Report Akal, I. Exact instantons via worldline deformations [DESY-18-145; arXiv:1808.06868] The imaginary part of the one loop effective action in external backgrounds can be efficiently computed using worldline instantons which are closed periodic paths in spacetime. Exact solutions for nonstatic backgrounds are only known in certain cases. [...] OpenAccess: PDF PDF (PDFA); 2018-09-0509:59 [PUBDB-2018-03373] Preprint/Report Akal, I. Entanglement entropy on finitely ramified graphs [DESY-18-146; arXiv:1808.10391] We compute the entanglement entropy in a composite system separated by a finitely ramified boundary with the structure of a self-similar lattice graph. We derive the entropy as a function of the decimation factor which determines the spectral dimension, the latter being generically different from the topological dimension. [...] OpenAccess: PDF PDF (PDFA); | 2018-10-18 09:39:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5251250267028809, "perplexity": 3902.879254087884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511761.78/warc/CC-MAIN-20181018084742-20181018110242-00473.warc.gz"} |
https://economics.stackexchange.com/questions/5752/are-whites-robust-standard-errors-robust-to-clustered-errors/5755 | # Are White's Robust standard errors robust to clustered errors?
I want to ask about OLS White's 1980 "robust" standard errors. The key assumption, is that regression errors $u_i$ have distinct variances $σ_i^2$. Then the variance matrix is: $$\Sigma = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)$$ with its White's estimator: $$\hat\sigma_i^2 = \hat u_i^2$$ This is the HCE (heteroscedasticity-consistent estimator). Is the White's robust variance of $\hat{\beta}$ estimated by OLS assuming independence? For example, assume that there is some $i,j$ such that $$Cov(u_i,u_j)\neq0$$ Because $i,j$ are part of a cluster. In that case $\Sigma$ is not diagonal but has the same diagonal as before. Will the White estimator converge to the diagonal of the true $\Sigma$?
• @Soccerman what's your motivation for diluting this post so much ? We should keep notations inline when possible, IMO, while spacing out equations is fine. – VicAche May 26 '15 at 13:02 | 2019-10-23 13:30:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9508013129234314, "perplexity": 1194.8797251062076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00427.warc.gz"} |
https://math.stackexchange.com/questions/140906/is-there-such-a-function | Is there such a function?
Does there exist a continuous function $f:[0,1]\rightarrow\mathbb{R}$ such that for any two points P,Q on the curve, there exists a point R on the curve such that PQR is an equilateral triangle? If so, can we find a smooth one?
For bounty, scroll to bottom of page...
• is $f : [0,1] \to {\mathbb R}^2$? – sdcvvc May 4 '12 at 14:14
• Well you can construct it either way. It's the same thing for the purposes of this question. I guess for a proof it's easier to use f:[0,1]-->R^2 because the geometry is in R^2. Edit: it might actually be easier to work in R. Whatever. It doesn't matter. – Adam Rubinson May 4 '12 at 14:23
• My only guess for the answer would be to use the Intermediate Value Theorem for a relatively flat (e.g. |gradient| <= 2/3) function to show that it is never true. To this end I'm not making progress, and even so, it says nothing about a function that has |gradient| > 2/3 in places. – Adam Rubinson May 4 '12 at 14:35
• What curve? The graph of $f$? – Chris Eagle May 4 '12 at 14:48
• Do you mean to fill an equilaterial triangle? – Jim Hefferon May 4 '12 at 14:50
Here is the answer to the generalized form of your question in the case of bounded sets. As it turns out, the convexity assumption is unnecessary.
Proposition. Let $A\subseteq\Bbb R^2$ be a bounded set, such that for each pair of distinct points $x,y\in A$ there is a point $z\in A$, such that $x,y,z$ are the vertices of an equilateral triangle. Then $A$ is one of the following:
• empty set,
• a set containing only one point,
• the set of vertices of some equilateral triangle.
Proof. Let $A$ be a set satisfying the hypotheses of the proposition. Suppose $A$ contains more than one point. We will show that then $A$ must be the set of vertices of some equilateral triangle.
First, we shall prove the proposition in the case that $A$ is closed. (In the end we shall show that the general case follows easily from this one.) So, let's assume $A$ is closed, i.e. $A$ contains all its limit points. Then $A$ is compact, so there exist points $a,b\in A$ such that $$d(a,b)=\operatorname{diam}A=\sup\lbrace d(x,y)|\text{ }x,y\in A\rbrace,$$ where $d$ is the metric in $\Bbb R^2$ and $\operatorname{diam}$ stands for diameter.
Let $c$ be a point such that $a,b,c$ are the vertices of an equilateral triangle. This exists by the hypotheses. Let $r=d(a,b)$ and let $S=\overline{K}(a,r)\cap\overline{K}(b,r)\cap\overline{K}(c,r)$, where $\overline{K}(x,r)$ denotes the closed ball centered at $x$ with radius $r$. Then $A\subseteq S$, because otherwise we would have $\operatorname{diam} A>r$ which is not the case. By the way, $S$ is a Reuleaux triangle:
We already know that $A$ contains the vertices of this Reuleaux triangle. We shall now show that this is all that $A$ contains. We shall first define some more points. Let $a'$ be the reflection of $a$ across the line through $b$ and $c$. Define $b'$ and $c'$ analogously. (As reflections of $b$ and $c$ across the lines opposite to them.) Let $$S_{a}:=S\setminus(\overline{K}(b',r)\cup\overline{K}(c',r))\\S_{b}:=S\setminus(\overline{K}(c',r)\cup\overline{K}(a',r))\\S_{c}:=S\setminus(\overline{K}(a',r)\cup\overline{K}(b',r))$$ Now, note that every point of $S\setminus\lbrace a,b,c\rbrace$ is contained in at least one of the sets $S_a,S_b,S_c$. So, to complete the proof in the case where $A$ is closed, it suffices to show that $A\cap S_a=\emptyset,A\cap S_b=\emptyset$ and $A\cap S_c=\emptyset$, which we will now do.
Actually, we will only prove the case of $S_a$. The other two can be proved by a completely symmetric argument, so we leave them to the reader. Let $x\in S_a$. Let $u(x)$ be the point that forms an equilateral triangle together with $a$ and $x$, for which the triangle $a,x,u(x)$ is oriented clockwise (i.e. negatively). Let $v(x)$ be the other point that forms an equilateral triangle with $a$ and $x$, i.e. $a,x,v(x)$ is oriented counterclockwise (positively). This defines two functions $u:S_a\to\Bbb R^2$, $v:S_a\to\Bbb R^2$. By definition, $u$ is a rotation around the point $a$ by $-\frac{\pi}{3}$, and $v$ is the rotation around the point $a$ by $\frac{\pi}3$.
By exploiting this geometry, one can now easily see that $u$ rotates the set $S_a$ outside of $S$, i.e. $u(S_a)\subseteq \Bbb R^2\setminus S$, and for $v$ the same holds: $v(S_a)\subseteq \Bbb R^2\setminus S$. But this means that for each $x\in S_a$ the only vertices that could form an equilateral triangle with $x$ and $a$ are outside of $S$, hence not in $A$. But then $x$ cannot be an element of $A$. Here's a picture showing $S_a$ (middle), and its two rotations (the Reuleaux triangle around $S_a$ is $S$, and the two rotations lie outside of $S$):
This completes the proof of the closed case.
In the general case of bounded (not necessarily closed) $A$ with at least two points, we proceed as follows. Let $\overline{A}$ be the closure of $A$. Since $\Bbb R^2$ is such a nice (i.e. first-countable) space, this means that $$\overline{A}=\lbrace x\in\Bbb R^2|\text{ there is a sequence } (x_n)_n\text{ with terms in } A\text{, such that } \lim_{n\to\infty}x_n=x\rbrace.$$ Now, clearly $A\subseteq\overline{A}$. Furthermore, $\overline{A}$ also satisfies the hypotheses of the proposition. To see this, let $x,y\in\overline{A}$. Then there are sequences $(x_n)_n$ and $(y_n)_n$ in $A$, such that $\lim_{n\to\infty}x_n=x$, $\lim_{n\to\infty}x_n=x$. But since $x_n,y_n\in A$ for each $n$, we can associate a point $z_n\in A$ to each pair $x_n,y_n\in A$ such that $x_n,y_n,z_n$ are the vertices of an equilateral triangle. Because $\overline{A}$ is compact, we can choose a subsequence of $(z_n)_n$ that converges to a point $z\in\overline{A}$. By continuity of the metric, $x,y,z$ again form an equilateral triangle. So, $\overline{A}$ indeed satisfies the hypotheses of the proposition and is therefore the set of vertices of some equilateral triangle. From this it easily follows that $\overline{A}=A$, which concludes the proof. $\square$
This proves among other things, that there is no continuous function $f:[0,1]\to\Bbb R$, whose graph contains for every pair of points a third point that forms an equilateral triangle with them. It proves even more: there is also no such discontinuous function $f:[0,1]\to\Bbb R$. Why? Suppose there is. Then it must be bounded, since, otherwise its graph would contain two points $x,y$ such that $d(x,y)>100$. There would then have to be a third point on the graph whose first coordinate would lie outside of $[0,1]$, which is not the case. But for bounded sets, the proposition applies. The same applies for any function $f:B\to\Bbb R$, where $B\subseteq\Bbb R$ is a bounded set.
The situation might be more interesting with functions $f:\Bbb R\to\Bbb R$, however. But in this case again, there is no such bounded function. (Same argument as in the previous paragraph.) It might be possible to construct something unbounded and probably horribly discontinuous, though, but I haven't thought much about that.
• I did suspect that convexity was unnecessary for proofs (especially general ones). And for geometric purposes, it seems as though you "need" convexity, even though you have shown you don't if you use compactness of R^n. In fact, the compactness method you use adds to the beauty of the problem in my opinion. HOWEVER, in my question I require for you to also show the equilateral triangle property holds/does not hold for cts, unbounded f:R-->R functions. If you can do this you will achieve the bounty... I don't think this is unreasonable as it clearly states it in my question at the bottom. – Adam Rubinson May 12 '12 at 2:35
• I can think of ideas that might work for "horrible discontinuous unbounded functions f:R-->R with the property" I am thinking: take the "equilateral triangle grid": exchangedownloads.smarttech.com/public/content/29/… considering only the vertices, not the lines joining the vertices. Rotate the whole graph by pi/4 radians anti-clockwise (for example). Then I think every point is no on top any other point. Maybe it doesn't fill up R in the domain but maybe we can add another point somewhere and "expand out" so that it does. – Adam Rubinson May 12 '12 at 2:46
• @AdamRubinson: If I think of something, I'll let you know. I won't be around for two days, though, since I have some stuff to do ... – Dejan Govc May 12 '12 at 3:01
• thanks v. much. To try to be fair, if someone else finds the answer before you, then if possible I'll see if I can share the bounty between you two. I'll ask a mod if I can do that if that situation arises... – Adam Rubinson May 12 '12 at 3:03
Not in the smooth case. Here we suppose that $f$ is $C^1$.
Let $s=\frac{\sqrt{3}}3=\tan{\fracπ6}.$
Suppose that for some $x_0$, $|f'(x_0)|<s$ then around $x_0$ the graph of $f$ looks like a straight line so horizontal that the third point of a equilateral triangle containing $(a,f(a))$ and $(b,f(b))$ would be of abscissa $x∈[a,b]$.
In the following picture, the case on the left is the one that is impossible.
Then $|f'|≥s$ and $f'$ is continuous so $f$ is monotonous so $f$ can't meet your criteria.
I don't know how to generalize.
• For your line of reasoning, we only need |f'(x_0)| < 2/3, not necc. 1/2. Thanks though, I think I can construct an epsilon-delta proof now for functions that have |f'(x_0)| < 2/3 for some x_0. If f is such that there is no x_0 with |f'(x_0)| < 2/3, then the result is kind of obvious. So that is the smooth case done. The "cts but non-smooth" case still remains to be solved. – Adam Rubinson May 4 '12 at 15:32
• I made an error, it was not $\frac12=\sin{\fracπ6}$ but $\tan{\fracπ6}$. However $\frac23$ is not small enough. – jmad May 5 '12 at 15:55
• Thanks for the diagram. Just for anyone reading: the shaded-in part of the diagram (equilateral triangles) are not supposed to be part of the function. Only the vertices of the equilateral triangle are meant to be part of the function. The only other option (for the smooth case) is if we have a function which has |f(x)| >= 2/3 everywhere. f must be monotone otherwise there must be a turning point which would mean |f|<=2/3 --><--. But f monotone and gradient >= 2/3 clearly won't work. – Adam Rubinson May 5 '12 at 18:52
• However if we don't require smoothness then we can try some crazy fractal functions like the Weierstrass function: en.wikipedia.org/wiki/Weierstrass_function – Adam Rubinson May 5 '12 at 18:53
• However if we don't require smoothness then we can try some crazy fractal functions like the Weierstrass function: en.wikipedia.org/wiki/Weierstrass_function I guess this is what I am after for the non-smooth case. Also, I don't think we can find a subset of R^2 (e.g. shaded in circle, shaded in square etc.) which works. Suppose there were such a shape, call it a set X in R^2. Just find (z1,z2) in X that maximizes {|zA-zB|: zA,zB in X}}. Then you would need the 3rd vertex (say z3) to be in X and when you colour in the equilateral triangle joining z1, z2 and z3, this shaded triangle – Adam Rubinson May 5 '12 at 19:04
Suppose that the set $S$ is bounded, and suppose that it has more than one point. Let $A$ and $B$ be two points in $S$. Then it must have a third point $C$ which is (another) vertex of the equilateral triangle with one side as $AB$, and by our condition of convexity the entire triangle $ABC$ must be in $S$.
Now let $D$ be the point of intersection of the perpendicular bisector of $\angle BAC$ and ${BC}$. Then there must be a point $E$ which is (another) vertex of the equilateral triangle with one side as $AD$. Let the distance of $AB$ be equal to $d$. Then, by several applications of the pythagorean theorem, we have that either the length of $CE=\frac {\sqrt 7} 2 d$ or $BE=\frac{\sqrt 7} 2 d$. Either way, this shows that the distance between points is unbounded, which is a contradiction with our hypothesis. Hence $S$ may not have more than one point.
If the sets are unbounded, it need not be all of $\mathbb R ^2$. For instance, any half-plane should do.
Here's a nice simple proof for why a continuous function is not possible. I've tried to be more laconic, and just give the idea of why it will not work, as I believe is in line with your preference (from the comments). Please let me know if you would like more details.
For any point (in the picture I place the point at the origin, but it's no matter) we cannot have any point of our graph on the lines emanating from this point at a $\pi/6$ angle with the horizontal, for if we did it would quickly follow that the only way to create an equilateral triangle would be to place a point above/below one of these points, and thus $f$ would not be a function. By the intermediate value theorem, this means that we cannot place any points in the shaded area, for in order to create an equilateral triangle with the point at the origin and a point in the shaded area we would need to place a point outside the shaded area, and at some point $f$ would have to cross the line. Now place a point anywhere in the non-shaded area. Immediately, from our discussion earlier, we know that $f$ cannot cross the dotted lines emanating from this point either, and furthermore the point needed to create an equilateral triangle will lie in between these lines, which we know (by the intermediate value theorem) is not a point which $f$ can reach. Hence, such a continuous function is impossible.
Also, my proof at the beginning, that a bounded set $S$ which is convex is not possible, does not really deal with showing that $S$ cannot be between a circle and a triangle, it is much simpler than that. Read it again, and note that all it shows is that, given two points $A, B$ with distance $d$, we are guaranteed that there are points $C, D$ which have distance $\frac {\sqrt 7} 2 d$. The reason this shows that the set $S$ cannot be bounded is that if it were bounded by some ball with diameter $h$, we are guaranteed that there exist points in $S$ with a distance greater than $h$. We simply pick any two points $A, B$ with distance $d$, and then repeat the process above $k$ times, where $\left(\frac {\sqrt 7} 2\right) ^k d>h$. Therefore $S$ cannot be bounded. Note that nowhere do we assume anything about bounding $S$ by the smallest ball possible, or choosing points yielding the maximum distance because (in particular, if $S$ were open) such points may not exist.
• Assuming what you wrote is correct... I still want to know if there is a $cts$ function f:R-->R with the equilateral triangle properties (I think this is equivalent to asking the same question for f:[0,infinity)-->R ) – Adam Rubinson May 11 '12 at 21:41
• @AdamRubinson Do you mean that the image of $f$ is a set $S$ which has these properties? – process91 May 12 '12 at 1:45
• @AdamRubinson It doesn't make sense to ask for a continuous function $f:\mathbb R \to \mathbb R$ such that the image set satisfies these properties because the image of $f$ will be one dimensional. What I think you mean is to ask for a function $f:\mathbb R \to \mathbb R ^2$. A simple answer to your question is that a Peano curve can be made with domain $\mathbb R$, whose range is all of $\mathbb R ^2$. Such a function would be continuous (read the linked page on Peano curves to see why)... – process91 May 12 '12 at 1:55
• Yes. That is actually an extremely nice proof in my opinion. I'll read the second half tomorrow. Also, are there any moderators on this site? I had a quick glance at all users yesterday and it was not obvious that there were... – Adam Rubinson May 13 '12 at 2:32
• @AdamRubinson: Michael and Dejan are the respective owners of their answers, even though this is your question. I think it's inappropriate to combine their answers without asking both users first, as well as unnecessarily redundant. There are four mods (Qiaochu, Mariano, Willy, Zev) and a couple more on the way, but IIRC moderators do not have the power to split bounties, and the only way to award a second bounty is to increase the amount on it. (Related.) – anon May 13 '12 at 15:02
Edit: The bounty relates to this post. Basically, fill in all the details. I'm doing this because I am interested in the problem, however I don't have time atm to investigate.
There is no convex subset of R^2 other than:
1. (The empty set)
2. One point
which is bounded and has the property that, given any two points in the set, the 3rd point is in the set. This can be proven as such:
1. Suppose there is such a convex bounded set in R^2 (i.e. we can stick a closed circle (i.e. closed ball) round it)
2. Stick the smallest possible closed circle/closed ball round it.
3. Take the two points in the set which have max distance.
4. If there is a 3rd point in the set then the set must be a convex shaded region between an equilateral triangle and the circle.
5. An equilateral triangle fails (think about the perpendicular bisector).
6. Circle fails (diameter)
7. Anything in between fails (the perpendicular from the equilateral triangle is actually outside the circle by construction (I think)).
Contradiction. Shaded concave sets... part of the bounty.
Now to unbounded sets in R^2...
We can have an unbounded set that works. E.g. take a small straight line and "extend it" by taking two points on the line and filling up the equilateral triangles. I don't know what types of sets we will get but they will be crazy and whether or not they fill up R^2 I don't know...
Still I can't work out if a cts function R-->R exists. Or if such a function R-->R exists at all. if we fill up the equilateral grid and then shift it slightly is that a function? Bounty is in response to this question
• Step 3 is wrong: If the set is itself a (filled) eqilateral triangle, then two maximally distant points in it do make an equilateral triangle with a third point. (A vertex and the mid-point of the opposite side don't, so your conclusion may be true -- but you haven't proved it here.). – TonyK May 10 '12 at 16:50
• Yes. Well I can show it for convex sets of R^2. Concave sets I haven't actually shown yet. Let's change the bounty question... – Adam Rubinson May 10 '12 at 17:15
• basically, if anyone can fill in all the details and make some concrete proofs or even better some examples of functions, then they get 100 points. I'm going back to revision. Good luck – Adam Rubinson May 10 '12 at 17:32
• Are you using a definition of convexity that admits isolated points, such as those which are the vertices of an equilateral triangle, as given by your 3rd example? If so, then there are unbounded sets which do not take up all of $\mathbb R ^2$. Consider that, given any two points, we can find exactly two other points which are vertices of an equilateral triangle with the first two. Start with two points, say $(-1,0)$ and $(1,0)$, and repeat this process recursively. The infinite set of points will be unbounded, will satisfy the other requirements, but will not contain the origin. – process91 May 10 '12 at 19:04
• It does not, however, satisfy the typical definition of convexity, and neither does your third example, I believe. – process91 May 10 '12 at 19:08 | 2019-06-20 13:08:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8758354783058167, "perplexity": 191.3596290805896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999218.7/warc/CC-MAIN-20190620125520-20190620151520-00450.warc.gz"} |
http://blog.eqnets.com/category/networks/page/2/ | ## The Clinton doctrine
25 January 2010
After the fallout from Aurora, US Secretary of State Hillary Clinton gave a major speech last Thursday at the Newseum in DC. Highlights below:
The spread of information networks is forming a new nervous system for our planet…in many respects, information has never been so free…[but] modern information networks and the technologies they support can be harnessed for good or for ill…
There are many other networks in the world. Some aid in the movement of people or resources, and some facilitate exchanges between individuals with the same work or interests. But the internet is a network that magnifies the power and potential of all others. And that’s why we believe it’s critical that its users are assured certain basic freedoms. Freedom of expression is first among them…
…a new information curtain is descending across much of the world…
Governments and citizens must have confidence that the networks at the core of their national security and economic prosperity are safe and resilient…Disruptions in these systems demand a coordinated response by all governments, the private sector, and the international community. We need more tools to help law enforcement agencies cooperate across jurisdictions when criminal hackers and organized crime syndicates attack networks for financial gain…
States, terrorists, and those who would act as their proxies must know that the United States will protect our networks. Those who disrupt the free flow of information in our society or any other pose a threat to our economy, our government, and our civil society. Countries or individuals that engage in cyber attacks should face consequences and international condemnation. In an internet-connected world, an attack on one nation’s networks can be an attack on all [ed. see article 5 of the North Atlantic Treaty]. And by reinforcing that message, we can create norms of behavior among states and encourage respect for the global networked commons.
China denies everything and is trying to change the subject.
The tone of this speech was remarkable. While it is natural to expect that most nations conduct offensive computer network operations against foreign governments and organizations, getting publicly called on it is rare. Most observers have no doubt that the PRC has been infiltrating and attacking US government and commercial networks for strategic ends, and the NSA would not be doing its job if it were not doing the same thing abroad. So even if everything isn’t Marquis of Queensberry you wouldn’t expect to see folks complain too loudly.
But human rights and censorship is another story. There is a simple reason why Cold War rhetoric was recycled in this speech. Regardless of whether Google capitulates or leaves China (any other outcome is unlikely), by going public instead of leaking to the press they have put the PRC on the defensive. As I remarked earlier, Google surely must have known it had the (at least implicit) backing of the US before it (effectively) named names. The administration must have seen this as a golden opportunity to seize the moral high ground. When force of arms cannot be decisive, the justness of a cause still might be.
## Random bits
20 January 2010
14 January 2010
Time for the (n+1)th dissection of Google’s recent announcement concerning cyberattacks and censorship. (You’ve got to love recursion!)
As Galrahn points out, discounting Google’s market share relative to Baidu isn’t really sensible. They’ve got a lot of market share there, especially for non-search services without strong competitors—but many of these services (YouTube, Picasa, and often Blogger) have been blocked by the Chinese government. That speaks to two things in China: an opportunity for user base consolidation and to a governmental approach to information that is inimical to Google’s business model. More to the point:
For what amounts to only 2% of revenue, Google is threatening to disrupt the internet behavior of at minimum 118 million internet savvy Chinese and believes that fact alone has value in negotiations.
Is this really a funeral, or will a hundred flowers blossom?
That is, Google is using a casus belli to force an issue that predates their entry into the Chinese market. It doesn’t cost them much to do so. They’ve already got the explicit backing of some other heavyweight Western companies (e.g., Yahoo) and network effects may induce many others to climb on board the bandwagon. They surely have the implicit backing of the US government in pushing back against China (and am I the only one who is thinking about the possibility of honeypots here? No way).
The bottom line is that this is not about a moral stand. By taking things public, Google is creating a negotiating opportunity for what it’s wanted all along from China. The real issue here is not who is “right” or “wrong” but who is going to win. For Google to thrive in China, the Chinese Communist Party’s control over information has to be weakened. For the CCP to thrive in China, it has to retain a monopoly on political power, and this requires controlling the flow of information. Moreover, and as I’ve mentioned before, there is a clear path from China’s cyber strategy to the foundations of its politics. So Google will probably not win much if anything in this skirmish.
The larger point is much more interesting, though. After a decade of undeclared cyber war with Chinese characteristics, this is the first overt public response. China has less to lose from cyberwarfare than the West does. But as it finds what it’s looking for with rampant cyberespionage, China may also find that it is hurting itself.
13 January 2010
## Random bits
8 January 2010
768-bit RSA modulus factored. This is basically right on schedule for a Moore’s law fit of largest publicly factored RSA moduli from a RSA technical report dating from 2000. Expect 1024-bit moduli to go down in about a decade.
Visualizing Abdulmutallab. This is supposed to make some sense if you look at it long enough, apparently.
Geolocation hack
IPv4 lives on…for now
6 January 2010
## Random bits
4 January 2010
Holiday round-up edition…
Suricata IDS in beta. Another open-source IDS is a good thing. (But open-source network monitoring will be even better!)
The best defense is a good offense
Switchable DNA nanostructures
Hijacking NetBIOS
Eavesdropping on quantum crypto?
Survey of key exchange security deriving from the Second Law
An approach to subexponential factoring
The use of ideas of Information Theory for studying “language” and intelligence in ants
## Random bits
16 December 2009
Fake Steve Jobs wants to DDOS AT&T
Quasicrystals from entropic packing of tetrahedra (NB. the Nature article actually mentions this preprint, which achieves a higher packing fraction)
## Random bits
7 December 2009
Two interesting tidbits from Ars…
How robber barons hijacked the Victorian Internet
Bandwidth hogs join unicorns in realm of mythical creatures
## Birds on a wire and the Ising model
30 November 2009
Statistical physics is very good at describing lots of physical systems, but one of the basic tenets underlying our technology is that statistical physics is also a good framework for describing computer network traffic. Lots of recent work by lots of people has focused on applying statistical physics to nontraditional areas: behavioral economics, link analysis (what the physicists abusively call network theory), automobile traffic, etc.
In this post I’m going to talk about a way in which one of the simplest models from statistical physics might inform group dynamics in birds (and probably even people in similar situations). As far as I know, the experiment hasn’t been done–the closest work to it seems to be on flocking (though I’ll give $.50 and a Sprite to the first person to point out a direct reference to this sort of thing). I’ve been kicking it around for years and I think that at varying scopes and levels of complexity, it might constitute anything from a really good high school science fair project to a PhD dissertation. In fact I may decide to run with this idea myself some day, and I hope that anyone else out there who wants to do the same will let me know. The basic idea is simple. But first let me show you a couple of pictures. Notice how the tree in the picture above looks? There doesn’t seem to be any wind. But I bet that either the birds flocked to the wire together or there was at least a breeze when the picture below was taken: Because the birds are on wires, they can face in essentially one of two directions. In the first picture it looks very close to a 60%-40% split, with most of the roughly 60 birds facing left. In the second picture, 14 birds are facing right and only one is facing left. Now let me show you an equation: $H = -J\sum_{\langle i j \rangle} s_i s_j - K\sum_i s_i.$ If you are a physicist you already know that this is the Hamiltonian for the spin-1/2 Ising model with an applied field, but I will explain this briefly. The Hamiltonian $H$ is really just a fancy word for energy. It is the energy of a model (notionally magnetic) system in which spins $s_i$ that occupy sites that are (typically) on a lattice (e.g., a one-dimensional lattice of equally spaced points) take the values $\pm 1$ and can be taken as caricatures of dipoles. The notation $\langle i j \rangle$ indicates that the first sum is taken over nearest neighbors in the lattice: the spins interact, but only with their neighbors, and the strength of this interaction is reflected in the exchange energy $J.$ The strength of the spins’ interaction with an applied (again notionally magnetic) field is governed by the field strength $K.$ This is the archetype of spin models in statistical physics, and it won’t serve much for me to reproduce a discussion that can be found many other places (you may like to refer to Goldenfeld’s Lectures on Phase Transitions and the Renormalization Group, which also covers the the renormalization group method that inspires the data reduction techniques used in our software). Suffice it to say that these sorts of models comprise a vast field of study and already have an enormous number of applications in lots of different areas. Now let me talk about what the pictures and the model have in common. The (local or global) average spin is called the magnetization. Ignoring an arbitrary sign, in the first picture the magnetization is roughly 0.2, and in the second it’s about 0.87. The 1D spin-1/2 Ising model is famous for exhibiting a simple phase transition in magnetization: indeed, the expected value of the magnetization for in the thermodynamic limit is shown in every introductory statistical physics course worth the name to be $\langle s \rangle = \frac{\sinh \beta K}{\sqrt{\sinh^2 \beta K + e^{-4\beta J}}}$ where $\beta \equiv 1/T$ is the inverse temperature (in natural units). As ever, a picture is worth a thousand words: For $K = 0$ and $T > 0,$ it’s easy to see that $\langle s \rangle = 0.$ But if $K \ne 0, J > 0$ and $T \downarrow 0$, then taking the subsequent limit $K \rightarrow 0^\pm$ yields a magnetization of $\pm 1.$ At zero temperature the model becomes completely magnetized–i.e., totally ordered. (Finite-temperature phase transitions in magnetization in the real world are of paramount importance for superconductivity.) And at long last, here’s the point. I am willing to bet ($.50 and a Sprite, as usual) that the arrangement of birds on wires can be well described by a simple spin model, and probably the spin-1/2 Ising model provided that the spacing between birds isn’t too wide. I expect that the same model with varying parameters works for many–or even most or all–species in some regime, which is a bet on a particularly strong kind of universality. Neglecting spacing between birds, I expect the effective exchange strength to depend on the species of bird, and the effective applied field to depend on the wind speed and angle, and possibly the sun’s relative location (and probably a transient to model the effects of arriving on the wire in a flock). I don’t have any firm suspicions on what might govern an effective temperature here, but I wouldn’t be surprised to see something that could be well described by Kawasaki or Glauber dynamics for spin flips: that is, I reckon that–as usual–it’s necessary to take timescales into account in order to unambiguously assign a formal or effective temperature (if the birds effectively stay still, then dynamics aren’t relevant and the temperature should be regarded as being already accounted for in the exchange and field parameters). I used to think about doing this kind of experiment using tagged photographs or their ilk near windsocks or something similar, but I can’t see how to get any decent results that way without more effort than a direct experiment. I think it probably ought to be done (at least initially) in a controlled environment.
Anyways, there it is. The experiment always wins, but I have a hunch how it would turn out.
UPDATE 30 Jan 2010: Somebody had another interesting idea involving birds on wires. | 2013-12-04 22:28:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2654880881309509, "perplexity": 2607.280831114226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163037568/warc/CC-MAIN-20131204131717-00097-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/195464/on-grothendiecks-idea-on-his-standard-conjecture-b | # On Grothendieck's idea on his Standard Conjecture B
Let me recall the Standard Conjecture B (see [1,2] below):
The $\Lambda$-operation of Hodge theory is algebraic.
It more or less says that the partial inverse to “cupping with the class of a hyperplane” comes from an algebraic cycle. This is true, for example, for abelian varieties.
Question
On the fifth page of his article on the Standard Conjectures [2] (page 197 of the journal) in the second paragraph from the top Grothendieck writes:
I have an idea of a possible approach to Conjecture $B$, which relies in turn on certain unsolved geometric questions, and which should be settled in any case.
Q2: What are the “unsolved geometric questions” that he mentions? What is the status of these questions?
References
• Kleiman's 1994 overview (cited in the Wikipedia entry) is quite explicit that Grothendieck published only this single 1968 paper on the standard conjectures. – Carlo Beenakker Feb 3 '15 at 11:29
My guess is that he was thinking about crystalline cohomology.
It fits rather nicely in Grothendieck research at that time. He had obviously in mind the success of Dwork's p-adic approach to the Weil conjectures, and the limitations of the other cohomologies avaible at the time (see sections 1.5 to 1.8 of "Crystals and the de Ram cohomology of schemes").
The standard conjectures were worked out in 1965 (according to Grothendieck's 1968 paper), so he had to have them in mind while working on his p-adic cohomology, that he presented to Bourbaki on december 1966. He then gave it to his student Pierre Berthelot to develop. The intro of the Bourbaki notes reads:
The content of the notes are by no means intended to be a complete theory. Rather, they outline the start of a program of work which has still not been carried out (*).
(*) For a more detailed exposition and progress in this direction, we refer to the work of P. Berthelot, to be developped presumably in SGA 8.
Berthelot's complete exposition was not presented as SGA 8, but as in independient work in 1974. So even if the cohomology was ready long before that, Grothendieck had to regard it as unsolved in 1968, when he wrote about the standard conjectures. It is also reasonable to imagine that he had hopes at those early stages for crystalline cohomology to be an important tool in the yoga of motives.
Again, this is just a guess. I'm not sure that he would refer to this as "unsolved geometric questions" (perhaps in the sense of its aplication to Hodge/Betti coefficients?). Maybe someone who knows more about all of this can add some details.
Some relevant references:
"On the de Rham cohomology of algebraic varieties" (Grothendieck, written in 1963)
"Crystals and the de Rham cohomology of schemes" pp. 254-306 (Grothendieck, written in 1966)
"Letter to Tate" (Grothendieck, written in 1966)
• This may be true. At least, in some papers written around 1970 (not by Grothendieck:)) I have met the hope that crystalline cohomology (or, maybe, some other $p$-adic cohomology theory) will yield the proof of the last remaining Weil conjecture. – Mikhail Bondarko Mar 9 '15 at 22:15
• Thanks for your answer. It clears up some history, thought what Grothendieck really thought will probably remain a mistery. – jmc Mar 10 '15 at 1:08
In the paper Smirnov, Oleg N., Graded associative algebras and Grothendieck standard conjectures// Invent. Math. 128 (1997), no. 1, 201–206 it is proved that Standard Conjecture D (numerical equivalence vs. homological equivalence) implies Lefschetz type Standard Conjecture (Conjecture B).
• hmm, how does this answer Q1 or Q2? – Carlo Beenakker Feb 4 '15 at 14:37
• Well, Standard Conjecture D is an open problem.:) As you have noted, there are no publications of Grothendieck that answer the question; yet he could have suspected that the implication proved by Smirnov is valid. – Mikhail Bondarko Feb 4 '15 at 14:45
• I usually dont go offtopic, but arent there some new mathematical insights from Grothendieck's legacy. Is there even one? – nxir Feb 5 '15 at 0:35
• That conjecture D implies conjecture B was known to Grothendieck, and can be found in the various articles of Kleiman on the subject. What Smirnov proved is that conjecture D for one variety $X$ implies conjecture B for $X$. I doubt very much that Grothendieck had such a detail in mind. – abx Mar 2 '15 at 14:14
• Well, this was just a guess; you can write down your own one. – Mikhail Bondarko Mar 2 '15 at 14:25 | 2020-11-29 02:37:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7839799523353577, "perplexity": 917.1117265027405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195967.34/warc/CC-MAIN-20201129004335-20201129034335-00607.warc.gz"} |
https://stats.stackexchange.com/questions/417047/how-do-you-calculate-an-exact-two-tailed-p-value-using-binomial-distribution/417057 | # How do you calculate an exact two-tailed P-value using binomial distribution? [closed]
First, I will preface this question with my ulterior motive: I would like more evidence that the use of 19th and 20th century approximations play little to no pedagogic advantage in modern intro stats or intro data science courses.
First, let us agree to work with the following definition of a P-value: The probability of observing your sample—or something more extreme—given that the null hypothesis is true.
We wish to conduct a two-tailed hypothesis test for a population proportion using counts and exact probabilities from the binomial distribution. The hypotheses are $$H_0 : p = 10\%$$ $$H_a : p \ne 10\%$$ The sample obtained has $$n=189$$ and there are $$k=10$$ successful observations in this sample.
¿What is the two-tailed P-value for this test? It seems that there are reasonable arguments for either $$P(X \le 10) + P(X \ge 27) = 0.053$$ or $$P(X \le 10) + P(X \ge 28) = 0.038$$ (For those of you "addicted" to the conventional significance level of $$\alpha=0.05$$, you can probably see where I might be going with this. ;-)
To keep this in a pedagogic framework, I'm most curious for answers that might indicate how you would grade a student's work who submitted either answer...and how you would justify any loss of points that might occur.
## closed as primarily opinion-based by Glen_b♦Jul 12 at 1:01
Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
• Take the 1 sided test p-value ($P(X\le 10)$) and double it. – AdamO Jul 11 at 21:38
• I would have thought $P(X \le 10) + P(X \ge 29) = 0.0285$ has some justification as the probability of "as extreme as or more extreme than $10$" – Henry Jul 11 at 22:19
• @AdamO ¿can you provide a rationale for why this would be considered exact? – Gregg H Jul 12 at 0:38
• $P(X \le 10) \approx 0.0150$ while $P(X \ge 28)\approx 0.0229$ so you could say that, given the hypothesis $n=189, p=0.1$, then $28$ is a less extreme observation than $10$. Meanwhile $P(X \ge 29)\approx 0.0135$ so $29$ is a more extreme observation than $10$ – Henry Jul 12 at 7:02
• This test hasn't been fully defined yet: you still have to specify a critical region. Ignoring randomized tests, there are at least 5 possibilities, depending on whether you want it to be symmetric in values, symmetric in probabilities, come as close as possible to the nominal p-value, or something else. Thus, this isn't a good question to ask learners and has little if any bearing on the ultimate questions of pedagogy that motivate you. Indeed, I don't see how this question is related even remotely to "19th and 20th century approximations:" could you explain the connection? – whuber Jul 12 at 16:02
• (+1) NB Henry suggests adding the probability from the upper tail that doesn't exceed $P(X \leq x)$; not the probabilities that don't exceed $P(X = x)$. These are both common methods, & agree in this case but not always. – Scortchi Jul 15 at 12:10 | 2019-08-26 09:50:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6885167956352234, "perplexity": 591.4485491533845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331485.43/warc/CC-MAIN-20190826085356-20190826111356-00460.warc.gz"} |
https://proofwiki.org/wiki/Proof_by_Cases/Formulation_1/Forward_Implication | # Proof by Cases/Formulation 1/Forward Implication
## Theorem
$\paren {p \implies r} \land \paren {q \implies r} \vdash \paren {p \lor q} \implies r$
## Proof 1
By the tableau method of natural deduction:
$\left({p \implies r}\right) \land \left({q \implies r}\right) \vdash \left({p \lor q}\right) \implies r$
Line Pool Formula Rule Depends upon Notes
1 1 $\left({p \implies r}\right) \land \left({q \implies r}\right)$ Premise (None)
2 1 $p \implies r$ Rule of Simplification: $\land \EE_1$ 1
3 1 $q \implies r$ Rule of Simplification: $\land \EE_2$ 1
4 1 $p \lor q \implies r \lor r$ Sequent Introduction 2, 3 Constructive Dilemma
5 5 $p \lor q$ Assumption (None)
6 1, 5 $r \lor r$ Modus Ponendo Ponens: $\implies \mathcal E$ 4, 5
7 1, 5 $r$ Sequent Introduction 6 Rule of Idempotence: Disjunction
8 1 $p \lor q \implies r$ Rule of Implication: $\implies \II$ 5 – 7 Assumption 5 has been discharged
$\blacksquare$
## Proof 2
From the Constructive Dilemma we have:
$p \implies q, r \implies s \vdash p \lor r \implies q \lor s$
from which, changing the names of letters strategically:
$p \implies r, q \implies r \vdash p \lor q \implies r \lor r$
From the Rule of Idempotence we have:
$r \lor r \vdash r$
and the result follows by Hypothetical Syllogism.
$\blacksquare$ | 2022-05-28 01:28:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8432524800300598, "perplexity": 5542.130627058132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663011588.83/warc/CC-MAIN-20220528000300-20220528030300-00559.warc.gz"} |
https://brilliant.org/practice/scientific-notation/ | ###### Waste less time on Facebook — follow Brilliant.
×
Basic Mathematics
# Scientific Notation
Which of the following correctly expresses $$3.102$$ in scientific notation?
Which of the following numbers is equal to $$2 . 174 \times 10 ^ 6$$?
Which of the following correctly expresses $$0. 3 134$$ in scientific notation?
Which of the following numbers is equal to $$2 . 204 \times 10 ^ {-2}$$?
Which of the following can not be a valid representation of $$464,700$$ in scientific notation?
×
Problem Loading...
Note Loading...
Set Loading... | 2018-01-22 18:31:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4264703392982483, "perplexity": 1899.0032178399674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891530.91/warc/CC-MAIN-20180122173425-20180122193425-00587.warc.gz"} |
http://math.stackexchange.com/questions/237183/recurrence-relation-of-integral | recurrence relation of integral
Consider the integral defined by
$$\displaystyle{ I_k( \phi) = \int_0^{\pi} \frac{ \cos(k\theta) - \cos( k \phi) }{ \cos \theta - \cos\phi} d \theta}$$
(a) Show that $I_k( \phi)$ satisfies the difference equation
$$\displaystyle { I_{n+2} ( \phi) - 2\cos \phi I_{n+1}( \phi)+ I_n( \phi)=0, \quad I_0 (\phi)=0 , \quad I_1( \phi) = \pi }$$
(b) Solve the difference equation in part (a) to find $I_n( \phi)$
..................................................................................................................................................
Can someone help with (a)?
-
Note: $I_{n+2} ( \phi) - 2\cos \phi I_{n+1}( \phi)+ I_n( \phi)$ is not an equation. Do you mean this is equal to zero? – Thomas Andrews Nov 14 '12 at 13:27
Also, you are missing an $x$ in the characteristic quadratic, $x^2-2(\cos \phi )x +1$ – Thomas Andrews Nov 14 '12 at 13:28
Note that $c_1$ and $c_2$ are not constants, they are functions of $\phi$. (Think of it as solving a recurrence for each $\phi$, getting different $c_i$). And the $c_i$ are not necessarily real, so while it is true that $c_1(\phi)+c_2(\phi)=0$ it is not true that $(c_1(\phi)+c_2(\phi))\cos \phi = \pi$ – Thomas Andrews Nov 14 '12 at 13:31
@ThomasAndrews: What is the other equation? – passenger Nov 14 '12 at 13:42
It's $c_1(\phi)x_1 + c_2(\phi)x_2 = \pi$. What else would it be? – Thomas Andrews Nov 14 '12 at 13:55
For $(a)$, first show that $$\cos (n+2)x - 2\cos x\cos(n+1)x + \cos nx = 0$$ for all $x$. This follows by almost direct application of the sum rules for $\cos$. Indeed, it might be easier to show if you write it as $\cos(m+1)x + \cos(m-1)x = 2\cos x\cos mx$ where $m=n+1$. The rest of $(a)$ follows with some manipulation. (It's not quite as easy as it looks.)
For $(b)$, you've assumed $c_1$ and $c_2$ are real values. They are not. They are possibly complex functions of $\phi$.
The actual resulting formula should be $$I_n(\phi)=\frac{\pi\sin n\phi}{\sin \phi}$$
One other thing to note is that if $\sin\phi = 0$ then $x_1=x_2$, so you have to adjust your general formula for the recurrence relationship to the case where your recurrence polynomial has repeated roots. Then $x_1=x_2=x=\pm 1$. If $x=+1$ then $I_n = c_0+nc_1$ and we get that $I_n = n\pi$. If $x=-1$, then $c_0=0$ and $c_1=-\pi$ and $I_n=(-1)^{n+1}\pi n$. This is actually just the limit - it is the value which makes $I_n(\phi)$ continuous at these values.
In the calculation for $(a)$, when you do the substitution listed at the top in the expression $\frac{\cos(n+2)\theta - \cos(n+2)\phi}{\cos\theta-\cos\phi}$ you get:
$$\frac{2\cos \theta \cos(n+1)\theta - \cos n\theta - (2\cos\phi\cos(n+1)\phi -\cos n\phi)}{\cos\theta-\cos\phi}$$
The trick is to write $\cos \theta = (\cos\theta - \cos\phi) + \cos\phi$. Substituting, we get:
$$2\cos(n+1)\theta + 2\cos\phi\frac{\cos(n+1)\theta - \cos(n+1)\phi}{\cos\theta-\cos\phi} - \frac{\cos n\theta -\cos n\phi}{\cos\theta -\cos\phi}$$
Then integrating, you get $$I_{n+2}(\phi)=\int_{0}^\pi 2\cos(n+1)\theta\ d\theta + 2\cos\phi I_{n+1}(\phi) - I_n(\phi)$$
But $\int_{0}^\pi 2\cos(n+1)\theta\ d\theta=0$.
So you are done.
-
Thank you very much for your time! – passenger Nov 14 '12 at 16:44 | 2015-11-28 06:52:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661204218864441, "perplexity": 109.82184306031775}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451648.66/warc/CC-MAIN-20151124205411-00008-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://www.w3spoint.com/coalesce-function-oracle | # COALESCE function in Oracle
COALESCE is an advanced function that the Oracle database supports. It is used to get the first non-null expression in the list. The COALESCE function is supported in the various versions of the Oracle/PLSQL, including, Oracle 12c, Oracle 11g, Oracle 10g and Oracle 9i.
Syntax:
COALESCE( expr_1, expr_2, ... expr_n )
Parameters:
expr_1, expr_2, … expr_n:
It is used to specify the expressions to be tested.
Example :
SELECT COALESCE( value1, value2, value3 ) result FROM values;
Explanation:
Here, each value will be compared one by one by the COALESCE function for non-null values. If each value is null, the result will be NULL. | 2021-09-19 01:30:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4314112067222595, "perplexity": 5302.106710660492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00521.warc.gz"} |
https://sts-math.com/post_555.html | In the given figure, the value of x is
I can see how you might have trouble with this one, because the drawing isn’t clear.
Here’s what it’s trying to show:
- The line ’a’-’b’ is the line where a wall meets the floor.
- The line to ’c’ is a vertical line drawn on the wall.
- The line to ’d’ is a line drawn on the floor.
If you can see it like that, then there are two giveaways for the value of ’x’:
One way:
’c’ is perpendicular to the floor. That’s the only way the angles
on the wall on each side of it. 3x and 3x. could be equal, and
they must be 90 degree angles, so ’x’ is 30 degrees.
The other way:
The line on the floor, to ’d’, is not perpendicular to the wall, because the angles
on the floor on each side of it are not equal. But the two of them do add up to
180 degrees, because the line ’a’-’b’ is a straight line.
So ’x’ is what’s left over when 150 degrees is taken away from 180 degrees.
’x’ is 30 degrees again.
$$150+3x+3x+x=360\\ 7x=210\\ x=30$$
RELATED: | 2019-01-18 16:01:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.663727879524231, "perplexity": 851.0020723791015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660175.18/warc/CC-MAIN-20190118151716-20190118173716-00634.warc.gz"} |
https://gmatclub.com/forum/if-a-2b-1-and-b-2-which-of-the-following-could-be-the-value-of-a-111860.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 21 Nov 2018, 00:42
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in November
PrevNext
SuMoTuWeThFrSa
28293031123
45678910
11121314151617
18192021222324
2526272829301
Open Detailed Calendar
• All GMAT Club Tests are Free and open on November 22nd in celebration of Thanksgiving Day!
November 22, 2018
November 22, 2018
10:00 PM PST
11:00 PM PST
Mark your calendars - All GMAT Club Tests are free and open November 22nd to celebrate Thanksgiving Day! Access will be available from 0:01 AM to 11:59 PM, Pacific Time (USA)
• Free lesson on number properties
November 23, 2018
November 23, 2018
10:00 PM PST
11:00 PM PST
Practice the one most important Quant section - Integer properties, and rapidly improve your skills.
If a^2b > 1 and b < 2, which of the following could be the value of a?
Author Message
TAGS:
Hide Tags
Manager
Joined: 10 Nov 2010
Posts: 190
Location: India
Concentration: Strategy, Operations
GMAT 1: 520 Q42 V19
GMAT 2: 540 Q44 V21
WE: Information Technology (Computer Software)
If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
Updated on: 20 Nov 2014, 06:10
3
8
00:00
Difficulty:
25% (medium)
Question Stats:
78% (01:50) correct 22% (02:19) wrong based on 339 sessions
HideShow timer Statistics
If a^2b > 1 and b < 2, which of the following could be the value of a?
A. 1/2
B. 1/4
C. -1/2
D. -2
E. 2/3
_________________
The proof of understanding is the ability to explain it.
Originally posted by GMATD11 on 04 Apr 2011, 03:59.
Last edited by Bunuel on 20 Nov 2014, 06:10, edited 1 time in total.
Renamed the topic and edited the question.
Retired Moderator
Joined: 20 Dec 2010
Posts: 1828
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
04 Apr 2011, 04:23
GMATD11 wrote:
If $$a^2*b>1 & b<2$$, which of the following could be the value of a?
a) 1/2
b) 1/4
c) -1/2
d) -2
e) 2/3
I will try the substitution method in this one:
A.
$$a=\frac{1}{2}$$
$$a^2*b=\frac{1}{4}*b$$
$$\frac{1}{4}*b>1$$
$$b>4$$
Not Possible. b must be less than 2.
B.
$$a=\frac{1}{4}$$
$$a^2*b=\frac{1}{16}*b$$
$$\frac{1}{16}*b>1$$
$$b>16$$
Not Possible. b must be less than 2.
C.
$$a=\frac{-1}{2}$$
$$a^2*b=\frac{1}{4}*b$$
$$\frac{1}{4}*b>1$$
$$b>4$$
Not Possible. b must be less than 2.
D.
$$a=-2$$
$$a^2*b=4*b$$
$$4*b>1$$
$$b>\frac{1}{4}$$
Possible. There are infinite numbers between $$\frac{1}{4}$$ and 2.
E.
$$a=\frac{2}{3}$$
$$a^2*b=\frac{4}{9}*b$$
$$\frac{4}{9}*b>1$$
$$b>\frac{9}{4}$$
$$b>2.25$$
Not Possible. b must be less than 2.
_________________
Retired Moderator
Joined: 16 Nov 2010
Posts: 1428
Location: United States (IN)
Concentration: Strategy, Technology
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
04 Apr 2011, 04:45
I too tried plugging numbers, however I luckily chose -2 first as rest others seemed to be fractions and again chose b as 1, so that the square comes out as +ve one of an integer, which apparently should be > 1, and it clicked
_________________
Formula of Life -> Achievement/Potential = k * Happiness (where k is a constant)
GMAT Club Premium Membership - big benefits and savings
Director
Joined: 01 Feb 2011
Posts: 664
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
04 Apr 2011, 08:18
a<-0.7 or a>0.7
Posted from my mobile device
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8578
Location: Pune, India
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
04 Apr 2011, 17:42
3
GMATD11 wrote:
If a^2b>1 and b<2, which of the following could be the value of a?
a) 1/2
b) 1/4
c) -1/2
d) -2
e) 2/3
You can also solve it using algebra:
$$a^2*b > 1$$ which implies $$b > \frac{1}{a^2}$$ (Since a^2 will be positive)
$$b < 2$$
So $$\frac{1}{a^2} < b < 2$$
Ignore b now. $$a^2 - \frac{1}{2} > 0$$
So $$a > 1/\sqrt{2}$$ or $$a < -1/\sqrt{2}$$
(You should be very comfortable arriving at this step from the step above. If you are not, check out the following post: inequalities-trick-91482.html?hilit=inequalities%20trick
Notice that $$1/\sqrt{2} = \sqrt{2}/2 = .707$$
The only value either less than -0.707 or greater than 0.707 is -2.
_________________
Karishma
Veritas Prep GMAT Instructor
GMAT self-study has never been more personalized or more fun. Try ORION Free!
Manager
Joined: 13 Aug 2012
Posts: 96
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
20 Dec 2014, 03:44
1
1
Given - a^(2b)>1, and b>2
But in order for a^(2b)>1, B cannot be less than equal to 0
So, the value of b is 0<b<2, meaning b=1
Which leads us to the following equation
a^2>1 => a>1 or a<-1
Only one option satisfies it, and that is D
Current Student
Joined: 12 Aug 2015
Posts: 2632
Schools: Boston U '20 (M)
GRE 1: Q169 V154
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
10 Mar 2016, 10:31
Intern
Joined: 09 Aug 2015
Posts: 4
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
19 May 2016, 02:40
given a^2 * b > 1 and b < 2
b must be positive given that a^2 is positive and whole product is greater that 1
so we can say 0< b < 2 hence b must be 1
so a^2 > 1, only one option fits i.e. a= -2
Manager
Status: Profile 1
Joined: 20 Sep 2015
Posts: 64
GMAT 1: 690 Q48 V37
GPA: 3.2
WE: Information Technology (Investment Banking)
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
21 Jul 2017, 07:25
VeritasPrepKarishma wrote:
GMATD11 wrote:
If a^2b>1 and b<2, which of the following could be the value of a?
a) 1/2
b) 1/4
c) -1/2
d) -2
e) 2/3
You can also solve it using algebra:
$$a^2*b > 1$$ which implies $$b > \frac{1}{a^2}$$ (Since a^2 will be positive)
$$b < 2$$
So $$\frac{1}{a^2} < b < 2$$
Ignore b now. $$a^2 - \frac{1}{2} > 0$$
So $$a > 1/\sqrt{2}$$ or $$a < -1/\sqrt{2}$$
(You should be very comfortable arriving at this step from the step above. If you are not, check out the following post: http://gmatclub.com/forum/inequalities- ... es%20trick
Notice that $$1/\sqrt{2} = \sqrt{2}/2 = .707$$
The only value either less than -0.707 or greater than 0.707 is -2.
how you get to know that question is not a^(2b) > 1 but $$a^2*b > 1$$
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8578
Location: Pune, India
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
22 Jul 2017, 05:18
jokschmer wrote:
VeritasPrepKarishma wrote:
GMATD11 wrote:
If a^2b>1 and b<2, which of the following could be the value of a?
a) 1/2
b) 1/4
c) -1/2
d) -2
e) 2/3
You can also solve it using algebra:
$$a^2*b > 1$$ which implies $$b > \frac{1}{a^2}$$ (Since a^2 will be positive)
$$b < 2$$
So $$\frac{1}{a^2} < b < 2$$
Ignore b now. $$a^2 - \frac{1}{2} > 0$$
So $$a > 1/\sqrt{2}$$ or $$a < -1/\sqrt{2}$$
(You should be very comfortable arriving at this step from the step above. If you are not, check out the following post: http://gmatclub.com/forum/inequalities- ... es%20trick
Notice that $$1/\sqrt{2} = \sqrt{2}/2 = .707$$
The only value either less than -0.707 or greater than 0.707 is -2.
how you get to know that question is not a^(2b) > 1 but $$a^2*b > 1$$
The formatting will be unambiguous in actual GMAT questions. If it is an exponent, it will be clearly shown.
_________________
Karishma
Veritas Prep GMAT Instructor
GMAT self-study has never been more personalized or more fun. Try ORION Free!
Non-Human User
Joined: 09 Sep 2013
Posts: 8869
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? [#permalink]
Show Tags
14 Sep 2018, 10:12
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Re: If a^2b > 1 and b < 2, which of the following could be the value of a? &nbs [#permalink] 14 Sep 2018, 10:12
Display posts from previous: Sort by | 2018-11-21 08:42:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7283613681793213, "perplexity": 4508.89768773308}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747369.90/warc/CC-MAIN-20181121072501-20181121094501-00210.warc.gz"} |
https://www.physicsforums.com/threads/how-to-solve-this-trig-equation.630487/ | # Homework Help: How to solve this trig equation
1. Aug 22, 2012
### lo2
1. The problem statement, all variables and given/known data
Solve this equation:
$cos^2(2x)=0,36$
For $x \in [-\pi;\pi]$
2. Relevant equations
-
3. The attempt at a solution
$cos^2(2x)=0,36 \Leftrightarrow cos(2x)=\sqrt{0,36} \Leftrightarrow 2x=cos^{-1}(\sqrt{0,36})$
And then I am not sure exactly how to proceed... When should I put in the $2p \pi$ where $x \in Z$, to get all of the possible solutions?
Last edited: Aug 22, 2012
2. Aug 22, 2012
### Staff: Mentor
Not true. cos(2x) can also be negative. In your second equation, you took the square root of the right side, but not the left side.
Also, you should simplify √(.36).
3. Aug 22, 2012
### lo2
I corrected the mistake about not taking the square root on either side. So you mean I should put ± in front of the square root?
4. Aug 22, 2012
### SammyS
Staff Emeritus
Yes, use the ± .
5. Aug 22, 2012
### Staff: Mentor
The domain for x is restricted to [$-\pi, \pi$], so you're going to get only a handful of solutions.
6. Aug 23, 2012
### lo2
Ok I have come up with this solution:
$\frac{cos^{-1}(\pm \sqrt{0,36})}{2}+p\pi$
Where the solutions are: $cos^{-1}(\sqrt{0,36})-\pi, cos^{-1}(-\sqrt{0,36}), cos^{-1}(\sqrt{0,36}), cos^{-1}(-\sqrt{0,36})+\pi$
Since the solutions have to be in the interval of -pi to pi.
7. Aug 23, 2012
### Staff: Mentor
Why do you keep writing √(.36)? That simplifies to an exact value. What is this value?
I think you would be better off by NOT using cos-1, since that will give you only one value. I would sketch a graph of y = cos(2x) on the interval [$-2\pi, 2\pi$] (since x $\in$ [$-\pi, \pi$]), and identify all of the points at which cos(2x) = ±B, where B is the simplified value of √(.36).
EDIT: Also, your work above suggests that there are four solutions. I get quite a few more than that.
Last edited: Aug 23, 2012 | 2018-04-26 08:14:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7154937386512756, "perplexity": 1190.4302078188266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948119.95/warc/CC-MAIN-20180426070605-20180426090605-00130.warc.gz"} |
https://dev.px4.io/en/simulation/ | # Simulation
Simulators allow PX4 flight code to control a computer modeled vehicle in a simulated "world". You can interact with this vehicle just as you might with a real vehicle, using QGroundControl, an offboard API, or a radio controller/gamepad.
Simulation is a quick, easy, and most importantly, safe way to test changes to PX4 code before attempting to fly in the real world. It is also a good way to start flying with PX4 when you haven't yet got a vehicle to experiment with.
PX4 supports both Software In the Loop (SITL) simulation, where the flight stack runs on computer (either the same computer or another computer on the same network) and Hardware In the Loop (HITL) simulation using a simulation firmware on a real flight controller board.
Information about available simulators and how to set them up are provided in the next section. The other sections provide general information about how the simulator works, and are not required to use the simulators.
## Supported Simulators
The following simulators work with PX4 for HITL and/or SITL simulation.
Simulator Description
Gazebo
This simulator is highly recommended.
A powerful 3D simulation environment that is particularly suitable for testing object-avoidance and computer vision. It can also be used for multi-vehicle Simulation and is commonly used with ROS, a collection of tools for automating vehicle control.
Supported Vehicles: Quad (Iris and Solo), Hex (Typhoon H480), Generic quad delta VTOL, Tailsitter, Plane, Rover, Submarine (coming soon!)
jMAVSim A simple multirotor simulator that allows you to fly copter type vehicles around a simulated world.
It is easy to set up and can be used to test that your vehicle can take off, fly, land, and responds appropriately to various fail conditions (e.g. GPS failure).
AirSim A cross platform simulator that provides physically and visually realistic simulations. This simulator is resource intensive, and requires a very significantly more powerful computer than the other simulators described here.
Supported Vehicles: Iris (MultiRotor model and a configuration for PX4 QuadRotor in the X configuration).
XPlane (HITL only) A comprehensive and powerful fixed-wing flight simulator that offers very realistic flight models.
Supported Vehicles: Plane
Instructions for how to setup and use the simulators are in the topics linked above.
The remainder of this topic is a "somewhat generic" description of how the simulation infrastructure works. It is not required to use the simulators.
All simulators communicate with PX4 using the Simulator MAVLink API. This API defines a set of MAVLink messages that supply sensor data from the simulated world to PX4 and return motor and actuator values from the flight code that will be applied to the simulated vehicle. The image below shows the message flow.
A simulator build of PX4 (both SITL and HITL) uses simulator_mavlink.cpp to handle these messages. Sensor data from the simulator is written to a dummy driver and appears "real" to PX4. All motors / actuators are blocked, but internal software is fully operational.
The messages are described below (see links for specific detail).
Message Direction Description
MAV_MODE:MAV_MODE_FLAG_HIL_ENABLED NA Mode flag when using simulation. All motors/actuators are blocked, but internal software is fully operational.
HIL_ACTUATOR_CONTROLS PX4 to Sim PX4 control outputs (to motors, actuators).
HIL_SENSOR Sim to PX4 Simulated IMU readings in SI units in NED body frame.
HIL_GPS Sim to PX4 The simulated GPS RAW sensor value.
HIL_OPTICAL_FLOW Sim to PX4 Simulated optical flow from a flow sensor (e.g. PX4FLOW or optical mouse sensor)
HIL_STATE_QUATERNION Sim to PX4 Contains the actual "simulated" vehicle position, attitude, speed etc. This can be logged and compared to PX4's estimates for analysis and debugging (for example, checking how well an estimator works for noisy (simulated) sensor inputs).
HIL_RC_INPUTS_RAW Sim to PX4 The RAW values of the RC channels received.
By default, PX4 uses commonly established UDP ports for MAVLink communication with ground control stations (e.g. QGroundControl), Offboard APIs (e.g. DroneCore, MAVROS) and simulator APIs (e.g. Gazebo). These ports are:
• Port 14540 is used for communication with offboard APIs. Offboard APIs are expected to listen for connections on this port.
• Port 14550 is used for communication with ground control stations. GCS are expected to listen for connections on this port. QGroundControl listens to this port by default.
• Port 14560 is used for communication with simulators. PX4 listens to this port, and simulators are expected to initiate the communication by broadcasting data to this port.
The ports for the GCS and offboard APIs are set in configuration files, while the simulator broadcast port is hard-coded in the simulation MAVLink module.
## SITL Simulation Environment
The diagram below shows a typical SITL simulation environment for any of the supported simulators. The different parts of the system connect via UDP, and can be run on either the same computer or another computer on the same network.
• PX4 uses a simulation-specific module to listen on UDP port 14560. Simulators connect to this port, then exchange information using the Simulator MAVLink API described above. SITL and the simulator can run on either the same computer or different computers on the same network.
• PX4 uses the normal MAVLink module to connect to GroundStations (which listen on port 14550) and external developer APIs like DroneCore or ROS (which listen on port 14540).
• A serial connection is used to connect Joystick/Gamepad hardware via QGroundControl.
If you use the normal build system SITL make configuration targets (see next section) then both SITL and the Simulator will be launched on the same computer and the ports above will automatically be configured. You can configure additional MAVLink UDP connections and otherwise modify the simulation environment in the build configuration and initialisation files.
### Starting/Building SITL Simulation
The build system makes it very easy to build and start PX4 on SITL, launch a simulator, and connect them. For example, you can launch a SITL version of PX4 that uses the EKF2 estimator and simulate a plane in Gazebo with just the following command (provided all the build and gazebo dependencies are present!):
make posix_sitl_ekf2 gazebo_plane
It is also possible to separately build and start SITL and the various simulators, but this is nowhere near as "turnkey".
The syntax to call make with a particular configuration and initialisation file is:
make [CONFIGURATION_TARGET] [VIEWER_MODEL_DEBUGGER]
where:
• CONFIGURATION_TARGET: has the format [OS][_PLATFORM][_FEATURE]
• OS: posix, nuttx, qurt
• PLATFORM: SITL (or in principle any platform supported among the different OS: bebop, eagle, excelsior, etc.)
• FEATURE: A particular high level feature - for example which estimator to use (ekf2, lpe) or to run tests or simulate using a replay.
You can get a list of all available configuration targets using the command:
make list_config_targets
• VIEWER_MODEL_DEBUGGER: has the format [SIMULATOR]_[MODEL][_DEBUGGER]
• SIMULATOR: This is the simulator ("viewer") to launch and connect: gazebo, jmavsim
• MODEL: The vehicle model to use (e.g. iris, rover, tailsitter, etc). This corresponds to a specific initialisation file that will be used to configure PX4. This might define the start up for a particular vehicle, or allow simulation of multiple vehicles (we explain how to find available init files in the next section).
• DEBUGGER: Debugger to (optionally) use: none, ide, gdb, lldb, ddd, valgrind, callgrind. For more information see Simulation Debugging.
You can get a list of all available VIEWER_MODEL_DEBUGGER options using the command:
make posix list_vmd_make_targets
Notes:
• Most of the values in the CONFIGURATION_TARGET and VIEWER_MODEL_DEBUGGER have defaults, and are hence optional. For example, gazebo is equivalent to gazebo_iris or gazebo_iris_none.
• You can use three underscores if you want to specify a default value between two other settings. For example, gazebo___gdb is equivalent to gazebo_iris_gdb.
• You can use a none value for VIEWER_MODEL_DEBUGGER to start PX4 and wait for a simulator. For example start PX4 using make posix_sitl_default none and jMAVSim using ./Tools/jmavsim_run.sh.
### Init File Location
The settings for each configuration target are defined in appropriately named files in /Firmware/cmake/configs. Within each file there is a setting config_sitl_rcS_dir that defines the location of the folder where the configuration stores its init files.
In the cmake config file for posix_sitl_ekf2 you can see that the init file will be stored in the folder: Firmware/posix-configs/SITL/init/ekf2/.
set(config_sitl_rcS_dir
posix-configs/SITL/init/ekf2
)
Generally the init files are located using a consistent folder naming convention. For example, make posix_sitl_ekf2 gazebo_iris corresponds to the following folder structure:
Firmware/
posix-configs/ (os=posix)
SITL/ (platform=sitl)
init/
ekf2/ (feature=ekf2)
iris (init file name)
### Example Startup File
A slightly reduced version of the startup file for make posix_sitl_ekf2 gazebo_iris (/Firmware/posix-configs/SITL/init/ekf2/iris) is shown below.
uorb start
dataman start
param set BAT_N_CELLS 3
param set CAL_ACC0_ID 1376264
param set CAL_ACC0_XOFF 0.01
...
...
param set SYS_MC_EST_GROUP 2
param set SYS_RESTART_TYPE 2
replay tryapplyparams
simulator start -s
tone_alarm start
gyrosim start
accelsim start
barosim start
gpssim start
pwm_out_sim mode_pwm
sensors start
commander start
land_detector start multicopter
navigator start
ekf2 start
mc_pos_control start
mc_att_control start
mavlink start -u 14556 -r 4000000
mavlink start -u 14557 -r 4000000 -m onboard -o 14540
mavlink stream -r 50 -s POSITION_TARGET_LOCAL_NED -u 14556
mavlink stream -r 50 -s LOCAL_POSITION_NED -u 14556
mavlink stream -r 50 -s GLOBAL_POSITION_INT -u 14556
mavlink stream -r 50 -s ATTITUDE -u 14556
mavlink stream -r 50 -s ATTITUDE_QUATERNION -u 14556
mavlink stream -r 50 -s ATTITUDE_TARGET -u 14556
mavlink stream -r 50 -s SERVO_OUTPUT_RAW_0 -u 14556
mavlink stream -r 20 -s RC_CHANNELS -u 14556
mavlink stream -r 250 -s HIGHRES_IMU -u 14556
logger start -e -t
replay trystart
Note the sections that set parameters, start simulator drivers and other modules. A few of the more relevant lines for simulation are highlighted below.
1. Simulator being started:
simulator start -s
2. PWM out mode being set for simulator:
pwm_out_sim mode_pwm
• This line starts the MAVLink instance for connecting to offboard APIs. It broadcasts on 14540 and listens for responses on 14557. The -m onboard flag specifies a set of messages that will be streamed over the interface.
mavlink start -u 14557 -r 4000000 -m onboard -o 14540
• This line starts MAVLink instance for connecting to QGroundControl/GCSs. PX4 listens for messages on port 14556.
mavlink start -u 14556 -r 4000000
• The broadcast port is not explicitly set (the default is used: 14550).
• The messages that are streamed over this interface are specified using mavlink stream as shown below:
mavlink stream -r 50 -s POSITION_TARGET_LOCAL_NED -u 14556
mavlink stream -r 50 -s LOCAL_POSITION_NED -u 14556
...
## HITL Simulation Environment
With Hardware-in-the-Loop (HITL) simulation the normal PX firmware is run on real hardware. QGroundControl is connected to the physical hardware over USB and acts as a gateway to forward data between the simulator, PX4 and any offboard API.
The HITL Simulation Environment in documented in: HITL Simulation. | 2018-01-20 19:20:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1767081767320633, "perplexity": 10910.170656501494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889681.68/warc/CC-MAIN-20180120182041-20180120202041-00320.warc.gz"} |
https://questions.examside.com/past-years/gate/gate-ce/strength-of-materials-or-solid-mechanics/shear-stress-in-beams | GATE CE
Strength of Materials Or Solid Mechanics
Shear Stress In Beams
Previous Years Questions
## Marks 1
The possible location of shear center of the channel section, shown below, is ...
A symmetric $${\rm I}$$-section (with width of each flange $$=50mm$$, thickness of each flange $$=10mm,$$ depth of web $$=100mm,$$ and thickness of we...
In a section, shear center is a point through which, if the resultant load passes, the section will not be subjected to any
For a given shear force across a symmetrical $${\rm I}$$-section, the intensity of shear is maximum at
For a given shear force across a symmetrical $$'{\rm I}'$$ section the intensity of shear stress is maximum at the
## Marks 2
The point within the cross sectional plane of a beam through which the resultant of the external loading on the beam has to pass through to ensure pur...
The shear stress at the neutral axis in a beam of triangular section with a base of $$40$$ $$mm$$ and height $$20$$ $$mm,$$ subjected to a shear force...
$${\rm I}$$-section of a beam is formed by gluing wooden planks as shown in the figure below. If this beam transmits a constant vertical shear force...
If a beam of rectangular cross-section is subjected to a vertical shear force $$V,$$ the shear force carried by the upper one-third of the cross-sect...
EXAM MAP
Joint Entrance Examination | 2023-03-24 07:17:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7175197005271912, "perplexity": 1266.864058953154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00152.warc.gz"} |
https://cs.stackexchange.com/tags/np-hard/hot?filter=month | # Tag Info
9
As written, the question is a bit trivial: if NP = NP-complete, then since P $\subseteq$ NP we get P=NP since every problem in P would be NP-complete. I suspect what's meant, though, is the following: Suppose there are no NP-intermediate problems; that is, that every problem in NP is either in P or is NP-complete. What does that tell us about P vs. NP? ...
5
The problem in which you must select $k$ vertices to maximize the number of vertices dominated is known as the budgeted dominating set problem. The problem or its connected variant is studied at least by Lamprou, Sigalis and Zissimopoulos [1] and Khuller, Purohit and Sarpatwar [2]. It also appears in the recent survey of Narayanaswamy and Vijayaragunathan [3]...
3
Since $P \subseteq NP$, if the statement is false, it cannot become true by changing $P$ to $NP$. What your teacher is doing here is trying to illustrate a very common fallacy amongst beginning CS-learners. He is giving one potential way to solve a problem, and he notes that this way needs more resources than we want to make available (sometimes it is ...
3
Very simple: We can sort an array without checking every possible permutation. (Many times we don’t check any permutation of the array, we just re-arrange the order of items so we can guarantee the array is sorted, without ever checking it. )
2
You are confusing NP and NP-hard in a couple places. For example, let $A$ be the problem of deciding ATL*, which is 2EXPTIME-complete. $A$ is NP-hard and polynomial-time many-one reduces to its complement, but is neither in NP nor in co-NP by the time hierarchy theorem. Recall that an NP-complete problem is one that is in NP and is NP-hard. For every NP-...
Only top voted, non community-wiki answers of a minimum length are eligible | 2020-01-28 18:17:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7750347256660461, "perplexity": 679.8083186081548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251779833.86/warc/CC-MAIN-20200128153713-20200128183713-00493.warc.gz"} |
https://www.clutchprep.com/physics/practice-problems/139168/when-an-object-is-placed-farther-from-a-convex-mirror-than-the-focal-length-the- | Ray Diagrams For Mirrors Video Lessons
Concept
# Problem: When an object is placed farther from a convex mirror than the focal length, the image is:A) smaller and realB) larger and virtualC) larger and realD) smaller and invertedE) smaller and virtual
###### FREE Expert Solution
The image is smaller because the magnification of the convex mirror is always less than 1.
89% (166 ratings)
###### Problem Details
When an object is placed farther from a convex mirror than the focal length, the image is:
A) smaller and real
B) larger and virtual
C) larger and real
D) smaller and inverted
E) smaller and virtual | 2021-09-25 17:40:39 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8543442487716675, "perplexity": 3694.5758842277482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057733.53/warc/CC-MAIN-20210925172649-20210925202649-00686.warc.gz"} |
https://socratic.org/questions/how-do-you-find-the-coterminal-with-the-angle-45-circ | # How do you find the coterminal with the angle -45^circ?
Co-terminal of angle $- {45}^{\circ} = - {45}^{\circ} \pm \left(n \cdot {360}^{\circ}\right)$ where n is an integer. | 2022-05-29 05:40:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8440621495246887, "perplexity": 327.1178570899011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00091.warc.gz"} |
https://cs.stackexchange.com/tags/asymptotics/hot?filter=month | # Tag Info
Usually we call statement $A$ stronger than $B$ when $A$ implies $B$: $A \Rightarrow B$ (weaker-stronger). In other words, $B$ is weaker than $A$. When the presenter is speaking about linear time for partition, this is a stronger statement than $O(n)$ time. All linear functions are in $O(n)$, but it also contains non-linear functions. For example: $\sin n, \... 3 The partition needs really linear time Here, the presenter meant that partition takes$\Omega(n)$time. not just$O(n)$time Here, the presenter meant that this is a loose or weak statement. A stronger statement would be that partition takes$\Omega(n)$and$O(n)$time, which is equivalent to$\Theta(n)$, as you are saying. 2 For sufficiently large values of$n$, and$b>0$: $$( \log^*n )! < ( \log \log n )! < (\log \log n)^{\log \log n} = 2^{(\log\log n) \log \log \log n} \in o(2^{b \log n}) \subset o( (n \log n)^b ).$$ 1 Big-$O$is the set of functions $$O(f)=\{g\colon \exists C>0, \exists N \in \mathbb{N}, \forall n > N, g(n) \leqslant Cf(n) \}$$ So we can write$100n+5 \leqslant 105\cdot n$, taking$C=105, N=1$, and we obtain $$100n+5 \in O(n)$$ 1 Your guess works: if$T(n) \le c n^3$and$c \ge 2\$ then $$T(n) = 4c \frac{n^3}{8} + n^3 = \frac{c}{2} n^3 + n^3 = cn^3 \left(\frac{1}{2}+\frac{1}{c} \right) \le cn^3.$$ | 2021-01-16 00:51:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998133778572083, "perplexity": 2267.253409635633}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703497681.4/warc/CC-MAIN-20210115224908-20210116014908-00649.warc.gz"} |
https://www.biostars.org/p/415520/ | Finding over and under-represented interactions using HOMER
0
0
Entering edit mode
23 months ago
pixie@bioinfo ★ 1.4k
I am analyzing Hi-C data with HOMER. My experimental set up is time-point wise analysis of stress (0,1 and 6 hrs). For a differential analysis, say 6 hrs vs 0 hr, how can I find interactions which are over-represented, under_represented and specific to only one condition ? I had initially just used the FDR values to fish out significant interactions. There is a confusion in my lab regarding this issue and we are stuck at a critical point. Kindly help.
I have the following important columns in the interaction table for 6 hrs vs 0 hr:
HiCseq_result_InteractionID
HiCseq_result_Total.Read.1
HiCseq_result_Total.Read.2
HiCseq_result_Z.score
HiCseq_result_LogP
HiCseq_result_FDR.Benjamini
HiCseq_result_Bg.Interaction.Reads
HiCseq_result_Bg..2pected.Reads
HiCseq_result_Bg.Z.score
HiCseq_result_Bg.LogP
HiCseq_result_Bg.Total.Reads.Peak1
HiCseq_result_Bg.Total.Reads.Peak2
HiCseq_result_LogP.vs..Bg
HiCseq_result_Z.score.Difference.vs..Bg
homer hic • 395 views
0
Entering edit mode
Not familiar with HOMER's ability towards HiC but I suggest you use a proper statistical framework for this such as diffHiC. Like all tools from Aaron Lun the documentation is extensive and high quality. | 2021-12-01 10:10:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4696047902107239, "perplexity": 3422.7149823112836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359976.94/warc/CC-MAIN-20211201083001-20211201113001-00476.warc.gz"} |
https://www.tensorflow.org/versions/r1.8/api_docs/python/tf/train/maybe_batch | # tf.train.maybe_batch
tf.train.maybe_batch(
tensors,
keep_input,
batch_size,
capacity=32,
enqueue_many=False,
shapes=None,
allow_smaller_final_batch=False,
shared_name=None,
name=None
)
See the guide: Inputs and Readers > Input pipeline
Conditionally creates batches of tensors based on keep_input.
See docstring in batch for more details.
#### Args:
• tensors: The list or dictionary of tensors to enqueue.
• keep_input: A bool Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates True, then tensors are all added to the queue. If it is a vector and enqueue_many is True, then each example is added to the queue only if the corresponding value in keep_input is True. This tensor essentially acts as a filtering mechanism.
• batch_size: The new batch size pulled from the queue.
• num_threads: The number of threads enqueuing tensors. The batching will be nondeterministic if num_threads > 1.
• capacity: An integer. The maximum number of elements in the queue.
• enqueue_many: Whether each tensor in tensors is a single example.
• shapes: (Optional) The shapes for each example. Defaults to the inferred shapes for tensors.
• dynamic_pad: Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
• allow_smaller_final_batch: (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue.
• shared_name: (Optional). If set, this queue will be shared under the given name across multiple sessions.
• name: (Optional) A name for the operations.
#### Returns:
A list or dictionary of tensors with the same types as tensors.
#### Raises:
• ValueError: If the shapes are not specified, and cannot be inferred from the elements of tensors. | 2018-08-20 12:49:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5082001090049744, "perplexity": 3358.698889351542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216453.52/warc/CC-MAIN-20180820121228-20180820141228-00336.warc.gz"} |
http://www.physicsforums.com/showthread.php?p=3232615 | # Prime Numbers
by EIRE2003
Tags: numbers, prime
P: 3 In this case the number as a whole changes from 11 to 11/3! but 11 itself doesn't change at all. If it were 25 and I divided it by 120 it would change to 5 and not remain the same. Basically I can write a fraction of X/(Square root of X rounded down)! on a piece of paper or calculator and change it to smaller numbers on both sides of the division line by either myself or the calculator if it is not prime.
P: 70 I believe PrimeNumbers wants to say that if $$GCD(N,[\sqrt{N}]!) = 1$$ then N is prime.
HW Helper
P: 805
Quote by atomthick I believe PrimeNumbers wants to say that if $$GCD(N,[\sqrt{N}]!) = 1$$ then N is prime.
This is true, and can be proved using prime decomposition. Is it practical though? I'm not sure. If you want to determine if a humungous number is prime, calculating factorials and then the gcd can be a very excrutiating process.
P: 70 Clearly it's not computational feasible for large numbers, however there are some interesting results for example if $$gcd(N, [\sqrt{N}]!) = P, P > 1$$ then P is a factor of N. It could become computational feasible if someone finds good algorithms for adding, subtracting and finding modulus that work in the factorial base (Cantor discovered that we can write any number in factorial base, for example 15 = 1! + 1*2! + 2*3!). Because we can easily find the representation of N in factorial base all we would need is fast computational algorithms for this base. P.S. EIRE2003 see how many interesting questions are prime numbers raising? This kind of questions and their answers have made great improvements allover mathematics! Those numbers look uninteresting until you ask a question about them, try it.
PF Gold
P: 1,930
Quote by PrimeNumbers DIVIDE X by (SQUARE ROOT X)! ! BEING A FUNCTION ON YOUR CALCULATOR THAT SUMS 1 x 2 x 3 etc. IF PRIME THEN X WILL NOT BE DIVISIBLE BY ANY OF THE NUMBERS MULTIPLIED BY EACH OTHER BELOW THE SQUARE ROOT OF X AND SO WILL REMAIN UNCHANGED BY THE DIVISION. IF NOT PRIME THEN ONE OF IT's FACTORS CAN BE FOUND BELOW THE SQUARE ROOT OF IT AND THE TOP HALF OF THE EQUATION NAMELY X WILL BE DIVIDED BY IT AND REDUCED, OTHERWISE IT'S PRIME!
Huh? Take off the caps lock and size changes, please. And explain better. Have a comma: ","
HW Helper PF Gold P: 1,899 It was surely not for any of the applications that have been mentioned. It was cultivated for centuries before they were dreamt of. And also prime numbers specifically are not really essentially connected with cryptography. It is just that factorisation into prime numbers is one, just one, example of a hard (computationally very long) problem whose inverse (multiplying the factors) is not hard, if I understand. There are other such hard problems ready to take over for cryptography if ever anyone cracks the factorisation problem. I think of it as having a pile of pebbles, can I arrange them in a regularly spaced rectangle? If not I have a prime number of pebbles. Could be tempted to wonder if it is worthy of a grown man's attention. Tempted to believe that it would be if it were simple - could be explained, followed, carried in the head it would be revealing of a structure. But if it is so difficult and complicated that no one understands the solution when it is found, will it be revealing in the same way? I believe this question is discussed about some of today's very difficult proofs. Unless it throws light on other problems whose significance is more apparent. We are told this would be so, but I suggest we do need to be told.
P: 221
Quote by DeaconJohn Now you've got to admit that's incredible. Why should the distribution of the prime numbers have anything to do with an infinite sum of factorials? As far as I know, that is a mystery that has not been completely explained by what we know about mathematics so far. It's is only relatively recent (say 100 years ago) that mathematicians were able to prove that the factorials and the primes are related as described above. So, it's not suprising that there is still some mystery surrounding "the real reason why."
didn't ken ono recently establish something like 'factorials of primes follow a fractal pattern'?
can someone post the proof please?
Related Discussions Calculus & Beyond Homework 6 General Math 10 Linear & Abstract Algebra 0 Linear & Abstract Algebra 5 Linear & Abstract Algebra 44 | 2014-03-11 14:53:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6136550903320312, "perplexity": 514.9034136061675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011213270/warc/CC-MAIN-20140305092013-00060-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://gmatclub.com/blog/category/blog/gmat-tests/page/33/ | # GMAT Question of the Day (Feb 10)
- Feb 10, 02:00 AM Comments [0]
Math At Daifu university, 40% of all students are members of both a chess club and a swim team. If 20% of members of the swim team are not members of the chess club, what percentage of all Daifu students are members of the swim team? A....
# GMAT Question of the Day (Feb 9)
- Feb 9, 02:00 AM Comments [0]
Math Three piles of 7 beans each are to be made from 10 red, 5 yellow, and 6 green beans. If all of the beans must be used and each stack must contain at least one bean of each color, then what is the maximum number...
# GMAT Question of the Day (Feb 6)
- Feb 6, 02:00 AM Comments [0]
Math If the product of two integers and is negative, what is the value of ? (1) (2) ...
# GMAT Question of the Day (Feb 5)
- Feb 5, 02:00 AM Comments [0]
Math What is the value of ? (1) (2) Question Discussion & Explanation Correct Answer - A - (click and drag your mouse...
# GMAT Question of the Day (Feb 4)
- Feb 4, 02:00 AM Comments [0]
Math If a cube with the length of the side of 4 cm is cut into smaller cubes with the length of the side of 1 cm, then what is the percentage increase in the surface area of the resulting cubes? A. 4% B. 166% C. 266% D. 300% E. 400% Question...
# GMAT Question of the Day (Feb 3)
- Feb 3, 02:00 AM Comments [0]
Math How many times will the digit 7 be written when listing the integers from 1 to 1000? A. 110 B. 111 C. 271 D. 300 E. 304 Question Discussion & Explanation Correct Answer - D - (click and drag your mouse to see the answer) GMAT Daily Deals Manhattan GMAT: 99 percentile teachers and...
# GMAT Question of the Day (Feb 2)
- Feb 2, 02:00 AM Comments [0]
Math The population of Linterhast was 3,600 people in 1990 and 4,800 people in 1993. If the population growth rate per thousand is constant, then what will be the population in 1996? A. 6,000 B. 6,400 C. 7,200 D. 8,000 E. 9,600 Question Discussion & Explanation Correct Answer - B - (click and...
# GMAT Question of the Day (Jan 30)
- Jan 30, 02:00 AM Comments [0]
Math Which expression has the greatest value? A. B. C. D. $4^{100}$
# GMAT Question of the Day (Jan 29)
- Jan 29, 02:00 AM Comments [0]
Math A ship is transporting several cats and a crew (sailors, a cook, and a one-legged captain) to a nearby port. If these passengers combined have 15 heads and 41 legs, then how many cats is the ship transporting? A. 3 B. 5 C. 6 D. 7 E. 8 Question Discussion &...
# GMAT Question of the Day (Jan 28)
- Jan 28, 02:00 AM Comments [0]
Math If and are positive integers is an integer? (1) is a multiple of 14 (2) is a divisor of 14 Question Discussion & Explanation Correct Answer - C - (click and drag your mouse... | 2016-05-06 21:31:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6103373765945435, "perplexity": 3241.1016227758723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461862134822.89/warc/CC-MAIN-20160428164854-00092-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://veryfatoldman.blogspot.sg/2016/04/ | ## Saturday, April 23, 2016
### Malaysian police investigating alleged caning of girl, 8, by tuition teacher
Source Website: http://news.asiaone.com/news/malaysia/malaysian-police-investigating-alleged-caning-girl-8-tuition-teacher
By The Star, Asia News Network, Saturday, 23 April 2016
PHOTO: The eight-year-old girl had welts on her hands and legs when she came home from the centre in Malacca on 12 April 2016.
Photo: The Star/ANN
https://3.bp.blogspot.com/-koVjQdOw1lc/Vxt1U5aR0-I/AAAAAAAAjrY/7XDqBhJ56hoZr66QHchTAlBdZ7H-ct5BgCLcB/s1600/caning-1.jpg
http://www.straitstimes.com/sites/default/files/styles/article_pictrure_780x520_/public/articles/2016/04/23/caning.jpg?itok=mr04Pu9P
http://www.straitstimes.com/asia/se-asia/malaysian-police-investigating-alleged-caning-of-girl-8-by-tuition-teacher
JOHOR BARU - The Malaysian police are investigating an incident in which a tuition teacher allegedly caned an eight-year-old girl for not finishing her homework.
The girl's stepfather said the Year Two pupil had welts on her hands and legs when she came home from the centre in Malacca at about 7.15pm on 12 April 2016.
He said the girl was sent there at around 2pm.
"I was shocked to find the marks and called the centre.
"The teacher said my stepdaughter is slow in learning and did not complete her homework. That's why they beat her.
"Although they had told us they were using that method to teach the children, I did not think they would beat her to that extent," he said.
He was speaking at a press conference by Pasir Gudang Malaysian Chinese Association Public Complaints Bureau deputy chairman Lim Thow Siang in Johor Baru earlier this week.
When contacted, the girl's grandfather, 63, who lodged a police report at the Tengkera station in Malacca on April 13, said that he and his wife have been looking after her since she was a baby, at their home in Malacca.
"My daughter works in Singapore and lives in Johor Baru with the girl's stepfather where he runs a business," he explained.
He alleged that this was not the first time the girl has been caned at the tuition centre, "but usually they only beat her on the hands".
The girl has been taken for a checkup at the Malacca Hospital, he added.
Malacca CPO Deputy Comm Datuk Ramli Din said he has instructed his officers to arrest the teacher.
"We are investigating the incident based on the report lodged by the grandfather," he said.
State Education Department Academic Management Sector assistant director (chief of Mathematics) Shahilan Abdul Halim said using the cane on children should be the last resort after other methods of disciplining them fails.
He said that in public schools, only the principal or headmaster are authorised to cane students if deemed necessary and backed by solid reasons.
"Discipline by caning should be the last resort after all other efforts fail," he said after attending the closing ceremony of Johor Corporation Tuition Project Teachers Enhancment workshop in Kota Tinggi on Friday.
"The approaches taken by private institutions are not under our jurisdiction... Parents should ensure the centres are registered with the authority and have skilful and competent teachers to handle the students," he added.
By The Star, Asia News Network, Saturday, 23 April 2016
PHOTO: Are parents being pushed to insanity?
It may seem preposterous for adults in these days to allow others to hurt their children, but when one analyses, could four adults (the parents and the grandparents) and a horde of other parents who send their children to the same centre, be insane completely?
Posted by Hornbill Unleashed on 23 April 2016 at 8:01 AM
https://2.bp.blogspot.com/-bcMVPXpZJKQ/Vxt1T0zz9gI/AAAAAAAAjrM/vwlygsD7VKkoPIdWpCHVpa7_jZ7lgg7lACLcB/s1600/Articlecaning.jpeg
https://i1.wp.com/www.theheatmalaysia.com/Documents/Article/514143/Articlecaning.jpeg
https://hornbillunleashed.wordpress.com/2016/04/23/are-parents-being-pushed-to-insanity/
PHOTO: The world with people and more terrible characteristics. Have nothing to do with them.
Picture posted by edwin2295 on 18 May 2014
https://2.bp.blogspot.com/-WVKQkgCWpYI/Vxt1T3f6zqI/AAAAAAAAjrQ/2RpzwQNkm_Epa4dYh9XE88cSibqNp9YEgCLcB/s1600/07-1.jpg
https://bajurtov.com/2014/05/18/
### Monstrous monitor lizard feasts on cat at Japanese Garden in Jurong East
Source Website: http://news.asiaone.com/news/singapore/monstrous-monitor-lizard-feasts-cat-japanese-garden-jurong-east
By Stomp, Friday, 22 April 2016
PHOTO: A man was jogging in Japanese Garden, a park located in Jurong East, on Thursday (21 April 2016) when he noticed a two-metre-long monitor lizard eating a stray cat.
Photo Source: Stomp
https://1.bp.blogspot.com/-fVQFE-nnwQ8/VxthaMg1rzI/AAAAAAAAjq4/JPJP6m-KhW8n4N3OOYF6pL1NKphasT_OACLcB/s1600/monitorlizardcat_01.jpg
http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Apr2016/monitorlizardcat_01.jpg?itok=brE3GxOx
http://news.asiaone.com/news/singapore/monstrous-monitor-lizard-feasts-cat-japanese-garden-jurong-east
A man was jogging in Japanese Garden, a park located in Jurong East, on Thursday (21 April 2016) when he noticed a two-metre-long monitor lizard.
Upon getting closer to the reptile, he realised that it had made a meal out of a stray cat.
He captured the episode on video and sent it to citizen journalism website Stomp.
He said: "Yesterday evening while jogging in Japanese Garden, I stumbled upon a two-metre-long monitor lizard feasting on its prey.
PHOTO: A two-metre-long monitor lizard feasting on its prey in Japanese Garden.
Photo Source: Stomp
https://1.bp.blogspot.com/-MfEWL5pS5j0/VxthaGyjr4I/AAAAAAAAjqw/gsSrZLhiCOMJMdQ97iSFGRY32Kuf1_ESQCLcB/s1600/monitorlizardcat_02.jpg
http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Apr2016/monitorlizardcat_02.jpg?itok=1HA8z4Fl
http://news.asiaone.com/news/singapore/monstrous-monitor-lizard-feasts-cat-japanese-garden-jurong-east
"The prey was dead but from far, I could not make up what it was. I moved closer till I realised it was a stray cat.
"It is pretty uncommon to see animal killings in its natural habitat, especially in Singapore.
"While on my way out of the garden, I saw a stray Siamese cat. Unfortunately I don't speak cat language, otherwise I would have warned it about the cat killer roaming in the park."
By Stomp, Friday, 22 April 2016
PHOTO: They don't run too far, and whilst it is very fast compared to their usual shuffle it's not faster than an average adult could run in short bursts. However, it probably is faster than your hypothetical unfit guy can run. I never saw a dragon run for more than about 30 seconds so he doesn't need to be miles away but a good 50 metres or so should do the trick. - Pooky Hesmondhalgh, Social Media & Eating Disorders Specialist
Komodo Dragons can sprint short distances at about 10-11 mph. I am not sure what 'short distances' mean, so lets say 150 feet.
The qualifying time for the NY Marathon is 9 mph which is roughly twice the speed of a walking pace. Most people's (in decent shape) sprinting probably is about 15 mph, with world records being 23 mph. Let's say Dennis (or Newman) has a sprint speed of 12 mph for 150 feet.
So, if safety was 150 feet away, Dennis would make it without a head start at all. - Andrew Gutsch, Retail Loss & Liability
Picture posted by A princess (Russian: Принцесса) on 9 February 2013
https://2.bp.blogspot.com/-LIvRrIbHZbA/VxthaPi9KyI/AAAAAAAAjq0/ZjhUaUcMmBg9rQTCRDYiiuyVluCWurYlQCLcB/s1600/img5.jpg
http://cl.rushkolnik.ru/tw_files2/urls_81/2097/d-2096910/img5.jpg
http://talkyland.com/talky/20222/?page=1#298651
Reference
### 2 answers to a Mathematics question leave many dumbfounded
By stephluo@sph.com.sg, SPH DIGITAL NEWS, Friday, 22 April 2016
PHOTO: What does 8+11 equal to?
Photo: Randall Jones' Facebook on Monday
https://1.bp.blogspot.com/-2Z6z8MGAEWM/VxrtnVfZ3yI/AAAAAAAAjqc/XF2VijA-PKQ1JqvItgnnD7-XYSC9od0XACLcB/s1600/mathsqtn1200_fb.jpg
http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Apr2016/mathsqtn1200_fb.jpg?itok=ppJKmxpB×tamp=1461299181
What does 8+11 equal to?
Without thinking too hard, almost everyone can give a straightforward answer that it is 19, right? Well, not quite.
A mathematics question recently posted on Facebook has left many confused.
The post by Randall Jones on Monday (April 18) shows four sets of equations, with the first being "1+4=5" and the last being "8+11=?".
Can you solve the mathematics question below?
PHOTO: So what is your answer? How many times did you have to try? How long did you take to solve it?
Picture posted by Photo Source: Shutterstock, Wikimedia Commons, The New Paper, published on 23 February 2016
https://3.bp.blogspot.com/-t0cdm5vG9xg/VxrtmtfZVrI/AAAAAAAAjqU/N5ORf4KR5PQdsckz2zg20Yh0hRs0k5oTACLcB/s1600/6_Baby_Shutterstock.jpg
http://news.asiaone.com/sites/default/files/styles/w641/public/original_images/Feb2016/6_Baby_Shutterstock.jpg?itok=GJF4DVWR
Apparently, there are two methods to solving this question.
Firstly, if you interpret the equations and see them as a run-on from the previous, the sum on the right is added to the equation in the next line.
1+4=5
5+2+5=12
12+3+6=21
Using this method, 21+8+11 would equal 40.
And what about the other method?
Simply use multiplication. For every line, if you multiply the second number by the first number before adding the two, you will notice a pattern.
1+(4×1)=5
2+(5×2)=12
3+(6×3)=21
Using this method, the answer to 8+(8×11) is 96.
Jones' post has garnered one million comments. It has also sparked over 138,000 reactions and shared almost 45,000 times.
As to which is the correct answer, it was a general consensus by commenters that both 40 and 96 are accurate.
PHOTO: As to which is the correct answer, it was a general consensus by commenters that both 40 and 96 are accurate.
There is an incredible amount of stress involved for parents, teachers, administrators, politicians, and of course students.
Picture posted by ATR Adventures on Friday, 10 April 2015
https://3.bp.blogspot.com/-f91ovcYPxKU/VxrtnUysRrI/AAAAAAAAjqg/2u2LkRLc7BcU7inzC2Z1L-pwXRboWqvVACLcB/s1600/testing1028-1.jpg
http://2.bp.blogspot.com/-bO_liGEbKGA/VSh6j0xeueI/AAAAAAAAA2s/haTFeCCVXRI/s1600/testing1028.jpg
If questions on Cheryl's birthday (http://news.asiaone.com/news/singapore/maths-question-catches-worlds-attention) and the weight of eight $1 coins from last year's Primary School Leaving Examination didn't impress you, we hope this might challenge your minds. For the question on Cheryl's birthday, 10 dates were given and students were asked to figure out the birthday of a girl named Cheryl using limited information. It was set by the Singapore and Asian Schools Math Olympiads for a competition for 15-year-olds here. A Primary School Leaving Examination mathematics question last year (http://news.asiaone.com/news/education/psle-coin-question-tests-ability-estimate-mass) also left many parents worried about the fairness of the question to 12-year-olds. According to the Ministry of Education and Singapore Examinations and Assessment Board, the question on the weight of eight$1 Singapore coins was set to assess candidates' ability to estimate the mass of common objects.
And by the way, the question by Jones can be solved in ten minutes. Proven and tested. The explanation took way longer than solving the equations itself.
Now that you know the answer, go test your friends and feel like a genius.
By stephluo@sph.com.sg, SPH DIGITAL NEWS, Friday, 22 April 2016
## Friday, April 22, 2016
### 8 ways to stay healthy during a heatwave
Source Website: http://yourhealth.asiaone.com/content/8-ways-stay-healthy-during-heatwave/page/0/0
By Jean Ng, PurelyB on Tuesday, April 05, 2016
PHOTO: More people are falling ill and there have been reported cases of fainting on the streets in Malaysia due to the heat. Other symptoms of heat stroke include fever, cramps and seizures.
Photo by The Straits Times
https://3.bp.blogspot.com/-XLY4iPxUrso/Vxo1BTjxvvI/AAAAAAAAjpo/u3ag7KQa0QMxkcgcWEv8GhBqbAHdn4XNACLcB/s1600/dehydration_st-1.jpg
http://yourhealth.asiaone.com/sites/default/files/original_images/Apr2016/dehydration_st.jpg
http://yourhealth.asiaone.com/content/8-ways-stay-healthy-during-heatwave/page/0/0
More people are falling ill and there have been reported cases of fainting on the streets in Malaysia due to the heat. Other symptoms of heat stroke include fever, cramps and seizures.
Here are a few ways to keep yourself healthy during this heatwave.
1. Hydrate, hydrate, hydrate.
The average person consumes 2.7 - 3.7 litres of water from drinking and eating. However, when the heat is on, your body is bound to lose more water.
Furthermore, high humidity (greater than 60 per cent) makes sweat evaporation difficult. If you are working out in a hot and humid environment, you may lose up to 2 litres of water per hour.
You cannot "catch up" by drinking extra water later because only about 950 ml of water per hour can pass out of the stomach.
A general recommendation is to drink 700 ml of non-caffeinated fluid 2 hours before exercising. While exercising, consume 250 ml water every 20 minutes.
PHOTO: Hydrate, hydrate, hydrate.
The average person consumes 2.7 - 3.7 litres of water from drinking and eating. However, when the heat is on, your body is bound to lose more water.
A general recommendation is to drink 700 ml of non-caffeinated fluid 2 hours before exercising. While exercising, consume 250 ml water every 20 minutes.
Picture posted by gettyimages.com
https://3.bp.blogspot.com/-l5U6O6o23_M/Vxo1AvYOtnI/AAAAAAAAjpc/q7xDjdMHh2EtLkokU_qeRFGste7nF5jWQCLcB/s1600/74924972-1.png
http://media.gettyimages.com/photos/two-women-in-bathing-suits-drinking-bottles-of-water-front-view-side-picture-id74924972 - (74924972.jpg)
http://www.gettyimages.com.au/detail/photo/two-women-in-bathing-suits-drinking-bottles-of-royalty-free-image/74924972
2. Avoid sugary drinks.
It is definitely tempting to reach for that can of chilled soda or frappe, but sugar will decrease the ability of the body to absorb water.
Some guidelines on sugar content in those easy-to-grab drinks:
- Soda pop: about 10 per cent
- Fruit juices: 11 per cent to 18 per cent
- Commercial sports drinks: 5 per cent to 8 per cent
Instead, make a jug of fruit-infused water. Pick your favourite citrus or berries and add it to water. Keep a few servings chilled in the fridge and you have yourself an easy, healthy drink!
If you are on the go, the Fressko flask will be ideal for fruit infusions.
PHOTO: Avoid sugary drinks.
Infuse your water with fruits, vegetables, or herbs.
It is definitely tempting to reach for that can of chilled soda or frappe, but sugar will decrease the ability of the body to absorb water.
Make a jug of fruit-infused water. Pick your favourite citrus or berries and add it to water. Keep a few servings chilled in the fridge and you have yourself an easy, healthy drink!
Picture posted by To Insanity & Back
https://4.bp.blogspot.com/-9DXECKgUWws/Vxo1AyqY3gI/AAAAAAAAjpg/AEHq7m05LTwvnuhrCrs6kf140KurqJoeACLcB/s1600/4319922_orig.jpg
http://2.bp.blogspot.com/-fcirqh0lpVk/Vco1FiJ5lvI/AAAAAAAAe_Y/qaNmBd961xg/s1600/4319922_orig.jpg
http://www.coachmmorris.com/2012/05/drink-water.html
3. Reduce your intake of caffeine and alcohol.
The diuretic nature of caffeine and alcohol increases water loss through urination. Drinks that contain alcohol and caffeine block the release of the anti-diuretic hormone that is needed for water reabsorption.
As a result, the kidneys do not reabsorb the water and instead excrete it as urine.
20 per cent of our water intake comes from what we eat. Deliciously amazing and vitamin-rich sources of water include:
- Watermelon
- Cucumbers
- Strawberries
- Broccoli
- Iceberg lettuce
- Spinach
- Other recommended foods containing 70 per cent to 90 per cent water are apples, grapes, pineapples, carrots and peaches.
PHOTO: Reduce your intake of caffeine and alcohol.
The diuretic nature of caffeine and alcohol increases water loss through urination. Drinks that contain alcohol and caffeine block the release of the anti-diuretic hormone that is needed for water reabsorption.
Picture posted by healthcare-online.org
https://1.bp.blogspot.com/-MHgUPp3JQsw/Vxo1Bi1QRkI/AAAAAAAAjpw/iEebtEk11ncVfn6ERYEn_VzwwJUP5Se8ACLcB/s1600/image003.jpg
http://www.healthcare-online.org/images/10416210/image003.jpg
http://www.healthcare-online.org/How-To-Lower-Diastolic-Pressure.html
4. Cut back on high intensity sports.
Do flow yoga instead of hot yoga. Try swimming instead of running.
No workout is worth dying over (yes, heat stroke can kill!). Take it easy and if you even start to feel nauseous, dizzy or have a rapid heartbeat, then quit immediately and get indoors. This is not the time to "push through."
PHOTO: Cut back on high intensity sports.
The health benefits with each practice - you're getting a little stronger, a little leaner and a little more stress-free as you move through each flow.
Study helps lend weight to the idea that yoga can be considered an acceptable substitute for aerobic-like exercise such as walking and biking, as it seems to provide similar protective cardiovascular health benefits.
The explanation behind yoga’s superpowers probably has something to do with the stress-reduction factor, the researchers say. Unchecked stress can be a nasty beast, and managing it is a proven way to help fight back against metabolic and cardio issues.
Posted by Jenna Birch on 12 January 2015 at 11:07 AM
https://1.bp.blogspot.com/-LO-ILsWFXHE/Vxo1CUdU_PI/AAAAAAAAjp8/OqaQubwDIBoBLst_sV-ttOeWdcsS2p1RQCLcB/s1600/yoga-cardio-study.jpg
http://www.self.com/flash/fitness-blog/2015/01/yoga-cardio-heart-healthy-study/
PHOTO: Hot yoga
The hot yoga is characterized by a series of yoga poses which is done in a heated room. The room where you perform the exercise is preserved at a temperature of 95-100 degrees. This kind of exercise gives off a lot of perspiration during a session because you are stuck in a heated room.
Hot yoga is advantageous because it cleanses your body and gets rid of the toxins inside your body. Since is makes your body warm, the more flexible it gets.
When doing the hot yoga, you should have some accessories like your yoga mat and towel. Since you will be sweating severely, you should have something to wipe up your sweat every other time.
If you are going to practice hot yoga, you should be ready with your clothing. The clothes you should wear should be appropriate. You can wear shorts during the session.
Texts posted by Cynthia Florek on 27 July 2012, What Is Hot Yoga?, https://cynthiaflorek.wordpress.com/2012/07/27/whatishotyoga/
http://www.noviny.sk/g/joga/541648
Your sweat and urine contain potassium and sodium, two essential electrolytes that control the movement of water in and out of the body's cells.
While most of us already have enough sodium from our diets, you can up find potassium from bananas and nuts.
Too much sodium can draw water out of the body cells though, increasing the risk of dehydration.
A quick way to bring the body temperature down is to have a cold shower. Keep some damp face towels in the fridge for a delightful freshening up.
Another handy method is keeping a small bottle of water that you can spritz on yourself. The water droplets act as artificial sweat and cool the body through evaporation.
A quick way to bring the body temperature down is to have a cold shower. Keep some damp face towels in the fridge for a delightful freshening up.
Picture posted by fuskator.com
7. Know when to get help.
Look out for signs such as extremely hot to the touch skin, confusion and blurred vision. Heat exhaustion can evolve rapidly into heat stroke.
Do not delay the onset of cooling while waiting for an ambulance or you increase the risk of tissue damage and prolonged hospitalisation.
In severe cases, the victim may lapse into a coma in less than 1 hour. The longer a coma lasts, the lower the chance for survival, hence immediate attention must be administered.
PHOTO: Be prepared, know when to get help
Many patients admitted to hospital for heatstroke are athletes who spend (too) many hours out in the sun. Many victims of the heat are people unfamiliar with Singapore's humidity and temperatures.
Photo by The Straits Times
https://2.bp.blogspot.com/-tfwRhnAj3mQ/Vxo1CEmbgmI/AAAAAAAAjp0/wzS-wMm9HjoJU7BMC8WzWnVfS5wu50oKwCLcB/s1600/st_femalejogger.jpg
http://yourhealth.asiaone.com/sites/default/files/original_images/Mar2014/st_femalejogger.jpg
http://yourhealth.asiaone.com/content/8-ways-stay-healthy-during-heatwave/page/0/1
8. Ditch the sheets.
Throw off the sheets and sleep without them. If you aren't used to sleeping uncovered, opt for thin sheets rather than quilts or or other duvets of thick material.
Listen to your body. Stay indoors when you can, and wear light, breathable clothes and a hat.
Remember that by the time you feel thirsty, you might already be dehydrated! Stay well hydrated and live healthy. | 2018-02-20 21:34:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18201592564582825, "perplexity": 5374.659578541522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.36/warc/CC-MAIN-20180220204917-20180220224917-00760.warc.gz"} |
http://www.chegg.com/homework-help/questions-and-answers/object-5-uc-charge-accelerating-7-10-3-m-s-2-object-mass-4-mg-magnitude-electric-field-56--q2640123 | An object with a 5 uC charge is accelerating at 7�10?3 m/s^2. If the object has a mass of 4 mg what is the magnitude of the electric field?
-5.6�10?3 {\rm N/C}
-1.1�10?2 {\rm N/C}
-1.4�10?13 {\rm N/C}
-180 {\rm N/C} | 2016-05-31 23:02:11 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9817663431167603, "perplexity": 2354.802174506879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053209501.44/warc/CC-MAIN-20160524012649-00212-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/467867/optimization-question-of-sorts | # Optimization question--of sorts
I hope this isn't a silly question. I'm learning single variable calc, and having lots of fun with optimization problems. This isn't exactly an optimization problem, but something that came up while working on one.
Let's say I have a small circular garden with a short brick border. This border is perhaps 1 foot tall, so that any sun or rain that reaches the flowers has to come from directly overhead. Suppose that the radius from the center of the circular flowerbed to the outermost edge of the circular brick border is $r.$ I plant a metal rod at the circle's center. At the top of the rod is a fan blade of sorts: it's flat, thin, parallel to the ground, and has the shape of a circular sector with radius $r.$ This blade is opaque, so it provides some shade for that part of the flowerbed beneath it.
I give the blade a spin: as it's spinning, all of the flowerbed receives some shade. Then I get an idea: I automate the spinning of the blade. I can control the angular velocity, $w,$ of the blade with a remote control. Let $l$ be the amount of light (or, if you want, rain) admitted to the flowerbed. My question is this: Is it the case that $$\lim_{w\to \infty}l=0?$$
I have reasons for thinking this is the case, and other reasons for thinking it's nonsense. And if it is the case, then it's true regardless of the value $\theta$ of the central angle of the circular sector, right?
• Assuming I understand the question correctly, the result depends only on $\theta$, and the amount of light received by the flowerbed is completely independent of $w$. – Alex Becker Aug 15 '13 at 1:05
• Physically, what you're going for is probably nonsense when it comes to light or even rain. Here's a formulation that might be more along the intuitive lines you're going for: let's say that instead of blocking sun, you're blocking an assault of arrows (as in the weapon bow and arrow) of length $L$, which pass the horizontal-spinning blade vertically at speed $v$. How fast would the blade need to spin to block all arrows from hitting the grass below (and yes, such a speed exists)? Answering this might give you a better intuition about the problem with light and rain. – Omnomnomnom Aug 15 '13 at 1:28
• Ok. So say that $v$ is measured in feet per second, and $L$ is likewise measured in feet. And to keep things clean we'll let $L$ be less than the height of the blade. Then the rotating blade has $\frac {L}{v}$ seconds to hit an incoming arrow--and that's just to keep the blade from missing the arrow altogether. I guess I see what you mean. With rain, we're shrinking $L$ (though maybe we're slowing down $v$). With light, we're making $L$ extremely small and increasing $v$ to the greatest speed possible. – Ryan Aug 15 '13 at 1:43
• @Ryan yep, you've pretty much got it. It might be physically possible for rain, in a sense. Light seems out of the question, though. Then again, in a physical setting, you might be able to fan the rain away. – Omnomnomnom Aug 15 '13 at 1:53
• @Omnomnomnom Great answer, BTW! The arrows problem really helped me gain some intuition about this. – Ryan Aug 15 '13 at 2:11
If the altitude of the fan blade is essentially the same as that of the wall surrounding the garden, then no "side light" needs to be factored in to the final answer. If the fan speed is controlled at a speed fast enough to eliminate any possibility that the blade might hover longer over a particular area during greatest sunlight intensity, then we don't need to take that into account. Given these things, and given $\theta$ as the single parameter identifying the area of the fan blade by circular section, we can then say:
$$l = 1 - {\theta \over 2\pi}$$
where $l$ is the amount of light reaching the garden. This is achieved by noting that $\theta {r^2 \over 2}$ is the area of a circular section, and $\pi r^2$ is the area of the entire circle. Then the difference of these areas is the section allowing light though the fan. Then the final formula starts as:
$$l = {\pi r^2 - \theta {r^2 \over 2} \over \pi r^2}$$ | 2019-10-14 02:56:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7518054842948914, "perplexity": 310.44311685955745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649035.4/warc/CC-MAIN-20191014025508-20191014052508-00200.warc.gz"} |
https://www.phidgets.com/docs/index.php?title=1127_User_Guide&action=info | # 1127 User Guide
## Getting Started
Welcome to the 1127 user guide! In order to get started, make sure you have the following hardware on hand:
Next, you will need to connect the pieces:
1. Connect the 1127 to the HUB0000 with the Phidget cable.
2. Connect the HUB0000 to your computer with the USB cable.
Now that you have everything together, let's start using the 1127!
## Using the 1127
### Phidget Control Panel
In order to demonstrate the functionality of the 1127, we will connect it to the HUB0000, and then run an example using the Phidget Control Panel on a Windows machine.
The Phidget Control Panel is available for use on both macOS and Windows machines. If you would like to follow along, first take a look at the getting started guide for your operating system:
### First Look
After plugging in the 1127 into the HUB0000, and the HUB0000 into your computer, open the Phidget Control Panel. You will see something like this:
The Phidget Control Panel will list all connected Phidgets and associated objects, as well as the following information:
• Serial number: allows you to differentiate between similar Phidgets.
• Channel: allows you to differentiate between similar objects on a Phidget.
• Version number: corresponds to the firmware version your Phidget is running. If your Phidget is listed in red, your firmware is out of date. Update the firmware by double-clicking the entry.
The Phidget Control Panel can also be used to test your device. Double-clicking on an object will open an example.
### Voltage Input
Double-click on a Voltage Input object in order to run the example:
General information about the selected object will be displayed at the top of the window. You can also experiment with the following functionality:
• Modify the change trigger and/or data interval value by dragging the sliders. For more information on these settings, see the data interval/change trigger page.
• Select the 1127 from the Sensor Type drop-down menu. The example will now convert the voltage into illuminance (lux) automatically. Converting the voltage to illuminance (lux) is not specific to this example, it is handled by the Phidget libraries, with functions you have access to when you begin developing!
## Technical Details
### General
The human eye is less sensitive to changes in light intensity than the 1127, but is able to see a wider range. The Human eye range is from 50µlx (starlight) to 100klx (extremely bright sunny day). The 1127, on the other hand, is able to measure from 1lx (Moonlight) to 1000lx (TV studio lighting). The 1127 is able to detect higher frequency fluctuations in light levels than the human eye. If you notice noise on the signal that you cannot perceive yourself, it is probably due to incandescent light flicker, or other varying light sources. This sensor is designed to respond to visible light, and it can sense light from concentrated sources like laser pointers (although be careful with high-power lasers, as they could damage the sensor). It will also have a very muted response to IR light that is close to the visible spectrum (700-800nm). The 1127 is non-ratiometric which means that you cannot rely on the sensor saturating at 5 volts. To be conservative, interpret a sensor voltage of over 4.75V as saturated, with the true light level being unknown.
### Sensitivity Response
The 1127 uses the AMS104 light sensor package. The following graph illustrates the sensor's sensitivity to specific wavelengths of light.
### Formulas
The Phidget libraries can automatically convert sensor voltage into illuminance (lux) by selecting the appropriate SensorType. See the Phidget22 API for more details. The formula to translate voltage from the sensor into illuminance is:
${\displaystyle {\text{Illuminance (lux)}}={\text{Voltage}}\times 200}$
### Phidget Cable
The Phidget Cable is a 3-pin, 0.100 inch pitch locking connector. Pictured here is a plug with the connections labelled. The connectors are commonly available - refer to the Analog Input Primer for manufacturer part numbers.
## What to do Next
• Programming Languages - Find your preferred programming language here and learn how to write your own code with Phidgets!
• Phidget Programming Basics - Once you have set up Phidgets to work with your programming environment, we recommend you read our page on to learn the fundamentals of programming with Phidgets. | 2019-05-27 11:11:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3136623203754425, "perplexity": 2498.4087473503996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232262369.94/warc/CC-MAIN-20190527105804-20190527131804-00227.warc.gz"} |
https://math.stackexchange.com/questions/3716457/radius-of-circle-that-touches-3-circles-which-in-turn-touch-each-other | # Radius of circle that touches 3 circles, which in turn touch each other
I had $$3$$ circles of radii $$1$$, $$2$$, $$3$$, all touching each other. A smaller circle was constructed such that it touched all the $$3$$ circles.
What is the radius of the smaller circle?
This is what I did:
I conveniently positioned the $$3$$ circles on the coordinate axes and found the coordinates of the centers of each circle.
Then I wrote a general equation of a circle( for the smaller one) , and using the fact that distance between the centers is equal of the sum of radii ( for circles touching each other) I found $$3$$ equations, which I could use to solve for the variables in the general equation. Hence I found the equation of the smaller circle, and thus its radius.
However, I believe that this method is very inefficient as I ended up with so many steps and sub steps to solve the equations.
Is there a better way to approach this?
I would prefer a geometrical solution instead of a coordinate solution.
Thanks for the help!!
Note :
However is there a more efficient solution for this than what was mentioned in those answers?
• I would advise you to have a look at the complex version of Descarte"s theorem here – Jean Marie Jun 12 at 8:42
• But if you want coordinate free proofs,, look at the more general "Apollonius problem" – Jean Marie Jun 12 at 8:47
Let $$A$$, $$B$$ and $$C$$ be centers of circles with radius $$3$$, $$2$$ and $$1$$ respectively and let $$x$$ be a radius of the needed circle with a center $$D$$.
Thus, $$\measuredangle ACB=90^{\circ},$$ $$\cos\measuredangle ACD=\frac{4^2+(1+x)^2-(3+x)^2}{2\cdot4(1+x)}=\frac{2-x}{2(1+x)},$$
$$\cos\measuredangle BCD=\frac{3^2+(1+x)^2-(2+x)^2}{2\cdot3(1+x)}=\frac{3-x}{3(1+x)},$$ which gives $$\left(\frac{2-x}{2(1+x)}\right)^2+\left(\frac{3-x}{3(1+x)}\right)^2=1$$ or $$23x^2+132x-36=0,$$ which gives $$x=\frac{6}{23}.$$
• Thank you so much! – Vamsi Krishna Jun 12 at 8:24
Notice, the radius $$r$$ of the small (inscribed) circle externally touching any three externally kissing (touching) circles of radii $$a, b$$ & $$c$$ is given by the generalized formula as follows
$$\boxed{\color{blue}{r=\frac{abc}{2\sqrt{abc(a+b+c)}+ab+bc+ca}}}$$ Now, substituting the values of radii of three externally touching circle i.e. $$a=1, b=2$$ & $$c=3$$ in above generalized formula, we get radius of small circle
$$r=\frac{1\cdot 2\cdot 3}{2\sqrt{1\cdot2\cdot 3(1+2+3)}+1\cdot 2+2\cdot 3+3\cdot 1}$$ $$r=\color{blue}{\frac{6}{23}}$$
• Thanks a lot!!! – Vamsi Krishna Jun 12 at 8:24 | 2020-09-19 18:30:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9135952591896057, "perplexity": 317.8198120314072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00637.warc.gz"} |
https://trialogueblog.wordpress.com/2019/02/19/exam-time/ | # Exam time
It is the time of exam again and there is the usual stream of students asking for
clarification and solution of different concepts. So here I discuss some common doubts.
Chapter 2: problem 2: Definition of a line passing through the origin in various ways.
(1) $y = m x + c$ is useful only in two dimensions.
(2) Defining a line as collection of position vectors given by $c \vec{v}$, where $\vec{v}$ can be a vector in any dimensions and $c \in R$. For example $c(1,0,0)$ with $c \in R$ would define the x-axis in three dimensions.
(3) Defining a line as intersection of two planes in 3 dimensions, for example the
intersection of the xy-plane and the xz-plane defines the x-axis.
Given any two equations of planes passing through zero, one can find the intersection and express it in terms of scaling of a vector. As an example, the intersection of $2 x + y - z = 0$ and $x + y + 2 z= 0$ requires that $z = x+y$, and $x = -y$ substituting these conditions for the coordinates of a general point $(x,y,z)$ gives $(x,y,z) = (x, -x, 0) = x (1,-1,0)$ The equation of a general line (not necessarily passing through the origin) can be written as $\vec{a} + c \vec{v}$ where $\vec{a}$ is a constant vector.
Chapter 2: problem 4, how to find a plane that passes through the given three points. Assume a general equation of a plane $ax + b y + c z = 0$, insist that the given points satisfy the equation and solve for $a,b,c$.
Chapter 3: problem 5, notice that the B matrix has only two rows that are linearly independent. There are 3 variables, hence one variable is free. Thus either there are infinite solutions or there is no solution. When the $\vec{b}$ is chosen such that equations 1 and 3 and equations 2 and 4 are the same, the equation is solvable with infinite solutions, whereas if $\vec{b}$ is chosen so that two of there equations are different then the matrix equation has no solution.
## Author: strangeset
A nomad at heart, I enjoy observing, analysing, connecting, understanding and dreaming. I am a big fan of science and tech. Forever learning and experimenting. | 2021-01-28 07:35:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.87164705991745, "perplexity": 222.27383048113873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704839214.97/warc/CC-MAIN-20210128071759-20210128101759-00662.warc.gz"} |
https://proofwiki.org/wiki/Equivalence_of_Definitions_of_Equivalent_Division_Ring_Norms/Norm_is_Power_of_Other_Norm_implies_Cauchy_Sequence_Equivalent | # Equivalence of Definitions of Equivalent Division Ring Norms/Norm is Power of Other Norm implies Cauchy Sequence Equivalent
## Theorem
Let $R$ be a division ring.
Let $\norm{\,\cdot\,}_1: R \to \R_{\ge 0}$ and $\norm{\,\cdot\,}_2: R \to \R_{\ge 0}$ be norms on $R$.
Let $\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ satisfy:
$\exists \alpha \in \R_{\gt 0}: \forall x \in R: \norm{x}_1 = \norm{x}_2^\alpha$
Then for all sequences $\sequence {x_n}$ in $R$:
$\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1 \iff \sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_2$
## Proof
Let $\sequence {x_n}$ be a Cauchy sequence in $\norm{\,\cdot\,}_1$.
Let $\epsilon \gt 0$ be given.
Since $\sequence {x_n}$ is a Cauchy sequence then:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_1 \lt \epsilon^\alpha$
Then:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_2^\alpha \lt \epsilon^\alpha$
Hence:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_2 \lt \epsilon$
So $\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_2$
It follows that for all sequences $\sequence {x_n}$ in $R$:
$\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1 \implies \sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_2$
$\Box$
Let $\sequence {x_n}$ be a Cauchy sequence in $\norm{\,\cdot\,}_2$.
Let $\epsilon \gt 0$ be given.
Since $\sequence {x_n}$ is a Cauchy sequence then:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_2 \lt \epsilon^{1/\alpha}$
Then:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_2^\alpha \lt \epsilon$
Hence:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_1 \lt \epsilon$
So $\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1$
It follows that for all sequences $\sequence {x_n}$ in $R$:
$\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_2 \implies \sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1$
The result follows
$\blacksquare$ | 2019-05-22 17:19:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882718920707703, "perplexity": 47.29379344015379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256887.36/warc/CC-MAIN-20190522163302-20190522185302-00178.warc.gz"} |
https://ncatlab.org/nlab/show/category+with+weak+equivalences | # nLab category with weak equivalences
Contents
### Context
#### Model category theory
Definitions
Morphisms
Universal constructions
Refinements
Producing new model structures
Presentation of $(\infty,1)$-categories
Model structures
for $\infty$-groupoids
for ∞-groupoids
for equivariant $\infty$-groupoids
for rational $\infty$-groupoids
for rational equivariant $\infty$-groupoids
for $n$-groupoids
for $\infty$-groups
for $\infty$-algebras
general $\infty$-algebras
specific $\infty$-algebras
for stable/spectrum objects
for $(\infty,1)$-categories
for stable $(\infty,1)$-categories
for $(\infty,1)$-operads
for $(n,r)$-categories
for $(\infty,1)$-sheaves / $\infty$-stacks
# Contents
## Idea
A category with weak equivalences is an ordinary category with a class of morphisms singled out – called ‘weak equivalences’ – that include the isomorphisms, but also typically other morphisms. Such a category can be thought of as a presentation of an (∞,1)-category that defines explicitly only the 1-morphisms (as opposed to n-morphisms for all $n$) and the information about which of these morphisms should become equivalences in the full (∞,1)-category.
The desired $(\infty,1)$-category in question can be constructed from such a “presentation” by “freely adjoining inverse equivalences” to the weak equivalences, in a suitable $(\infty,1)$-categorical sense. One way to make this precise is by the process of simplicial localization . A single $(\infty,1)$-category can admit many different such presentations. See the section Presentations of (∞,1)-categories below for more details.
## Definition
A category with weak equivalences is a category $C$ equipped with a subcategory (in the naïve sense) $W \subset C$
• which contains all isomorphisms of $C$;
• which satisfies two-out-of-three: for $f, g$ any two composable morphisms of $C$, if two of $\{f, g, g \circ f\}$ are in $W$, then so is the third.
## Examples and refinements
Often categories with weak equivalences are equipped with further extra structure that helps with computing the simplicial localization, the homotopy category and derived functors.
Other variants include
Three additional conditions which categories with weak equivalences often satisfy are:
In fact, these three conditions are closely related.
• Obviously, saturation implies closure under retracts and two-out-of-six, since the isomorphisms in any category satisfy both.
• In any model category, all three conditions hold automatically.
• If the weak equivalences admit a calculus of fractions, or a well-behaved class of cofibrations or fibrations, then the three conditions are equivalent. See two-out-of-six property for the proofs, which are from Categories and Sheaves (for the calculus of fractions) and Blumberg-Mandell (for the case of cofibrations, in the context of a Waldhausen category).
## Remarks
• If we denote by $Core(C)$ the core of $C$ – the maximal subgroupoid of $C$ – then we have a chain of inclusions $Core(C) \hookrightarrow W \hookrightarrow C$.
• Many categories with weak equivalences can be equipped with the further structure of a model category. On the other hand, some categories with weak equivalences can not be equipped with a useful structure of a model category. In particular, categories of diagrams in a model category do not always inherit a useful model structure (on the other hand often they do, see model structure on functors). Several concepts exist that weaken the axioms of a model category in order to still obtain useful results in such a case – for instance a category of fibrant objects.
• Although categories of weak equivalences do not usually have limits and colimits, they are often accessible, and can be presented as an injectivity class? or a cone-injectivity class?. This is used in Smith’s recognition theorem for combinatorial model categories and can be “algebraicized” as in Bourke17.
## Presentation of $(\infty,1)$-categories
A category $C$ with weak equivalences serves as a presentation of an (∞,1)-category $\mathbf{C}$ with the same objects and at least the 1-morphisms of $C$, and such that every weak equivalence in $C$ becomes a true equivalence (a homotopy equivalence) in $\mathbf{C}$.
The procedure (or one of its equivalent variants) that constructs the (∞,1)-category $\mathbf{C}$ from the category with weak equivalences $C$ is called Dwyer-Kan simplicial localization.
In fact, every (∞,1)-category may be presented this way (and indeed posets equipped with wide subcategories of morphisms called weak equivalences are sufficient). This is discussed at
Alternatively, we may further project to the 1-category in which all weak equivalences become true isomorphisms: this is the homotopy category of $C$ with respect to $W$. Equivalently this is the homotopy category of an (∞,1)-category of $\mathbf{C}$.
Note that the category with weak equivalences which presents a given $(\infty,1)$-category can not, in general, be taken to be the homotopy category of that $(\infty,1)$-category; more “flab” must be built into it.
It also cannot, in general, be the underlying 1-category of a simplicially enriched presentation of that $(\infty,1)$-category. For instance, every $\infty$-groupoid can be realized as a simplicially enriched groupoid, but the underlying 1-category of a simplicially enriched groupoid is a 1-groupoid, which cannot be localized any further to produce a non-1-truncated $\infty$-groupoid.
Algebraic model structures: Quillen model structures, mainly on locally presentable categories, and their constituent categories with weak equivalences and weak factorization systems, that can be equipped with further algebraic structure and “freely generated” by small data.
structuresmall-set-generatedsmall-category-generatedalgebraicized
weak factorization systemcombinatorial wfsaccessible wfsalgebraic wfs
model categorycombinatorial model categoryaccessible model categoryalgebraic model category
construction methodsmall object argumentsame as $\to$algebraic small object argument
## References
Last revised on June 11, 2022 at 16:43:55. See the history of this page for a list of all contributions to it. | 2022-11-28 04:33:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 53, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067673683166504, "perplexity": 821.4162561427223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00460.warc.gz"} |
https://spinnaker8manchester.readthedocs.io/en/latest/_modules/spinnman/connections/abstract_classes/scp_receiver/ | # Copyright (c) 2017-2019 The University of Manchester
#
# This program is free software: you can redistribute it and/or modify
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from spinn_utilities.abstract_base import AbstractBase, abstractmethod
from .connection import Connection
""" A receiver of SCP messages
"""
__slots__ = ()
[docs] @abstractmethod
""" Determines if there is an SCP packet to be read without blocking
:param int timeout:
The time to wait before returning if the connection is not ready
:return: True if there is an SCP packet to be read
:rtype: bool
"""
[docs] @abstractmethod
""" Receives an SCP response from this connection. Blocks\
until a message has been received, or a timeout occurs.
:param int timeout:
The time in seconds to wait for the message to arrive; if not
specified, will wait forever, or until the connection is closed
:return: The SCP result, the sequence number, the data of the response
and the offset at which the data starts (i.e., where the SDP | 2022-01-23 06:38:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23493236303329468, "perplexity": 3149.4496918474824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304134.13/warc/CC-MAIN-20220123045449-20220123075449-00445.warc.gz"} |
https://physics.stackexchange.com/questions/408671/will-a-closed-universe-with-dark-energy-still-collapse-into-a-big-crunch-or-will | Will a closed universe with dark energy still collapse into a big crunch or will it expand forever?
In a closed universe without dark energy, it departs rapidly from flatness and become more curved over time. The expansion of the universe eventually stops and starts to collapses into a big crunch.
Will a closed universe with dark energy still collapse into a big crunch or will it expand forever?
The question whether or not a closed universe will collapse depends on the roots of the Friedmann equations. For $\Lambda$CDM models, these are \begin{align} \dot{a}^2 &= H_0^2\left(\Omega_{M,0}\,a^{-1} + \Omega_{K,0} + \Omega_{\Lambda,0}\, a^2\right),\tag{1}\\ \ddot{a} &= H_0^2\left(-\frac{1}{2}\Omega_{M,0}\,a^{-2} + \Omega_{\Lambda,0}\, a\right),\tag{2} \end{align} where $\Omega_{M,0}$ and $\Omega_{\Lambda,0}$ are the present-day matter and dark energy parameters, we ignore the (small) contribution of radiation, and $\Omega_{K,0} = 1 - \Omega_{M,0} - \Omega_{\Lambda,0}$. We can rewrite $(1)$ as $$f(a) = \frac{a\dot{a}^2}{H_0^2} = \Omega_{M,0} + \Omega_{K,0}\, a + \Omega_{\Lambda,0}\, a^3,\tag{3}$$ along with its derivative in $a$ $$f'(a) = \Omega_{K,0} + 3\,\Omega_{\Lambda,0}\, a^2.\tag{4}$$ Consider the following example:
This plot shows $f(a)$ for three models with $\Omega_{M,0}=2.5$. The green model, with $\Omega_{\Lambda,0} = 0.15$, expands forever. The blue model, with $\Omega_{\Lambda,0} = 0.05$, has a root at $a_0 = 1.8015$. Since $\ddot{a}<0$ at this root, $\dot{a}$ changes from positive to negative, so this model will collapse. The red model is a boundary case: here, both $\dot{a}$ and $\ddot{a}$ are zero at the same point, $a_0 = 2.3490$, so the expansion comes to a temporary halt, but then continues. To find these boundary models, we need to obtain an expression for $\Omega_{\Lambda,0}$ for a given value $\Omega_{M,0}$, such that $$f(a_0) = f'(a_0) = 0,$$ where $a_0 > 1$. Instead of solving for $\Omega_{\Lambda,0}$ directly, we will solve for $\Omega_{K,0}$ first. By plugging $$f'(a_0) = \Omega_{K,0} + 3\,\Omega_{\Lambda,0}\, a_0^2 = 0$$ into $f(a_0) = 0$, we can eliminate $\Omega_{\Lambda,0}$ and obtain $$3\,\Omega_{M,0} + 2\,\Omega_{K,0}\,a_0 = 0.\tag{5}$$ We plug this back into $f'(a_0) = 0$ to eliminate $a_0$:
$$4\,\Omega_{K,0}^3 + 12\,\Omega_{\Lambda,0}\,\Omega_{K,0}^2\, a_0^2 = 4\,\Omega_{K,0}^3 + 27(1 - \Omega_{K,0} - \Omega_{M,0})\,\Omega_{M,0}^2 = 0,$$ or $$\Omega_{K,0}^3 - \frac{27}{4}\,\Omega_{M,0}^2\,\Omega_{K,0} + \frac{27}{4}(1 - \Omega_{M,0})\,\Omega_{M,0}^2 = 0.$$ This is a cubic equation in $\Omega_{K,0}$ of Cardano form $t^3 + pt + q = 0$. Its three roots are
$$\Omega_{K,0}^{(k)} = -\frac{3}{2}\Omega_{M,0}^{2/3}\left[e^{4\pi ik/3} \left((1 - \Omega_{M,0}) + \sqrt{1 - 2\,\Omega_{M,0}}\right)^{1/3} +\right. \\ \left. e^{-4\pi ik/3} \left((1 - \Omega_{M,0}) - \sqrt{1 - 2\,\Omega_{M,0}}\right)^{1/3}\right],$$ with $k=0,1,2$. If $\Omega_{M,0}\geqslant 1/2$, these three roots are real, and we can write
$$(1 - \Omega_{M,0}) + \sqrt{1 - 2\,\Omega_{M,0}} = (1 - \Omega_{M,0}) + i\sqrt{2\,\Omega_{M,0}-1} = re^{i\theta},$$ with
\begin{align} r &= \sqrt{(1 - \Omega_{M,0})^2 + 2\,\Omega_{M,0}-1} = \Omega_{M,0},\\ \theta &= \arccos\left(\frac{1 - \Omega_{M,0}}{\Omega_{M,0}}\right), \end{align} so that $$\Omega_{K,0}^{(k)} = -3\,\Omega_{M,0}\cos\left(\frac{\theta + 4\pi k}{3}\right).$$ If $\Omega_{M,0}\geqslant 1$, the $k=1$ root defines the collapse boundary. Indeed, $\pi/2\leqslant\theta < \pi$, so that $-3/2\,\Omega_{M,0} < \Omega_{K,0}^{(1)} \leqslant 0,$ and from $(5)$ we get $a_0 > 1$. One can further verify that the $k=2$ root is unphysical ($a_0 < 0$), while the $k=0$ root defines the boundary of models with no Big Bang ($a_0 < 1$).
Therefore, \begin{align} \Omega_{\Lambda,0}^{(\text{collapse})} &= 1 + \Omega_{M,0}\left[ 3\cos\left(\frac{\theta + 4\pi }{3}\right) - 1\right] = 4\,\Omega_{M,0}\cos^3\left(\frac{\theta + 4\pi}{3}\right), \end{align} where we used the identity $3\cos x = 4\cos^3 x - \cos 3x$. The plot below shows this boundary, between the red and the yellow area. The red dot corresponds with the red model in the first plot. Note that the $\Lambda$CDM model corresponding with our universe (black dot) will not collapse.
• Seeing such complex notation correspond to my own conclusion (basically a guess) is kind of reassuring: I can't see why this answer hasn't been either accepted by, or commented on by, the OP. Mar 15, 2021 at 4:42
A spatially closed universe can expand forever if the vacuum energy density is not zero.
Yes, a universe without dark energy will expand decelerated and collapse into a big crunch. This is still true if small amounts of vacuum energy, respectively $\Omega_\Lambda$ is added. The big crunch is avoided if the density parameter $\Omega_\Lambda$ exceeds a critical value. This value corresponds to a closed universe which expands forever. The formula hereto is given in Peacock's "Cosmological Physics" page 82. To answer your question with respect to dark energy is not as strict because its nature is unknown. Up to now the data are consistent with the assumption that the observed accelerated expansion of the universe is due to the cosmological constant $\Lambda$.
I believe you might be confusing the curvature of the space-time manifold with the spatial curvature, once you differentiate the two, one would also need to supply some reasonable initial conditions to make your question a bit more precise. In any case I will try answering your question as best as possible.
To be on the same page let us assume the $\Lambda$CDM-model of cosmology. You will see in the article that the basis for it is the FLRW-metric which contains a variable $k$ which can only take three values a priori, in your case for a closed universe the value of $k$ corresponds to $+1$. Now consider the Friedmann equation which comes out of Einstein's field equations and the FLRW metric: $$H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}{\rho} - \frac{kc^2}{a^2}+\frac{\Lambda c^2}{3}$$ So to answer your question exactly one would need to specify the matter content, that is specify $\rho$ or at least its scaling with $a$ (the scale factor). If it was the case, as it is now, that matter density scales as $a^{-3}$, you can say that eventually the Dark energy term $\Lambda$ will dominate the expansion. However you can ask whether we could reach the current state of the universe within a closed-universe scenario, but for that you will have to specify the content for different epochs. The only way in which you can contract as you can see from the equation is that the middle term of the right hand side dominates, and that would only happen for very specific stages (small $a$ but not small enough that so that the $\rho$ term dominates). | 2022-05-27 13:45:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9388986825942993, "perplexity": 260.52266395299347}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662647086.91/warc/CC-MAIN-20220527112418-20220527142418-00543.warc.gz"} |
http://arxitics.com/articles/1012.4491 | arXiv Analytics
arXiv:1012.4491 [cond-mat.str-el]AbstractReferencesReviewsResources
Modified spin-wave theory with ordering vector optimization II: Spatially anisotropic triangular lattice and $J_1J_2J_3$ model with Heisenberg interactions
Published 2010-12-20Version 1
We study the ground state phases of the $S=1/2$ Heisenberg quantum antiferromagnet on the spatially anisotropic triangular lattice and on the square lattice with up to next-next-nearest neighbor coupling (the $J_1J_2J_3$ model), making use of Takahashi's modified spin-wave (MSW) theory supplemented by ordering vector optimization. We compare the MSW results with exact diagonalization and projected-entangled-pair-states calculations, demonstrating their qualitative and quantitative reliability. We find that MSW theory correctly accounts for strong quantum effects on the ordering vector of the magnetic phases of the models under investigation: in particular collinear magnetic order is promoted at the expenses of non-collinear (spiral) order, and several spiral states which are stable at the classical level, disappear from the quantum phase diagram. Moreover, collinear states and non-collinear ones are never connected continuously, but they are separated by parameter regions in which MSW breaks down, signaling the possible appearance of a non-magnetic ground state. In the case of the spatially anisotropic triangular lattice, a large breakdown region appears also for weak couplings between the chains composing the lattice, suggesting the possible occurrence of a large non-magnetic region continuously connected with the spin-liquid state of the uncoupled chains.
Journal: New J. Phys. 13 (2011) 075017
Related articles: Most relevant | Search more
arXiv:cond-mat/0512629 (Published 2005-12-24)
Modified spin-wave theory of nuclear magnetic relaxation in one-dimensional quantum ferrimagnets: Three-magnon versus Raman processes
arXiv:1302.6663 [cond-mat.str-el] (Published 2013-02-27)
Long-Range Order of the Three-Sublattice Structure in the S = 1 Heisenberg Antiferromagnet on a Spatially Anisotropic Triangular Lattice
arXiv:1104.4707 [cond-mat.str-el] (Published 2011-04-25, updated 2011-09-22)
Effects of spin vacancies on magnetic properties of the Kitaev-Heisenberg model | 2020-07-14 16:31:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4863605499267578, "perplexity": 3634.056150630251}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897168.4/warc/CC-MAIN-20200714145953-20200714175953-00584.warc.gz"} |
http://www.usaco.org/index.php?page=viewproblem2&cpid=815 | ## Problem 3. Taming the Herd
Contest has ended.
Early in the morning, Farmer John woke up to the sound of splintering wood. It was the cows, and they were breaking out of the barn again!
Farmer John was sick and tired of the cows' morning breakouts, and he decided enough was enough: it was time to get tough. He nailed to the barn wall a counter tracking the number of days since the last breakout. So if a breakout occurred in the morning, the counter would be $0$ that day; if the most recent breakout was $3$ days ago, the counter would read $3$. Farmer John meticulously logged the counter every day.
The end of the year has come, and Farmer John is ready to do some accounting. The cows will pay, he says! But something about his log doesn't look quite right...
Farmer John wants to find out how many breakouts have occurred since he started his log. However, he suspects that the cows have tampered with his log, and all he knows for sure is that he started his log on the day of a breakout. Please help him determine, for each number of breakouts that might have occurred since he started the log, the minimum number of log entries that must have been tampered with.
#### INPUT FORMAT (file taming.in):
The first line contains a single integer $N$ ($1 \leq N \leq 100$), denoting the number of days since Farmer John started logging the cow breakout counter.
The second line contains $N$ space-separated integers. The $i$th integer is a non-negative integer $a_i$ (at most $100$), indicating that on day $i$ the counter was at $a_i$, unless the cows tampered with that day's log entry.
#### OUTPUT FORMAT (file taming.out):
The output should consist of $N$ integers, one per line. The $i$th integer should be the minimum over all possible breakout sequences with $i$ breakouts, of the number of log entries that are inconsistent with that sequence.
6
1 1 2 0 0 1
#### SAMPLE OUTPUT:
4
2
1
2
3
4
If there was only 1 breakout, then the correct log would look like 0 1 2 3 4 5, which is 4 entries different from the given log.
If there were 2 breakouts, then the correct log might look like 0 1 2 3 0 1, which is 2 entries different from the given log. In this case, the breakouts occurred on the first and fifth days.
If there were 3 breakouts, then the correct log might look like 0 1 2 0 0 1, which is just 1 entry different from the given log. In this case, the breakouts occurred on the first, fourth, and fifth days.
And so on.
Problem credits: Brian Dean and Dhruv Rohatgi
Contest has ended. No further submissions allowed. | 2022-01-16 21:34:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6668844819068909, "perplexity": 1529.4089895618015}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00348.warc.gz"} |
https://mathematica.stackexchange.com/questions/136247/components-rules-from-arraycomponents | Components rules from ArrayComponents
What is the most efficient way to get component rules from ArrayComponents, so additionaly to
data = {"a", "b", "a"};
cmp = ArrayComponents[data]
{1,2,1}
I would like to get:
Thread[data -> cmp] // DeleteDuplicates
{"a"->1, "b"->2}
(or reversed), and the point is that the data is big and I don't want to compare it again to get those relations.
Failed to find the solution in documentation or here.
• But you can run it once to generate the ArrayComponents?, so use temp = Flatten@ Trace[ArrayComponents[{"a", "b", "d", "c", "a"}], Replace]; and then Extract[]? – Feyre Jan 26 '17 at 11:05
• Slower than goldberg's answer, already ran comparison. – Feyre Jan 26 '17 at 11:38
• Is the reason that "a" gets 1 and "b" gets 2 that "a" occurs first in your list, or that "a" occurs before "b" in the alphabetical order? – Jacob Akkerboom Jan 26 '17 at 11:44
• @JacobAkkerboom occurence matters I suppose. – Kuba Jan 26 '17 at 11:48
data = {"a", "b", "a"};
{"a" -> 1, "b" -> 2}
I suggest this because building an association automatically removes duplicates. It should be fairly fast because it is hashing.
Here are three functions that do the job relatively fast.
<< Developer
ranFirstPos =
Compile[
{{ints, _Integer, 1}, {max, _Integer}},
Block[
{res, ii}
,
ii = 1;
Table[
While[
ints[[ii]] != jj
,
ii++
];
ii
,
{jj, 1, max}
]
]
]
jacobFu[data_] :=
Module[
cmp = ToPackedArray@ArrayComponents@data;
max = Max@cmp;
]
Or
kubaImprovedFu[data_] :=
Module[
{cmp}
,
cmp = ArrayComponents[data];
]
The following alternative does not really answer the question of "how to deal with the result of ArrayComponents", as it doesn't use ArrayComponents. This one can be made slightly faster by using SystemUtilitiesHashTable rather than an Association.
jacobFuAssocHash[data_] :=
Module[{jj, assoc},
assoc = Association[];
jj = 1;
Reap[
Do[
If[
! KeyExistsQ[assoc, elem],
assoc[elem] = True;
Sow[elem -> jj];
jj++
]
,
{elem, data}
]
]
][[2, 1]]
Timing comparison
From other posts, we define
kubaFu[data_] :=
Module[
{cmp}
,
cmp = ArrayComponents[data];
]
goldbergFu[data_] :=
This gives us
nn = 10^7;
data = FromCharacterCode /@ RandomInteger[{0, 65536 - 1}, nn];
jacobRes = jacobFu[data];//Timing//First
jacobAsHaRes = jacobFuAssocHash[data]; // Timing // First
kubaImRes = kubaImprovedFu[data];//Timing//First
goldbergRes = goldbergFu[data]; // Timing // First
kubaRes = kubaFu[data]; // Timing // First
jacobRes === jacobAsHaRes === kubaImRes === kubaRes === goldbergRes
7.20605
9.41933
10.207
12.3219
21.7913
True
Inspired by @Feyre's comment, we can modify the behavior of Dispatch before running ArrayComponents to capture the rules:
componentRules[list_] := InternalInheritedBlock[{Dispatch, flag=True},
Unprotect[Dispatch];
a_Dispatch /; flag := Block[{flag=False}, Throw[Normal[a][[4;;]]]];
Catch @ ArrayComponents[list]
]
A brief speed comparison:
nn = 10^7;
data = FromCharacterCode /@ RandomInteger[{0,65536-1}, nn];
r1 = jacobFu[data]; //AbsoluteTiming
r2 = componentRules[data]; //AbsoluteTiming
r1 === r2
{7.67635, Null}
{3.21433, Null}
True
Note that this answer and @MrWizard's answer are the only ones that work for matrices or arrays with rank greater than 1.
• In v10.1 I seem to need an Unprotect[Dispatch]; in there to get things working. Thanks for demonstrating such an interesting approach! – Mr.Wizard Aug 26 '17 at 6:18
• @Mr.Wizard Thanks, fixed. – Carl Woll Aug 26 '17 at 6:46
It seems that ArrayComponents itself can be a bit slow. Seeking an alternative I tried this:
data = FromCharacterCode @ RandomInteger[{97, 122}, {500000, 4}];
r1 = componentRules[data]; // RepeatedTiming (* Carl Woll's function *)
r2 =
Thread[# -> Range@Length@#] &@DeleteDuplicates[Flatten@data]; // RepeatedTiming
r1 === r2
{0.84, Null}
{0.260, Null}
True
• This gives an incorrect answer. For instance, result is {"a"->1, "a"->2} for the list {"a", "a", "b"}. – Carl Woll Aug 26 '17 at 5:43
• @CarlWoll Thanks! I thought I was missing something but I couldn't see it. Embarrassing. <:-o – Mr.Wizard Aug 26 '17 at 6:15
• Much better. Note that ArrayComponents is written in top-level, so I'm not surprised that you can do better. – Carl Woll Aug 26 '17 at 6:44
SparseArray, like Assocoation, takes the first of repeated entries:
data = {"a", "b", "a"};
cmp = ArrayComponents[data];
Most@ArrayRules@SparseArray[cmp -> data]
{{1} -> "a", {2} -> "b"}
To get rid of braces
MapAt[## & @@ # &, %, {{All, 1}}]
{1 -> "a", 2 -> "b"}
Also:
sa = SparseArray[cmp -> data];
• Not particularly fast, at least in 10.1, but an interesting approach. The Thread method just added is faster. – Mr.Wizard Aug 26 '17 at 5:05
• @Mr.Wizard, haven't checked timings. (I suspect ArrayRules is the culprit). – kglr Aug 26 '17 at 5:09 | 2020-01-23 07:57:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2373700588941574, "perplexity": 13217.621343695568}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250609478.50/warc/CC-MAIN-20200123071220-20200123100220-00312.warc.gz"} |
https://istopdeath.com/find-the-angle-between-the-vectors-3-3-12/ | # Find the Angle Between the Vectors (-3,-3) , (-1,2)
(-3,-3) , (-1,2)
The equation for finding the angle between two vectors θ states that the dot product of the two vectors equals the product of the magnitudes of the vectors and the cosine of the angle between them.
u⋅v=|u||v|cos(θ)
Solve the equation for θ.
θ=arc⋅cos(u⋅v|u||v|)
Find the dot product of the vectors.
To find the dot product, find the sum of the products of corresponding components of the vectors.
u⋅v=u1v1+u2v2
Substitute the components of the vectors into the expression.
-3⋅-1-3⋅2
Simplify.
Remove parentheses.
-3⋅-1-3⋅2
Simplify each term.
Multiply -3 by -1.
3-3⋅2
Multiply -3 by 2.
3-6
3-6
Subtract 6 from 3.
-3
-3
-3
Find the magnitude of u.
To find the magnitude of the vector, find the square root of the sum of the components of the vector squared.
(u1)2+(u2)2
Substitute the components of the vector into the expression.
(-3)2+(-3)2
Simplify.
Raise -3 to the power of 2.
9+(-3)2
Raise -3 to the power of 2.
9+9
18
Rewrite 18 as 32⋅2.
Factor 9 out of 18.
9(2)
Rewrite 9 as 32.
32⋅2
32⋅2
Pull terms out from under the radical.
32
32
32
Find the magnitude of v.
To find the magnitude of the vector, find the square root of the sum of the components of the vector squared.
(u1)2+(u2)2
Substitute the components of the vector into the expression.
(-1)2+(2)2
Simplify.
Raise -1 to the power of 2.
1+(2)2
Raise 2 to the power of 2.
1+4
5
5
5
Substitute the values into the equation for the angle between the vectors.
θ=arccos(-3(32)⋅(5))
Simplify.
Cancel the common factor of -3 and 3.
Factor 3 out of -3.
arccos(3⋅-132⋅5)
Cancel the common factors.
Factor 3 out of 32⋅5.
arccos(3⋅-13(2⋅5))
Cancel the common factor.
arccos(3⋅-13(2⋅5))
Rewrite the expression.
arccos(-12⋅5)
arccos(-12⋅5)
arccos(-12⋅5)
Simplify the denominator.
Combine using the product rule for radicals.
arccos(-12⋅5)
Multiply 2 by 5.
arccos(-110)
arccos(-110)
Multiply -110 by 1010.
arccos(-110⋅1010)
Combine and simplify the denominator.
Multiply -110 and 1010.
arccos(-101010)
Raise 10 to the power of 1.
arccos(-1010110)
Raise 10 to the power of 1.
arccos(-10101101)
Use the power rule aman=am+n to combine exponents.
arccos(-10101+1)
arccos(-10102)
Rewrite 102 as 10.
Use axn=axn to rewrite 10 as 1012.
arccos(-10(1012)2)
Apply the power rule and multiply exponents, (am)n=amn.
arccos(-101012⋅2)
Combine 12 and 2.
arccos(-101022)
Cancel the common factor of 2.
Cancel the common factor.
arccos(-101022)
Divide 1 by 1.
arccos(-10101)
arccos(-10101)
Evaluate the exponent.
arccos(-1010)
arccos(-1010)
arccos(-1010)
Move the negative in front of the fraction.
arccos(-1010)
Evaluate arccos(-1010).
1.89254688
1.89254688
Find the Angle Between the Vectors (-3,-3) , (-1,2) | 2022-10-04 10:37:04 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101920485496521, "perplexity": 993.7158552689243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00149.warc.gz"} |
http://www.takiguchishika.com/blueprint-template-vunieq/4368a4-black-cherry-yogurt-muffins | 2 Answers. On heating above 373K monohydrate changes to an anhydrous white powder callled soda ash . Become a Study.com member to unlock this How do you calculate the percent water in the hydrated washing soda? It is soluble in water. Na2CO3.10H2O is a colorless crystal solid at room temperature. 1 decade ago. Answer: Question 36. answer! Other calculations e.g. Two Na, 1C, 3 O, 20 H and 2 more O atoms. Only hazard codes with percentage values above 10% are shown. The molar mass of sodium carbonate is 106g/mole (google). Answer: The mass percentage of water of crystallization in washing soda is 62.9 %. Our experts can answer your tough homework and study questions. molar mass water = 18 g/mol. Ionic compounds that contain a transition metal are often highly colored. With the help of a chemical equation explain the method of preparation of both Na 2 CO 3.10H 2 O and Na 2 CO 3.Also list two uses of Na 2 CO 3.10H 2 O. Then, you find out the mass of Na2CO3 needed to give .025 mol (the amount of moles in the solution) moles= mass / molar mass Therefore, mass = moles X molar mass =0.025 x 106 2.65g Hope this helps . Write the chemical name of Na 2 CO 3.10H 2 O and Na 2 CO 3.Write the significance of 10H 2 O. it is readily soluble in water. What is the experimental mass percent of water . !â âOn the occasion of Chocol Examples of molar mass computations: NaCl, Ca(OH)2, K4[Fe(CN)6], CuSO4*5H2O, water, nitric acid, potassium permanganate, ethanol, fructose. Calculate the number of molecules of sulphur (S 8) present in 128 g of sulphur. Lv 7. 4 years ago. The percentage value in parenthesis indicates the notified classification ratio from companies that provide hazard codes. Convert grams Na2CO3.10H2O to moles or moles Na2CO3.10H2O to grams. ⦠na2co3.10h20----->na2co3 + 10.h20. Chemistry, 21.06.2019 20:10, maribel2421. mol-1 Hygroscopic white powder (anhydrous), it is the sodium salt of carbonic acid [CO3, H2O]; in everyday language, also known as washing soda, because of its sodium content and its usually ⦠Sciences, Culinary Arts and Personal â¦. What is the percentage of water in the hydrate {eq}\rm Na_2CO_3\cdot 10H_2O{/eq}? Unit 5.5 Percent of Water in a Hydrate Percent of Water in a Hydrate Many ionic compounds naturally contain water as part of the crystal lattice structure. Chemistry-Bonding Na2CO3.10H2O. 8 years ago. If 10.0 moles of aluminum react with 10.0 moles of fluorine gas, how many moles of aluminum fluoride wil thanks so much! Answers: 2 Get = Other questions on the subject: Chemistry. if you are satisfy with my ans then mark me as a brainlist. Strongarm. The percentage by weight of any atom or group of atoms in a compound can be computed by dividing the total weight of the atom (or group of atoms) in the formula by the formula weight and multiplying by 100. $\endgroup$ â CCR Oct 6 '14 at 11:21 â¦, ate Day, I wish that each and every day of your life is as sweet a blessing as chocolateâ¦.. Best wishes to you.ââ¤ï¸â¤ï¸â¤ï¸â¤ï¸, Q. Answer using three significant figures. Add up the atomic masses of all the atoms in the substance first. = > 62.93. A 2.714 g sample of washing soda is heated until a constant mass of 1.006 of Na2CO3 is reached. Molar mass of Na2CO3*10H2O is 286.1412 g/mol Convert between Na2CO3*10H2O weight and moles. Information may vary between notifications depending on impurities, additives, and other factors. Molar mass = 286. molar mass of water = 180. To do that, start with what you know. Its formula is Na2CO3 X xH20. 2C4H10 + 13O2-->10H2O + 8CO2 Help. Na2CO3.10H2O Molar mass = 286 molar mass of water = 180 % of water = > 180/286*100 = > 62.93 00. What is the empirical formula for this compound? Mention the term used for water molecules attached with a salt. On heating below 373K...it looses 9 molecules of crystallization to form a monohydrate (na2co3.h20) na2co3.10h20----->na2co3.h20 + 9.h20. Favorite Answer. The mass percent of each element is called the percentage composition of the mixture or solution. What is the percentage of water in na2co3 x ⦠Molar mass MgSO4 = ⦠Your goal here is to figure out how many grams of water you have in "100 g" of sodium sulfate decahydrate, "Na"_2"SO"_4 * 10"H"_2"O". Answer Save. A compound is found to contain 50.05% sulfur and 49.95% oxygen by mass. The molar mass of water is 18.01g/mole so for 10 moles of water we have a mass of 180.1g. Percentage of Oxygen [O] in Sodium Carbonate [Na2CO3] = 48/106 x 100 = 45.28%. Create your account. Write the chemical name of Na 2 CO 3.10H 2 O and Na 2 CO 3.Write the significance of 10H 2 O. Mention the term used for water molecules attached with a salt. 7542-12-3. This site is using cookies under cookie policy. © copyright 2003-2021 Study.com. Chemical formula of washing soda is Na2CO3.10 H2O. Love you forever my janu â¤ï¸â¤ï¸!! Na 2 CO 3 ⢠10H 2 O is a hydrate that contains 10 water molecules attached to it. Mass percent of an elemental component or a group is one ans is the amount present relative to that of the whole compound in percent (per 100 parts of the whole). Sodium carbonate decahydrate, Na2CO3 ⢠10H2O _ % by mass H2O. Question 35. sodium carbonate decahydrate is Na2CO3.10H20. percent water in Na2CO3*10H2O? i just did it before the reply but this was very clear, cheers 1. ⦠What is the percentage of water in the hydrate {eq}\rm Na_2CO_3\cdot 10H_2O{/eq}? Well molarity is always calculated for the solution .So when crystals of $\ce{Na2CO3 * 10H2O}$ is dissolved in water then water of crystallization plays no role in determination of molarity since only no of moles of$\ce{Na2CO3}$ is taken in account. Historically it was extracted from the ashes of plants growing in sodium-rich soils. SO S2O SO2 Services, Working Scholars® Bringing Tuition-Free College to the Community. All rights reserved. Bobby. Calculate the mass of water produced when 3.09g of butane reacts with excess oxygen. There are ten water molecules in one molecule of the given hydrated compound. Sodium carbonate concentrate, Na2CO3 72 mM in water, IC eluent concentrate (20x) for Metrosep A Supp 7 Sodium carbonate forms such a hydrate, in which 10 water molecules are present for every formula unit of sodium carbonate. 14 Write scientific reasons:*(1) Atomic radius goes on decreasing whilegoing from left to right in a period.â, Use this equation- 2 Al + 3 F2 -> 2 AlF31. Formula unit mass of Na2CO3.10H2O = 2 x Atomic mass of Na + Atomic mass of C + 3 x Atomic mass of O + 10 x Molecular mass of H 2 O = 2 x 23 + 12 +3 x 16 + 10 x 18 = 286 u. A hydrate is a substance that has water molecules physically attached to a compound. Find the percentage of water in sodium carbonate decachydrate, Na2CO3â¢10H2O, which has a molar mass of 286.14 g/mol. You can specify conditions of storing and accessing cookies in your browser, Calculate the percentage of water of crystallisation washing soda NA2CO3.10H2O, Wishing you all the sweetness and happiness in this world as we celebrate Chocolate Dayâ¦. To calculate the mass percent of water of crystallization in given compound, we use the equation: Putting values in above equation, we get: Hence, the mass percentage of water of crystallization in washing soda is 62.9 %. The percentage of water is 62,9 %. Percentage of Carbon [C] in Sodium Carbonate [Na2CO3] = 12/106 x 100 = 11.32%. All other trademarks and copyrights are the property of their respective owners. Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q&A library. if there is something wrong ⦠Molar mass Na2CO3 x 10 H2O = 286 g/mol % water = 180 x 100 / 286 = 62.9 . Sodium carbonate concentrate, 0.1 M Na2CO3 in water, eluent concentrate for IC. Na2CO3.10H2O is used for water purification and paper manufacturing. When sodium carbonate crystals are exposed to air, it loses water of crystallisation and turn to white powder. What is the percentage of water to the nearest tenth in this compound Na2CO3 10H2O? 63.0%. What is the percent by mass of water in na2so4 ⢠10h2o? Thanks! l be produced?Please i need this a soon as possible. Its weight is - 23 * 2 + 12 + 48 + 10 * ( 2 + 16 ) = 106 + 180 = 286. More specifically, start with the fact that 1 mole of sodium sulfate decahydrate contains one mole of sodium sulfate, 1 xx "Na"_2"SO"_4 ten moles of water⦠Sodium carbonate, Acculute Standard Volumetric Solution, Final Concentration 0.1N. Relevance. Sodium sulfate decahydrate is 55.9 %"H"_2"O". Think You Can Provide A Better Answer ? % purity, % percentage & theoretical yield, dilution of solutions (and diagrams of apparatus), water of crystallisation, quantity of reactants required, atom economy 14.1 % purity of a product 14.2a % reaction yield 14.2b atom economy 14.3 dilution of solutions Chemistry. Limiting Reactants & Calculating Excess Reactants, Mole-to-Mole Ratios and Calculations of a Chemical Equation, Limiting Reactant: Definition, Formula & Examples, Collision Theory: Definition & Significance, Dalton's Law of Partial Pressures: Calculating Partial & Total Pressures, Calculating Reaction Yield and Percentage Yield from a Limiting Reactant, Boyle's Law: Gas Pressure and Volume Relationship, Rate of a Chemical Reaction: Modifying Factors, The pH Scale: Calculating the pH of a Solution, Gay-Lussac's Law: Gas Pressure and Temperature Relationship, The Activity Series: Predicting Products of Single Displacement Reactions, Heat of Fusion & Heat of Vaporization: Definitions & Equations, The Quantum Mechanical Model: Definition & Overview, Lewis Structures: Single, Double & Triple Bonds, Calculating Molarity and Molality Concentration, How to Calculate Percent Yield: Definition, Formula & Example, General Chemistry Syllabus Resource & Lesson Plans, DSST Principles of Physical Science: Study Guide & Test Prep, Principles of Physical Science: Certificate Program, Glencoe Chemistry - Matter And Change: Online Textbook Help, Physical Science for Teachers: Professional Development, OSAT Chemistry (CEOE) (004): Practice & Study Guide, Organic & Inorganic Compounds Study Guide, Science 102: Principles of Physical Science, High School Physical Science: Help and Review, Biological and Biomedical % of water = > 180/286*100. Molar mass Na2CO3 = 106 g/mol. What is x? The concentration of a species can be expressed in terms of its mass in multiple ways. this process is called as EFFERVECENSE Mass Percent of a Component: The concentration of a species can be ⦠Starch and are common polysaccharide carbohydrates found in plants. Water %age = 180 / 286 * 100 = 62.94%. 1332-57-6. A hydrate is a compound that has one or more water molecules bound to each formula unit. Molar mass of Na2CO3.10H2O = 286.14124 g/mol. Sodium carbonate, Na 2 CO 3, (also known as washing soda, soda ash and soda crystals) is the inorganic compound with the formula Na 2 CO 3 and its various hydrates. All forms are white, water-soluble salts that yield moderately alkaline solutions in water. calculate the wavelength of the third line in the lyman series of hydrogen. With the help of a chemical equation, explain the method of preparation of both Na2CO 3.10H 2 O and Na 2 CO 3.Also list two uses of Na 2 CO 3. What is the percentage of water in the following compound? Process is called as EFFERVECENSE molar mass of water in sodium carbonate concentrate 0.1! Are satisfy with my ans then mark me as a brainlist two Na, 1C, O... The percent by mass H2O to a compound is found to contain 50.05 % sulfur and %! { eq } \rm Na_2CO_3\cdot 10H_2O { /eq } the term used water. Atoms in the following compound = 180 x 100 = 45.28 % molecule of the third line in the washing... The atoms in the hydrate { eq } \rm Na_2CO_3\cdot 10H_2O { /eq } calculate the percent by H2O. As a brainlist x ⦠sodium carbonate crystals are exposed to air it... The atoms in the substance first 45.28 % nearest tenth in this compound Na2CO3 10H2O compound... This video and our entire Q & a library carbonate is 106g/mole google... M Na2CO3 in water, IC eluent concentrate for IC trademarks and are. In 128 g of sulphur ( S 8 ) present in 128 g of.... Start with what you know, Na2CO3â¢10H2O, which has a molar mass MgSO4 = ⦠what is the water... The percent by mass H2O the substance first of its mass in multiple ways only hazard codes satisfy my. Value in parenthesis indicates the notified classification ratio from companies that provide codes... Alkaline solutions in water 180 / 286 = 62.9 decahydrate, Na2CO3 72 mM water! Na2Co3¢10H2O, which has a molar mass of water in the lyman of! Mention the term used for water molecules attached with a salt a Component: mass... For water molecules in one molecule of the given hydrated compound in na2so4 ⢠_... Molecules in one molecule of the third line in the following compound is found to contain 50.05 % and... Of water to the nearest tenth in this compound Na2CO3 10H2O mass Na2CO3 x ⦠sodium sulfate decahydrate is %! 10.0 moles of water in Na2CO3 x ⦠sodium sulfate decahydrate is Na2CO3.10H20 percentage value in parenthesis the! = 286. molar mass Na2CO3 = 106 g/mol ratio from companies that provide hazard with!, which has a molar mass of water to the nearest tenth this. To grams ( 20x ) for Metrosep a Supp 7 Other calculations e.g } \rm Na_2CO_3\cdot 10H_2O { /eq?... [ Na2CO3 ] = 12/106 x 100 = 62.94 % [ C ] in sodium carbonate concentrate, â¢! 10.0 moles of water is 18.01g/mole so for 10 moles of fluorine gas, how many moles of aluminum with... Percent by mass a mass of water we have a mass of of... The concentration of a species can be ⦠sodium carbonate, Acculute Volumetric. Mass in multiple ways in washing soda is 62.9 % of Na2CO3 is reached sample. The wavelength of the mixture or Solution all the atoms in the substance first the series! A Supp 7 Other calculations e.g Metrosep a Supp 7 Other calculations e.g when sodium carbonate, Standard... The wavelength of the third line in the hydrate { eq } \rm Na_2CO_3\cdot {! White powder Other calculations e.g crystallization in washing soda is 62.9 % plants growing in sodium-rich soils do you the... Of plants growing in sodium-rich soils and 49.95 % oxygen by mass changes to an anhydrous white powder of! Or more water molecules attached to it ] in sodium carbonate decahydrate Na2CO3.10H20... Trademarks and copyrights are the property of their respective owners a Component: the concentration of a can. Notified classification ratio from companies that provide hazard codes with percentage values above 10 are... Is reached species can be ⦠sodium sulfate decahydrate is 55.9 % '' H '' _2 '' ''... That contain a transition metal are often highly colored property of their respective owners a molar mass water... Up the atomic masses of all the atoms in the following compound CO 3 ⢠10H 2.! Na2Co3.10H2O to grams compound is found to contain 50.05 % sulfur and %. Extracted from the ashes of plants growing in sodium-rich soils values above 10 % are shown Na2CO3. Water-Soluble salts that yield moderately alkaline solutions in water, IC eluent for. Mention the term used for water molecules attached with a salt water to nearest... You are satisfy with my ans then mark me as a brainlist 128 g of sulphur ( 8., Na2CO3 72 mM in water, eluent concentrate ( 20x ) for Metrosep a Supp Other! Concentration of a species can be ⦠sodium carbonate, Acculute Standard Volumetric Solution, concentration. Mass = 286. molar mass = 286. molar mass of Na2CO3 * 10H2O and... In the hydrate { eq } \rm Na_2CO_3\cdot 10H_2O { /eq }, Final concentration.! 3 O, 20 H and 2 more O atoms tenth in na2co3 10h2o percentage of water compound Na2CO3 10H2O of Carbon C!, 1C, na2co3 10h2o percentage of water O, 20 H and 2 more O.. A substance that has water molecules attached to a compound is found to contain 50.05 sulfur... & Get your Degree, Get access to this video and our entire Q & a library of 2... Of butane reacts with excess oxygen EFFERVECENSE molar mass = 286. molar mass of 180.1g vary notifications. Copyrights are the property of their respective owners property of their respective.. = 180 / 286 * 100 = 62.94 % Get = Other questions on the subject: Chemistry O... G of sulphur ( S 8 ) present in 128 g of sulphur are white, water-soluble salts yield. ¢ 10H 2 O and Na 2 CO 3 ⢠10H 2 O a! 2 CO 3.10H 2 O and Na 2 CO 3 ⢠10H 2 O and Na 2 CO â¢... Is heated until a constant mass of water in the hydrate { eq } \rm Na_2CO_3\cdot 10H_2O { }... Google ) the notified classification ratio from companies that provide hazard codes = 12/106 x na2co3 10h2o percentage of water = 11.32 % crystallisation... \Rm Na_2CO_3\cdot 10H_2O { /eq } 45.28 %, water-soluble salts that yield moderately alkaline solutions in water moles water... The atoms in the hydrate { eq } \rm Na_2CO_3\cdot 10H_2O { }... Percent of each element is called the percentage of water is 18.01g/mole so for 10 moles aluminum... X ⦠sodium sulfate decahydrate is 55.9 % '' H '' _2 '' O '' to! In this compound Na2CO3 10H2O 286.14 g/mol 106 g/mol Standard Volumetric Solution, Final concentration.! To grams in sodium carbonate decachydrate, Na2CO3â¢10H2O, which has a mass. And Na 2 CO 3.10H 2 O and Na 2 CO 3.10H O. Is 106g/mole ( google ) on the subject: Chemistry of molecules of sulphur we have a mass of we! 13O2 -- > 10H2O + 8CO2 Help 286 * 100 = 45.28 % 72 mM water. Of sulphur ( S 8 ) present in 128 g of sulphur one molecule the! Wavelength of the given hydrated compound vary between notifications depending on impurities na2co3 10h2o percentage of water additives, Other... Na2Co3 ] = 48/106 x 100 / 286 * 100 = 45.28.... Are ten water molecules attached with a salt write the chemical name of Na 2 3.Write. When sodium carbonate [ Na2CO3 ] = 12/106 x 100 = 45.28 % on impurities, additives, Other. It was extracted from the ashes of plants growing in sodium-rich soils hydrated washing soda Please need! 20X ) for Metrosep a Supp 7 Other calculations e.g one or water... Constant mass of water in the following compound changes to an anhydrous white powder SO2 molar mass =! With 10.0 moles of aluminum fluoride wil ⦠carbonate, Acculute Standard Volumetric Solution, concentration! Concentration of a species can be ⦠sodium carbonate crystals are exposed to air, it water! Or Solution on impurities, additives, and Other factors, Acculute Standard Volumetric Solution, Final concentration.... Line in the substance first [ Na2CO3 ] = 12/106 x 100 = 62.94 % _2 O! A substance that has one or more water molecules attached to a compound is to... Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q & a.! A brainlist EFFERVECENSE molar mass = 286. molar mass of water in sodium carbonate concentrate, Na2CO3 ⢠10H2O %! 106G/Mole ( google )? Please i need this a soon as possible Na2CO3 ] = x. Me as a brainlist that, start with what you know is a compound that has molecules... Is 18.01g/mole so for 10 moles of water to the nearest tenth in this Na2CO3. Compound is found to contain 50.05 % sulfur and 49.95 % oxygen by of. How many moles of fluorine gas, how many moles of water of crystallization in washing soda carbohydrates in! Mass = 286. molar mass of water is 18.01g/mole so for 10 moles of fluorine gas, many. The significance of 10H 2 O significance of 10H 2 O is a substance that has one or water. Of its mass in multiple ways tenth in this compound Na2CO3 10H2O physically to. Acculute Standard Volumetric Solution, Final concentration 0.1N * 100 = 45.28.. Given hydrated compound 3.09g of butane reacts with excess oxygen have a mass of 1.006 of *! Atoms in the lyman series of hydrogen yield moderately alkaline solutions in water IC... To grams expressed in terms of its mass in multiple ways the mass percent of a:... Get = Other questions on the subject: Chemistry additives, and Other factors the! { /eq } '' _2 '' O '' has a molar mass Na2CO3 106. 10H2O _ % by mass H2O Na2CO3 x ⦠sodium carbonate decahydrate Na2CO3.10H20!
3 Cheese Hamburger Helper Calories, Honda Cbr 150 Price Philippines 2020, Jean Kirstein Season 4 Episode 1, Aerogarden Farm Canada, Rentskis Com Beaver Creek, Quotes About Contentment In God, Clinical Trials Masters Degree Online, How To Grow Carrots From Carrot Tops Youtube, Thanga Thamarai Magale Masstamilan, What Is Preferred Between Div And Table, Language Model Perplexity Python, Weight Gain Shakes For Cancer Patients, Morrisons Healthy Ready Meals, | 2022-05-20 00:50:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5042377710342407, "perplexity": 5908.253152708159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00785.warc.gz"} |
http://mathhelpforum.com/calculus/25845-converge-diverge.html | 1. converge or diverge
does this series converge or diverge?
n = 2 E infinity
2/(n^2 -1)
..i've attempted to solve this, and i know the answer is 3/2
just not sure why i could not divide the numerator and denominator by the largest exponent in the denominator (n^2). is that because its not in the form of infininty/infinity?
also, tried using summation notation, it did not work out.
i know it can't be the harmonic series because that would diverge.
2. Originally Posted by rcmango
does this series converge or diverge?
n = 2 E infinity
2/(n^2 -1)
.
$\left| \frac{2}{n^2 - 1} \right| \leq \frac{2}{n^2 - \frac{1}{2}n^2} = \frac{4}{n^2}$
Now use comparasion test.
3. Originally Posted by rcmango
does this series converge or diverge?
n = 2 E infinity
2/(n^2 -1)
..i've attempted to solve this, and i know the answer is 3/2
just not sure why i could not divide the numerator and denominator by the largest exponent in the denominator (n^2). is that because its not in the form of infininty/infinity?
also, tried using summation notation, it did not work out.
i know it can't be the harmonic series because that would diverge.
to find the sum, you can realize that you have a telescoping sum. note that $\frac 2{n^2 - 1} = \frac 1{n - 1} - \frac 1{n + 1}$
write out some of the terms of the series (be sure to include the last few terms as well, that is, say, the (n - 2)th term, the (n - 1)th term, the nth term). try to see a pattern of what cancels out and come up with an expression for what's left. then let $n \to \infty$ and you should get your result
4. Okay, thanks for the help, i can't used the comparison test because i haven't learned it yet.
however i can use the telescoping series test, is that the same as the collapsing series?
also.. i've attached an image of what work i've done, I apoligize it is very messy, sorry was in a hurry.
btw, i see how the only terms of 1/2 and 1 are left only. but what happened to the (1/n-1) - (1/n+1) do those cancel out completely?
we used partial sums here to split the fraction up then?
thanks alot.
5. Originally Posted by rcmango
Okay, thanks for the help, i can't used the comparison test because i haven't learned it yet.
however i can use the telescoping series test, is that the same as the collapsing series?
i don't know. don't think i've heard the term "collapsing series" before.
also.. i've attached an image of what work i've done, I apoligize it is very messy, sorry was in a hurry.
where's the image?
btw, i see how the only terms of 1/2 and 1 are left only. but what happened to the (1/n-1) - (1/n+1) do those cancel out completely?
this is why i told you to write out some terms. you'd realize that the (n - 2)th term cancels the $\frac 1{n - 1}$, but the $\frac 1{n + 1}$ stays. you'd have to take the limit as $n \to \infty$ to get the answer
we used partial sums here to split the fraction up then?
i used partial fractions decomposition to get the two fractions | 2017-03-31 00:57:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9375353455543518, "perplexity": 450.7914574174096}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218205046.28/warc/CC-MAIN-20170322213005-00645-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://moduslaborandi.net/post/gitlabci-101/ | # Gitlab CI 101
This post is a tutorial for beginners, to start with continuous integration of Gitlab, GitlabCI, in a project with Docker. The main goal is understand the different pieces that take part in the process, and be able to automate some tasks during the development of your application.
## Assumptions
To follow properly this tutorial, you’ll need a small project (really smal!) and you need to know git. It will be also required that you have a free Gitlab.com account. This tutorial represents the natural continuation of the tutorial I published about docker, and it assumes that the cointainer part is solved. If you don’t feel skilled in this docker stuff, I recommend you to do first the tuturial about introduction to docker and introduction to Dockerfile.
## Continuous integration
According to the Wikipedia:
Continuous integration is an informatic model, initially proposed by Martin Fowler, consisting on making as many automatic integrations of a project as possible, so errors can be detected as soon as possible. We understand integration as compilation and test execution of an entire project.
So GitlabCI is a tool that makes it easy to test automatically a project and show errors as soon as they appear. Among the continuous integration tools family, others that are famous are Travis, that belongs to Github which is integrated with Github, and Jenkins. Of these three, GitlabCI and Jenkins are open source.
## We have a project
We start with a project, better if it’s a small one. The project I used it’s a half calculator, written in Python. Feel free to fork it if you find it useful for this tutorial. If you’re using it, I suggest that you go to the tag ‘v0.2’, which the first point to follow this tutorial:
$git reset --hard v0.2 At this point, we have a code base, some tests and a Dockerfile. ## Let's CI We’re using gitlab.com which is properly configured and isolates ourselves from the configuration, so we can start working. We want the following workflow: • we do some stuff and push some code to master (tipically we’d push to master after a peer review, for instance) • automatically, gitlab-ci takes this master branch, generates a doker image following the instructions in the Dockerfile • then it will run the tests of my new master branch inside a docker • it tests pass, then it will push the docker image to the gitlab registry • from now, this new docker image should be available in the registry It’s a very simple integration, and we don’t even deploy to a stage evironment, which could be desired. In this tutorial, we’ll just focus on the different parts. Now, we create a file called .gitlab-ci.yml (mind the starting dot) in the root of the project, so the tree of my super calculator would be like this: orbe :: formacion/gitlab-ci/calculus ‹master› » tree . ├── .gitlab-ci.yml ├── Dockerfile ├── requirements.txt └── src ├── calculus.py └── test_calculus.py Now, inside the .gitlab-ci.yml file, we write the following: # We're using the method "docker in docker" to build and run containers during the jobs [1] image: docker services: - docker:dind # We set a variable name, that we'll use afterwards variables: IMAGE_TAG:$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA # The code here is run before all jobs. # In this case, we need to log in the registry to be able to push the image before_script: - docker login -u gitlab-ci-token -p$CI_JOB_TOKEN $CI_REGISTRY # Our first job, wich buils the image, runs the test and pushes the image # If one step fails, the systems don't continue with the others test-library: stage: test script: - docker build -t$IMAGE_TAG .
- docker run $IMAGE_TAG pytest - docker push$IMAGE_TAG
# This section allow to configure when do we want the CI
# For this tutorial, we are only automating the tests for master branch
only:
- master
If you’d like to know more about the methods to build and run containers, like the “docker in docker” mentioned above [1], check the official documentation. Besides, you should take a look to the .gitlab-ci.yml reference to learn about the sections and the different options. An important topic in the yaml file are the gitlab variables, some automatic useful variables about the environment, like:
Variable GitLab Runner Description
CI_COMMIT_REF_SLUG 9.0 all $CI_COMMIT_REF_NAME lowercased, shortened to 63 bytes, and with everything except 0-9 and a-z replaced with -. Use in URLs and domain names. CI_COMMIT_SHA 9.0 all The commit revision for which project is built CI_COMMIT_TAG 9.0 0.5 The commit tag name. Present only when building tags. CI_DEBUG_TRACE all 1.7 Whether debug tracing is enabled You won’t need all the options in each job, but it’s useful to know what can be done. Once we have this new file, we add it to the repository, commit and push to gitlab.com (in master branch!). Now, go to the administration panel of the repository, to the Pipelines section. There you’ll see something like this: One last thing, now you can go to the Registry section and check that a new image is available in the registry. If you’d like to test it and close the circle, you should: # pull the image orbe :: ~ » docker pull registry.gitlab.com/yamila/calculus:7a14a50b3c18b6b923f7baf1495a928a88689dbc # run the tests orbe :: ~ » docker run registry.gitlab.com/yamila/calculus:7a14a50b3c18b6b923f7baf1495a928a88689dbc pytest ============================= test session starts ============================== platform linux -- Python 3.6.1, pytest-3.1.0, py-1.4.34, pluggy-0.4.0 rootdir: /calculus, inifile: collected 2 items test_calculus.py .. =========================== 2 passed in 0.01 seconds =========================== If you have a private project, you’ll see an “access forbiden” error when you try to pull the image; the reason is that you need docker logs in the registry: $ docker login registry.gitlab.com | 2019-03-23 21:01:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2850579023361206, "perplexity": 1992.3825170734856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203021.14/warc/CC-MAIN-20190323201804-20190323223804-00072.warc.gz"} |
https://brilliant.org/discussions/thread/gamblers-ruin-2/?ref_id=1561976 | # Gambler's ruin - 2
Continued from this note.
So far, we have examined the gambler's ruin problem on a small example. What can we say abut the more general case of starting out at some budget $$k$$ and deciding to cash out at some threshold $$n$$?
You walk into Martin Gale's Betting Room with an initial budget of $$k$$
As usual, you can play the following game any number of times (if you have what it costs)
• You pay Martin a dollar.
• Martin tosses a fair coin
• If the coin comes up heads Martin pays you two dollars. If it comes up tails, you get nothing.
You decide that you will play until you have increased your money to $$n$$, and then you will stop. Here, $$n>k$$. Of course you will also have to stop if you lose all your money (i.e. you are ruined).
What is the probability that you are ruined?
There are the number of ways to tackle this, but the approach I'm going to take is to set up a system of linear equations. Let $$r_{k,n}$$ denote the probability we want to compute, namely the probability that, starting out at a budget of $$k$$ you will lose all your money before you ever hit $$n$$.
Since it is pessimistic to talk about ruin, let's set up the system in terms of variables representing the probabilities of winning, i.e. the probability of cashing out at $$n$$. In what follows, we will keep $$n$$ fixed, and so we will subscript the variables with only the budget.
For $$0 \le i \le n$$, let $$w_{i}$$ be the probability that you cash out, starting from an initial budget of $$i$$. We want to compute $$w_{k}$$. If you play the game once, then with probability 1/2 you increase your budget by 1 and with probability 1/2 you decrease it by one. And then you get to play again, as if you were starting with your new budget. Thus
• $$w_0 =0$$ -- If you have no money you can't play, and therefore can't win.
• $$w_{i} = \frac{1}{2} (w_{i-1} + w_{i+1})$$ for $$1\le i\le n-1$$
• $$w_{n} = 1$$ -- Because of your decision to take the money and quit when you reach $$n$$.
This gives a system of linear equations in $$w_0 \dots w_n$$. Combining the first two of these we easily see that $w_2 = 2w_1$ We will use this as a base case of an induction. Now suppose it is true that $$w_{i} = i w_1$$ for all $$i<j \le n$$. Then we'll show it is also true for $$j$$ as follows: We know that $w_{j-1} = \frac{w_{j-2}+w_{j}}{2} \,.$ Using the induction hypothesis to substitute for $$w_{j-1}$$ and $$w_{j-2}$$ we have $w_{j} = 2(j-1)w_1 - (j-2) w_1 = j w_1 \,.$ But now, using the fact that $$w_n =1$$, we have $$w_1 = \frac{1}{n}$$, and finally $$w_k = \frac{k}{n}$$ It follows that the probability of ruin is $$\frac{n-k}{n}$$.
Now what about the question of how long you play for?
You walk into Martin Gale's Betting Room with an initial budget of $$k$$
As usual, you can play the following game any number of times (if you have what it costs)
• You pay Martin a dollar.
• Martin tosses a fair coin
• If the coin comes up heads Martin pays you two dollars. If it comes up tails, you get nothing.
You decide that you will play until you have increased your money to $$n$$, and then you will stop. Here, $$n>k$$. Of course you will also have to stop if you lose all your money (i.e. you are ruined).
How many games do you expect to play before you stop?
Again, setting up a system of linear equations solves the problem. This time the equations we have are - $$g_0 = g_n =0$$ and - $$g_{i} = 1+ \frac{ g_{i-1} + w_{g+1}}{2}$$ for $$1\le i\le n-1$$ where $$g_i$$ is the expected number of games starting at a budget of $$i$$. It is not too hard to solve this and see that we get $$g_k = k(n-k)$$
To be continued...
Note by Varsha Dani
2 months, 1 week ago
MarkdownAppears as
*italics* or _italics_ italics
**bold** or __bold__ bold
- bulleted- list
• bulleted
• list
1. numbered2. list
1. numbered
2. list
Note: you must add a full line of space before and after lists for them to show up correctly
paragraph 1paragraph 2
paragraph 1
paragraph 2
[example link](https://brilliant.org)example link
> This is a quote
This is a quote
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
MathAppears as
Remember to wrap math in $$...$$ or $...$ to ensure proper formatting.
2 \times 3 $$2 \times 3$$
2^{34} $$2^{34}$$
a_{i-1} $$a_{i-1}$$
\frac{2}{3} $$\frac{2}{3}$$
\sqrt{2} $$\sqrt{2}$$
\sum_{i=1}^3 $$\sum_{i=1}^3$$
\sin \theta $$\sin \theta$$
\boxed{123} $$\boxed{123}$$ | 2019-04-23 03:00:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9162358045578003, "perplexity": 445.7029151951716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578584186.40/warc/CC-MAIN-20190423015050-20190423041050-00357.warc.gz"} |
http://crypto.stackexchange.com/tags/key-exchange/new | Tag Info
1
No, IKEv2 has nothing analogous to 'main mode' and 'aggressive mode', and they eliminated the initial 'quick mode', When IKEv1 was originally written, they wanted a strong separation between IKE and IPSec; they had a vision where IKE might be used for things other than IPSec (other "Domains of Interpretation"). So, they completely isolated the "negotiate ...
2
I believe that you misunderstand what DH is doing. DH-key-exchange was innovated to defence man-in-the-middle attack, because hackers can not pretend the one you want to communicate without correct share key? or hacker don't know the key generator that Alice and Bob pre-agreed? Well, no, defending against active attackers, that is, attackers who can ...
0
The basic DH key exchange is unauthenticated. Authentication needs a different mechanism and has nothing to do with the key exchange. Depending on the attacker model, authentication is not possible, and especially it is not save vs man in the middle (e.g. in the Dolev Yao model). The attacker can just initiate a key exchange with both Alice and Bob, and ...
0
Add to the list FHMQV (probably covered by MQV and HMQV patents), and SM2 (Chinese standard for authenticated key agreement, patented by Chinese government, IPR terms unclear). I personally would probably use FHMQV (permissions/licensing issues aside). It is highly recommended to avoid trying to design your own. If you cannot use any of the existing ...
2
Yes for sure you can do that. Mapping this protocol to an elliptic curve setting is just like mapping DH key exchange to ECDH key exchange. In AugPAKE you work in a prime order $q$ subgroup of $Z_p^*$ and in the EC setting you use a prime order $q$ elliptic curve group. Observe that in the EC setting a multiplication of group elements in AugPAKE is then ...
Top 50 recent answers are included | 2014-04-19 05:12:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4605972170829773, "perplexity": 2989.373917047846}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://mathoverflow.net/feeds/question/26001 | Are the rationals homeomorphic to any power of the rationals ? - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-19T12:08:35Z http://mathoverflow.net/feeds/question/26001 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/26001/are-the-rationals-homeomorphic-to-any-power-of-the-rationals Are the rationals homeomorphic to any power of the rationals ? HenrikRüping 2010-05-26T12:22:55Z 2010-05-26T16:51:51Z <p>I asked myself, which spaces have the property that $X^2$ is homeomorphic to $X$. I started to look at some examples like $\mathbb{N}^2 \cong \mathbb{N}$, $\mathbb{R}^2\ncong \mathbb{R}, C^2\cong C$ (for the cantor set $C$). And then I got stuck, when I considered the rationals. So the question is:</p> <p>Is $\mathbb{Q}^2$ homeomorphic to $\mathbb{Q}$ ?</p> http://mathoverflow.net/questions/26001/are-the-rationals-homeomorphic-to-any-power-of-the-rationals/26004#26004 Answer by Tom Smith for Are the rationals homeomorphic to any power of the rationals ? Tom Smith 2010-05-26T12:32:21Z 2010-05-26T12:32:21Z <p>I don't think so: the completion of $\mathbb{Q}^2$ is $\mathbb{R}^2$, so that a homeomorphism $\mathbb{Q}^2\to\mathbb{Q}$ would give a homeomorphism $\mathbb{R}^2\to\mathbb{R}$? </p> http://mathoverflow.net/questions/26001/are-the-rationals-homeomorphic-to-any-power-of-the-rationals/26006#26006 Answer by Xandi Tuni for Are the rationals homeomorphic to any power of the rationals ? Xandi Tuni 2010-05-26T12:35:05Z 2010-05-26T12:35:05Z <p>Yes, they are homeomorphic. To construct a homeomorphism from $\mathbb Q$ to $\mathbb Q^2$, one can proceed roughly as follows: express $q\in \mathbb Q$ as a continued fraction $[a_0, a_1,a_2,...]$ (of finite length) and associate with it the pair $([a_0,a_2,...], [a_1,a_3,...])$.</p> <p>Mind that this is a homeomorphism, but not an isometry (cf comment on Tom's answer).</p> <p>I vaguely remember that there is a ceneral Theorem in point set topology stating that all coutable topological spaces "of the same kind as $\mathbb Q$" are homeomorphic.</p> http://mathoverflow.net/questions/26001/are-the-rationals-homeomorphic-to-any-power-of-the-rationals/26009#26009 Answer by Robin Chapman for Are the rationals homeomorphic to any power of the rationals ? Robin Chapman 2010-05-26T12:37:35Z 2010-05-26T16:51:51Z <p>Yes, Sierpinski proved that every countable metric space without isolated points is homeomorphic to the rationals: <a href="http://at.yorku.ca/p/a/c/a/25.htm" rel="nofollow">http://at.yorku.ca/p/a/c/a/25.htm</a> .</p> <p>An amusing consequence of Sierpinski's theorem is that $\mathbb{Q}$ is homeomorphic to $\mathbb{Q}$. Of course here one $\mathbb{Q}$ has the order topology, and the other has the $p$-adic topology (for your favourite prime $p$) :-)</p> | 2013-05-19 12:08:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9130986928939819, "perplexity": 890.5305750999717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697442043/warc/CC-MAIN-20130516094402-00043-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://abaqus-docs.mit.edu/2017/English/SIMACAEMATRefMap/simamat-c-pormetalplas.htm | # Porous metal plasticity
The porous metal plasticity model: is used to model materials with a dilute concentration of voids in which the relative density is greater than 0.9; is based on Gurson's porous metal plasticity theory (Gurson, 1977) with void nucleation and, in Abaqus/Explicit, a failure definition; and defines the inelastic flow of the porous metal on the basis of a potential function that characterizes the porosity in terms of a single state variable, the relative density. The following topics are discussed:
Related Topics About the material library Inelastic behavior In Other Guides *POROUS METAL PLASTICITY *POROUS FAILURE CRITERIA *VOID NUCLEATION Defining porous metal plasticity
ProductsAbaqus/StandardAbaqus/ExplicitAbaqus/CAE | 2022-08-08 16:02:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 60, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2684473991394043, "perplexity": 3902.89790578566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00291.warc.gz"} |
http://turbomachinery.asmedigitalcollection.asme.org/article.aspx?articleid=1467743 | 0
Research Papers
# A Criterion for Axial Compressor Hub-Corner Stall
[+] Author and Article Information
V.-M. Lei, Z. S. Spakovszky, E. M. Greitzer
Gas Turbine Laboratory, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139
For example, separation, with boundary layer fluid moving off the wall, occurs along the symmetry line on the end wall of a rectangular nozzle, where the streamwise pressure gradient is favorable (Greitzer et al. (1)). This type of three-dimensional separation, however, is associated with the confluence of boundary layer fluid due to cross-flow (Lighthill (2)) rather than by the inability of low stagnation pressure fluid to negotiate a pressure rise. In addition, there is no stagnation of the separating fluid, and the primary effect is rather a change of direction as the fluid leaves the wall.
We are indebted to Professor N. A. Cumpsty for his clarifying comments on this point.
We have found this phrase, due to Professor N. A. Cumpsty, useful in dispelling ambiguity surrounding discussions of the qualitative definition of compressor stall.
As shown later, the stall indicator $S$ correlates best with the diffusion parameter if the loading near the end wall is evaluated at a spanwise location of 10% chord. This is within the end wall boundary layer thickness in multistage compressors but sufficiently away from the end wall surface to avoid interference with localized low pressure regions associated with spanwise turning of the cross-flow in the hub corner. To generalize different blade passage geometries, the calculations suggest nondimensionalizing the spanwise distance by chord.
Vorticity of the opposite sign to the inviscid part of the flow, which is associated with the cross-stream pressure gradient, is created at the wall and diffused into the end wall region. Since $u∕U$ is almost always monotonic with distance from the wall, the opposite sign of the vorticity can be inferred from the opposite slopes of the two sides of the triangle.
This and the insensitivity to inlet blockage are reminiscent of the situation with straight diffusers where the flow regimes are not much affected by Reynolds number (Johnston (28)).
An aspect ratio of 0.5 was also considered but for such low values, the two end wall flows tend to merge developing full span stall and, under these circumstances, the separation indicator is no longer applicable.
We thank Dr. L. H. Smith for pointing this out.
This information, and the analysis, has been provided by Wellborn (31).
The assessment was based on CFD analysis.
J. Turbomach 130(3), 031006 (May 02, 2008) (10 pages) doi:10.1115/1.2775492 History: Received July 28, 2006; Revised February 12, 2007; Published May 02, 2008
## Abstract
This paper presents a new criterion for estimating the onset of three-dimensional hub-corner stall in axial compressor rotors and shrouded stators. A simple first-of-a-kind description of hub-corner stall formation is developed which consists of (i) a stall indicator, which quantifies the extent of the separated region via the local blade loading and thus indicates whether hub-corner stall occurs, and (ii) a diffusion parameter, which defines the diffusion limit for unstalled operation. The stall indicator can be cast in terms of a Zweifel loading coefficient. The diffusion parameter is based on preliminary design flow variables and geometry. Computational simulations and single and multistage compressor data are used to show the applicability of the criterion over a range of blade design parameters. The criterion also enables determination of specific flow control actions to mitigate hub-corner stall. As an illustration, a flow control blade, designed using the ideas developed, is seen to produce a substantial reduction in the flow nonuniformity associated with hub-corner stall.
<>
## Figures
Figure 1
Basic processes governing the formation of hub-corner stall together with limiting streamlines and separation lines
Figure 2
Definition of the Zweifel loading coefficient and relation to the stall indicator S
Figure 3
Cross-flow in the end wall region
Figure 4
Incoming end wall region skew due to moving end wall surfaces (i.e., rotor hubs or rotor drums underneath hub platforms in shrouded stators)
Figure 5
Comparison between the Lieblein DF and the diffusion parameter D. Squares denote cascades with skewed incoming end wall boundary layer.
Figure 6
Stagnation pressure loss coefficient ω (squares) and static pressure rise coefficient Cp (diamonds) as a function of diffusion parameter D for cascades with zero incidence. Solid symbols indicate hub-corner stall.
Figure 7
Formation of hub-corner stall for diffusion parameters D>0.4 (upper branch, S>0.12)
Figure 8
Limiting streamlines for two different compressor cascades: without (left) and with (right) hub-corner stall
Figure 9
Effects of blade aspect ratio, Reynolds number and incoming boundary layer thickness on hub-corner stall criterion
Figure 10
Evaluation of the hub-corner stall criterion for rotor and stator blade rows of five different production and research compressors (data courtesy of Rolls-Royce, Wellborn (31)).
Figure 11
Flow control of hub-corner stall via cross-flow of opposite sign to blade passage secondary flow: (a) upstream hub cavity leakage flow and (b) air injection in the blade passage
Figure 12
Effect of flow control on stall indicator and diffusion parameter: (a) hub cavity leakage flows and (b) suction surface air injection (squares)
Figure 13
Contours of computed stagnation pressure at cascade exit for (A) datum case, (B) streamwise air injection, and (C) streamwise plus spanwise air injection
Figure 14
Contours of stagnation pressure at compressor cascade exit: (A) datum case, (B) flow control with 0.4% of passage mass flow, and (C) flow control with 0.8% of passage mass flow. Top: Linear compressor cascade experiment; bottom: numerical simulation results.
## Discussions
Some tools below are only available to our subscribers or users with an online account.
### Related Content
Customize your page view by dragging and repositioning the boxes below.
Related Journal Articles
Related Proceedings Articles
Related eBook Content
Topic Collections | 2018-06-23 19:22:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33081451058387756, "perplexity": 4566.633546698277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865181.83/warc/CC-MAIN-20180623190945-20180623210945-00313.warc.gz"} |
https://procyonic.org/blog/category/videogames/ | # Goals, Anti-Goals and Multi-player Games
In this article I will try to address Keith Burgun‘s assertion that games should have a single goal and his analysis of certain kinds of goals as trivial or pathological. I will try to demonstrate that multi-player games either reduce to single player games or necessitate multiple goals, some of which are necessarily the sorts of goals which Burgun dismisses as trivial. I’ll try to make the case that such goals are useful ideas for game designers as well as being necessary components of non-trivial multi-player games.
(Note: I find Keith Burgun’s game design work very useful. If you are interested in game design and have the money, I suggest subscribing to his Patreon.)
# Notes on Burgun’s Analytical Frame
## The Forms
Keith Burgun is a game design philosopher focused on strategy games, which he calls simply games. He divides the world of interactive systems into four useful forms:
1. toys – an interactive system without goals. Discovery is the primary value of toys.
2. puzzle – bare interactive system plus a goal. Solving is the primary value of the puzzle.
3. contests – a toy plus a goal all meant to measure performance.
4. games – a toy, plus a goal, plus obfuscation of game state. The primary value is in synthesizing decision making heuristics to account for the obfuscation of the game state.
A good, brief, video introduction to the forms is available here. Burgun believes a good way to construct a game is to identify a core mechanism, which is a combination of a core action, a core purpose, and a goal. The action and purpose point together towards the goal. The goal, in turn, gives meaning to the actions the player can take and the states of the interactive system.
## On Goals
More should be said on goals, which appear in many of the above definitions. Burgun has a podcast which serves as a good long form explication of many of his ideas. There is an entire episode on goals here. The discussion of goals begins around the fifteen minute mark.
Here Burgun provides a related definition of games: contests of decision making. Goals are prominent in this discussion: the goal gives meaning to actions in the game state.
Burgun raises a critique of games which feature notions of second place. He groups such goals into a category of non-binary goals and gives us an example to clarify the discussion: goals of the form “get the highest score.”
His analysis of the poorness of this goal is that it seems to imply a few strange things:
1. The player always gets the highest score they are capable of because the universe is deterministic.
2. These goals imply that the game becomes vague after the previous high score is beaten, since the goal is met and yet the game continues.
The first applies to any interactive system at all, so isn’t a very powerful argument, as I understand it. Take a game with the rules of Tetris except that the board is initialized with a set of blocks already on the board. The player receives a deterministic sequence of blocks and must clear the already present blocks, at which point the game ends. This goal is not of the form “find the highest score” or “survive the longest” but the game’s outcome is already determined by the state of the universe at the beginning of the game. From this analysis we can conclude that if (1) constitutes a downside to the construction of a goal, it doesn’t apply uniquely to “high score” style goals.
(2) is more subtle. While it is true that in the form suggested, these rules leave the player without guidelines after the goal is met, I believe that in many cases a simple rephrasing of the goal in question resolves this problem. Take the goal:
G: Given the rules of Tetris, play for the highest score.
Since Tetris rewards you for clearing more lines at once and since Tetris ends when a block becomes fixed to the board but touches the top of the screen, we can rephrase this goal as:
G': Do not let the blocks reach the top of the screen.
This goal is augmented by secondary goals which enhance play: certain ways of moving away from the negative goal G' are more rewarding than others. Call this secondary goal g: clear lines in the largest groups possible. Call G' and goals like it “anti-goals.”
This terminology implies the definition.
If a goal is a particular game state towards which the player tries to move, an anti-goal is a particular state which the player is trying to avoid. Usually anti-goals are of the form “Do not allow X to occur” Where X is related to a (potentially open ended) goal.
Goals of the “high score” or “survive” variety are (or may be) anti-goals in disguise. Rephrased properly, they can be conceived of in anti-goal language. Of course there are good anti-goals and bad ones, just as there are good goals and bad goals. However, I would argue that the same criteria applies to both types of goals: a good (anti) goal is just one which gives meaning to the actions a person is presented with over an interactive system.
# Multi-Player Games and Anti-Goals
I believe anti-goals can be useful game design, even in the single player case. In another essay I may try to make the argument that anti-goals must be augmented with mechanics which tend to move the player towards the anti-goal against which players must do all the sorts of complex decision making which produces value for players.
However, there is a more direct way of demonstrating that anti-goals are unavoidable aspects of games, at least when games are multi-player. This argument also demonstrates that games with multiple goals are in a sense inevitable, at least in the case of multi-player games. First let me describe what I conceive of as a multi-player game.
multi-player game: A game where players interact via an interactive system in order to reach a goal which can only be attained by a single player.
The critical distinction I want to make is that a multi-player game is not just two or more people engaged in separate contests of decision making. If there are not actions mediating the interaction of players via the game state then what is really going on is many players are playing many distinct games. A true multi-player game must allow players to interact (via actions).
In a multi-player game, players are working towards a win state we can call G. However, in the context of the mechanics which allow interaction they are also playing against a (set of) anti-goals {A}, one for each player besides themselves. These goals are of the form “Prevent player X from reaching goal G“. Hence, anti-goals are critical ingredients to successful multi-player game design and are therefore useful ideas for game designers. Therefore, for a game to really be multi-player then there must be actions associated with each anti-goal {A}.
An argument we might make at this point is that if players are playing for {A} and not explicitly for G then our game is not well designed (for instance, it isn’t elegant or minimal). But I believe any multi-player game where a player can pursue G and not concern herself with {A}, even in the presence of game actions which allow interaction, is a set of single player games in disguise. If we follow our urge to make G the true goal for all players at the expense of {A} then we may as well remove the actions which intermediate between players and then we may as well be designing a single player game whose goal is G.
So, if we admit that multi-player games are worth designing, then we also admit that at least a family of anti-goals are worth considering. Note that we must explicitly design the actions which allow the pursuit of {A} in order to design the game. Ideally these will be related and work in accord with the actions which facilitate G but they cannot be identical to those mechanics without our game collapsing to the single player case. We must consider {A} actions as a separate (though ideally related) design space.
# Summary
I’ve tried to demonstrate that in multi-player games especially, anti-goals, which are goals of the for “Avoid some game state”, are necessary, distinct goal forms worth considering by game designers. The argument depends on demonstrating that a multi-player game must contain such anti-goals or collapse to a single player game played by multiple people but otherwise disconnected.
In a broader context, the idea here is to get a foot in the door for anti-goals as rules which can still do the work of a goal, which is to give meaning to choices and actions in an interactive system. An open question is whether such anti-goals are useful for single player games, whether they are useful but only in conjunction with game-terminating goals, or whether, though useful, we can always find a related normal goal which is superior from a design point of view. Hopefully, this essay provides a good jumping off point for those discussions.
# On Inform 7, Natural Language Programming and the Principle of Least Surprise
I’ve been pecking away at Inform 7 lately on account of its recently acquired Gnome front end. For those not in the know, Inform (and Inform 7) is a text adventure authoring language. I’ve always been interested in game programming but never had the time (or more likely the persistence of mind) to develop one of any sophistication myself. Usually in these cases one lowers the bar, and as far as interactive media goes, you can’t get much lower, complexity wise, than text adventures.
Writing a game in Inform amounts to describing the world and it’s rules in terms of a programming language provided by Inform. The system then collects the rules and descriptions and creates a game out of them. Time was, programming in Inform used to look like:
Constant Story "Hello World";
Include "Parser";
Include "VerbLib";
[ Initialise;
location = Living_Room;
"Hello World"; ];
Object Kitchen "Kitchen";
Object Front_Door "Front Door";
Object Living_Room "Living Room"
with
description "A comfortably furnished living room.",
n_to Kitchen,
s_to Front_Door,
has light;
Which is recognizably a programming language, if a bit strange and domain specific. These days, writing Inform looks like this: (from my little project):
"Frustrate" by "Vincent Toups"
Ticks is a number which varies.
Ticks is zero.
When play begins:
Now ticks is 1.
The Observation Room is a room. "The observation room cold and
surreal. Stars dot the floor underneath thick, leaded glass, cutting
across it with a barely perceptible tilt. This room seems to have been
adapted for storage, and is filled with all sorts of sub-stellar
detritus, sharp in the chill and out of place against the slowly
rotating sky. Even in the cold, the place smells of dust, old wood
finish, and mildew. [If ticks is less than two] As the sky cuts its
way across the milky way, the whole room seems to tilt. You feel
dizzy.[else if ticks is less than four]The plane of the galaxy is
sinking out of range and the portal is filling with the void of
space. It feels like drowning.[else if ticks is greater than 7]The
galactic plane is filling the floor with a powdering of
stars.[else]The observation floor looks out across the void of space.
You avert your eyes from the floor.[end if]"
Every turn: Now ticks is ticks plus one.
Every turn: if ticks is 10:
decrease ticks by 10.
As you can see, the new Inform adopts a “natural language” approach to programming. As the Inform 7 website puts it
[The] Source language [is] modelled closely on a subset of English, and usually readable as such.
Also reproduced in the Inform 7 manual is the following quote from luminary Donald Knuth:
Programming is best regarded as the process of creating works of literature, which are meant to be read… so we ought to address them to people, not to machines. (Donald Knuth, “Literate Programming”, 1981)
Which better than anything else illustrates the desired goal of the new system: Humans are not machines! Machines should accommodate our modes of expression rather than forcing us to accommodate theirs! If it wasn’t for the unnaturalness of programming languages, the logic goes, many more people would program. The creation of interactive fiction means to be inclusive, so why not teach the machine to understand natural language?
This is a laudable goal. I really think the future is going to have a lot more programmers in it, and a primary task of language architects is to design programming languages which “regular” people find intuitive and useful. For successes in that arena see Python, or Smalltalk or even Basic. Perhaps these languages are not the pinnacle of intuitive programming environments but whatever that ultimate language is, I doubt seriously it will look much like Inform 7.
This is unfortunate, because reading Inform 7 is very pleasant, and the language is even charming from time to time. Unfortunately, it’s very difficult to program in1, and I say that as something of a programming language aficionado. It’s true that creating the basic skeleton of a text adventure is very easy, but even slightly non-trivial extensions to the language are difficult to intuitively get right. For instance, the game I am working on takes place on a gigantic, hollowed out natural satellite, spinning to provide artificial gravity. The game begins in a sort of observation bubble, where the floor is transparent and the stars are visible outside. Sometimes this observation window should be pointing into the plane of the Milky Way, but other times it should be pointing towards the void of space because the station’s axis of rotation is parallel to the plane of the galaxy. The description of the room should reflect these different possibilities.
Inform 7 operates on a turn based basis, so it seems like it should be simple enough to create this sort of time dependent behavior by keeping track of time but it was frustrating to figure out how to “tell” the Inform compiler what I wanted.
First I tried joint conditionals:
When the player is in the Observation Room and
the turn is even, say: "The stars fill the floor."
But this resulted in an error message. Maybe the system doesn’t know about “evenness” so I tried:
When the player is in the Observation Room and
the turn is greater than 3, say "The stars fill the floor."
(Figuring I could add more complex logic later).
Eventually I figured out the right syntax, which involved creating a variable and having a rule set its value each turn and a separate rule reset the value with the periodicity of the rotation of the ship, but the process was very frustrating. In Python the whole game might look, with the proper abstractions, like:
while not game.over():
game.describe_location(player.position);
if (player.position == 'The Observation Room' and
game.turn() % 10):
print "The stars fill the floor."
Which is not perhaps as “englishy” as the final working Inform code (posted near the beginning of this article) but is much more concise and obvious.
But that isn’t the reason the Python version is less frustrating to write. The reason is the Principle of Least Surprise, which states, roughly, that once you know the system, the least surprising way of doing things will work. The problem with Inform 7 is that “the system” appears to the observer to be “written English (perhaps more carefully constructed that usual)”. This produces in the coder a whole slew of assumptions about what sorts of statements will do what kind of things and as a consequence, you try a lot of things which, according to your mental model, inexplicably don’t work.
It took me an hour to figure out how to make what amounts to a special kind of clock and I had the benefit of knowing that underneath all that “natural English” was a (more or less) regular old (prolog flavored) programming environment. I can’t imagine the frustration a non-programmer would feel when they first decided to do something not directly supported or explained in the standard library or documentation.
That isn’t the only problem, either. Natural english is a domain specific language for communicating between intelligent things. It assumes that the recepient of the stream of tokens can easily resolve ambiguities, invert accidental negatives (pay attention, people do this all the time in speech) and tell the difference between important information and information it’s acceptable to leave ambiguous. Not only are computers presently incapable of this level of deduction/induction, but generally speaking we don’t want that behavior anyway: we are programming to get a computer to perform a very narrowly defined set of behaviors. The implication that Inform 7 will “understand you” in this context is doubly frustrating. And you don’t want it to “understand,” you want it to do exactly.
A lot of this could be ameliorated by a good piece of reference documentation, spelling out in exact detail the programmatic environment’s behavior. Unfortunately, the bundled documentation is a big tutorial which does a poor job of delineated between constructs in the language and elements of it. It all seems somewhat magical in the tutorial, in other words, and the intrepid reader, wishing to generalize on the rules of the system, is often confounded.
Nevertheless, I will probably keep using it. The environment is clean and pleasant, and the language, when you begin to feel out the classical language under the hood, is ok. And you can’t beat the built in features for text based games. I doubt that Inform 7, though, will seriously take off. Too many undeliverable promises.
1 This may make it the only “Read Only” programming language I can think of.
# Elaborations on “You Aren’t Gonna Need It”
The Cunningham & Cunningham Wiki is a wonderful place to get lost in, and it is so (chaotically) packed with useful programming lore that you are bound to come out of a dive a bit more enlightened about what it is programmers actually do.
One of my favorite pages is You Aren’t Gonna Need It from which I pull the following quotation:
Always implement things when you actually need them, never when you just foresee that you need them.
The justification for this is pretty straightforward: adding things you don’t need takes time and energy, plus generates more code, which means more potential bugs and cognitive load for future development. Since your job is to deliver software that actually does something which you presumably have at least a provisional understanding of, speculative development is an obvious waste of time.
To this basic justification I add only the following small elaboration: If you don’t need it now, you probably don’t understand it anyway. Anything you implement speculatively is very likely to be wrong as well as useless.
## Why Software Is Hard
There are a lot of reasons software engineering is hard. Probably the primary reason it is hard is that we do not yet have a complete understanding of why it is so hard in the first place. Richard P Gabriel, a software philosopher and progenitor of the Worse is Better meme observes, probably correctly, that one reason for this fundamental ignorance is that software engineering is a comparative tyro among engineering disciplines: it is both extremely young (beginning only in 1945, approximately) and subject to radical change. A developer in 1945 would find the contemporary development environment utterly alien and vice versa, a state of affairs not afflicting, for instance, stone masons, who have, arguably, thousands of years of more or less dependable tradition to inform their work.
With characteristic insight, Dr Gabriel also observes that software engineering is also difficult because it isn’t physically constrained1, which means that humans, the product of 3.5 billion years of evolution in a physical environment, but not a computational one, have very little experience upon which to rely as they build objects in the space of computable functions.
Suffer me a brief, digressive analogy: Hyper Rogue III is a game which takes place in a two-dimensional hyperbolic plane. One implication of such a space is that it is very unlikely that a wanderer will ever return to a particular position unless she almost exactly follows her own trail of bread crumbs. Exploring the space of computable functions is similarly dangerous to wanderers, except more so: we are not well equipped to even identify the hills, valleys, walls and contours of this space.
Hence my elaboration: the wanderer in the landscape of programming is very likely to lose his way. He has neither physical constraints nor well developed intuition to guide him. And despite the existence of tools like git, which promise us the ability to backtrack and refactor with some confidence, software inevitably ossifies as it grows.
Hence my elaboration: you don’t build things you don’t need because, if you don’t need them, you probably don’t really understand what they should be. If you don’t understand what they should be, you’ll probably wander off base as you build them, and you probably won’t even notice that you have wandered off until something concrete impinges upon that code. When that happens, you might find that refactoring to solve the problem you have now is prohibitively difficult, because that feature you thought you would need has subtly impinged on other parts of your code.
Explicit, concrete requirements are the closest thing you have to the physical constraints which make stone masonry a more reliable discipline than software engineering, and nothing but rote experience will ever give you the physical intuition that stone masons can rely on.
So don’t wander off: You Aren’t Gonna Need It anyway. At least until you do.
1: Well, the relationship between the physical constraints on software and the software is, at any rate, not entirely trivial or transparent.
# Watch as I Liveblog “Death of the Corpse Wizard” development.
I’ve always wanted to make videogames. I’ve been programming in one way or another for nearly half my life, so you’d think I would have created at least one so far, but usually my scope gets too big and I end grinding to a halt or getting distracted by something else. Today I’m trying something different. For the rest of the day I’ll be liveblogging, at this post, my development of a small game called Death of the Corpse Wizard. I’ll be using HTML5/Canvas, Javascript, an Oryx Tileset I bought, and a game design I thought of in the shower yesterday. Here is the inaugural image:
Hello World
Very briefly the game intends to be an “Arena Roguelike”. Your character sits in the center of the screen and enemies approach from all sides. If an enemy bumps into you, you lose one vitality. If you bump into an enemy you take one vitality from it (most monsters have only one vitality). You may also choose, on your turn, to use a vitality to build or reinforce a wall, which costs one vitality. Monsters can attack walls and when the wall’s vitality is reduced to zero, it disappears. The goal of the game is to survive as long as possible.
For now, those are the only rules.
### Update 11:30 am, Strasbourg Time
Tile Sheet Support
I’ve finished support for Tile Sheets and loaded the Oryx sheet. Named a few tiles.
I’m working on the visual aspects first because they provide pleasing feedback. I will now start the game engine.
161 LOC so far, not counting libraries.
### Update 12:53 pm, Strasbourg Time
Tweening
I’ve added the entities and systems required for drawing the player, set up the user controls, and implemented Tweening so that the player sprite moves smoothly. Since this is now “playable”, I also uploaded the game here, so you can play it.
Some tidbits about the development process: I’m using Kran, an awesome, lightweight entity-component system which deserves more attention. It is the same system I used to generate my clocks.
### 3:30 pm, Strasbourg Time
Timers, Spawners
I have added turn-based timer logic, spawners, and the ability to create walls. Plus an HUD that shows the player’s vitality.
### 5:12 pm, Strasbourg Time
The game is now playable and you can even lose, sort of: monsters have AI and can attack you, you can kill them, and they can reduce your health to zero.
Monsters
### 6:04 pm, Strasbourg Time
Death Screen
Added a death screen and death condition! It will probably be the start screen too, at some point.
### 6:53 pm, Strasbourg Time
Playable
Death of the Corpse Wizard is now entirely playable: it has a goal, challenged, and failure conditions. I’m going to declare my first ever single-day Game Jam a success!
# What I learned from playing “Dark Souls” and “Kentucky Route Zero” in the same night.
Equus Oils
Crunch.
Because the world has not yet gone utterly mad, I had the chance, on my last vacation, to sit down and play video games with my friend Marlin and my brother. As a professional person living a semi-standard contemporary lifestyle, I don’t usually have time for dedicated gaming and so this hobby, which I enjoyed substantially at various stages of life when time was more easily spent, has fallen by the wayside.
In the same evening we played both Dark Souls and Kentucky Route Zero. I learned something about myself that night, and something about these weird things we call “video games.” Here is what I learned:
### VIDEO
“Kentucky Route Zero” and “Dark Souls” are radically different experiences. In the former, a disgustingly slick visual presentation delivers a series of significantly meaningless choices to the player as it explicates a mood, a setting and a story. The the latter game, just as much staggering expertise is devoted to the development of a perfectly balanced interactive system of punishment and reward. As a matter of course, “Dark Souls” furnishes its own fairly well developed and delivered mood and setting, but it is unmistakably the context for the play rather than the point of it.
“Kentucky Route Zero” is, undeniably, a fine piece of craft which is aware of itself and its cultural surroundings. And I have to admit, I found the experience engaging, particularly as a kind of spectacle. But “Kentucky Route Zero,” for all of its awareness of itself as a supposed “game,” seems mostly interested in undermining itself as an interactive experience. A great recipe for alienation is forcing a person to make choices which she knows do not matter, and “Kentucky Route Zero” seems pretty interested in this trick. Early in the game the player is forced to guess a password based on the fact that it is a “long poem, which really sums it all up.” They are prompted with a series of three choices of lines from three different poems and any series of choices works. The poems themselves are nice, and one does get a pathetic little frisson of pleasure at the prospect of mixing and matching them but in the end, the decisions don’t actually matter.
The entire game is a series of little, often very well conceived and executed vignettes in which the player’s choice doesn’t really matter. At the very best we can think of the game as a kind of psychological test which is never evaluated by anyone. This is thin gruel, and we might wonder why the creators of the game, who are clearly in possession of tremendous talent and skill, bothered with an interactive experience at all.
### GAMES
Let’s contrast this experience with “Dark Souls.” “Dark Souls” is the quintessential videogame, or at least the most quintessential game I’ve played recently. What do I mean by this? Well, “Dark Souls” is too a product of apparently profoundly skilled craftspeople, but where in “Kentucky Route Zero” the craft is devoted to style and narrative substance, “Dark Souls”‘s creators devoted most of their energy to the invisible, mechanical systematics of the game and to the job of communicating those systems to the player, clearly.
It has been for some time de rigueur for videogames (of which Final Fantasy VII or Metal Gear Solid are exemplars) to pack their narrative into “cutscenes,” which are just non-interactive, generally cinematic or intended as such, scenes in which story is advanced. “Dark Souls” has essentially none of these, recognizing them as fundamentally alien to a fundamentally interactive medium. Instead, “Dark Souls” thrusts you immediately into the business of moving your avatar through space. After a very brief interactive tutorial, the player finds herself contending with a series of interactions, diegetically presented as medieval combat, which require attentive reaction to an interlocking set of concerns.
The player has a weapon, a shield, health, and stamina. She generally faces similarly outfitted, sometimes oversized, versions of herself. Holding up one’s shield prevents one’s stamina from regenerating, as it does rather quickly, otherwise. Blocking a blow costs stamina, as does an attack or a dodge. If one’s stamina is drained she can neither attack effectively nor defend or dodge. On top of these actions, the game layers timing: it takes a moment to raise your shield, it takes varying amounts of time to strike, time which leaves you open to counter attack. It takes time for one’s stamina to regenerate, time during which the character is vulnerable. These systems, in and of themselves, are well designed but not particularly unusual for videogames.
What distinguishes “Dark Souls” is that it is not afraid to make the primary focus of the game these simple, balanced systems. It isn’t afraid to offer real reward and punishment for understanding them. The designers of the game were confident enough to sell the experience of “Dark Souls” as primarily the gameplay, rather than circumstantial narrative content. As such, “Dark Souls” can afford to apply a gentle touch to the subjects of setting and narrative. There is a latent story going on here, but, while we can enjoy it, and it may motivate us to play, the story is definitively not the game.1
### YOU DIED
Like the snooty intellectual I so obviously am, I have often bemoaned the boringness of videogame subject matter. Do we really need another game about big dudes hitting eachother with things until one of them dies? From this perspective I welcome the arrival of games like “Kentucky Route Zero,” as refreshing alternatives to mainstream game content.
However, “Dark Souls” reminds me that there is still enormous material to be mined the realm of mechanics, which are the elements of videogames as a medium which cannot be reproduced elsewhere. It is a definite problem that his material is being mined in a pretty boring, heteronormative context of dudes hitting dudes, but I can’t help but feel, at the end of the day, that “Dark Souls” is better at being what it is than “Kentucky Route Zero.”
### Footnotes
1: I’d like it if we could stop saying that “Dark Souls” is “hard.” “Dark Souls” is only “hard” because the relationship between the superficial and significant elements of the game, the progress the player makes in the world or plot vs the progress the player makes in her mastery of the mechanics, is misleading. The player experiences lots of progress in terms of her knowledge and ability to work the in-game systems, even when the false progress through the story or environments of the game is stalled. | 2021-05-10 07:16:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2839987277984619, "perplexity": 1715.7286733525991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00120.warc.gz"} |
https://math.stackexchange.com/questions/3130887/what-is-the-usage-of-the-fact-that-probx-is-polish-if-x-is-polish | # What is the usage of the fact that $Prob(X)$ is Polish if $X$ is Polish?
Let $$X$$ be a Polish space and $$Prob(X)$$ be the set of Borel probability measures on $$X$$, and let $$Prob(X)$$ be equipped with the weak-* topology (So that a sequence $$\mu_m$$ converges to $$\mu$$ in $$Prob(X)$$ if and only if $$\int_X f d\mu_m \to \int_X f d\mu$$ for all $$f\in C_b(X)$$ ).
In many stochastic process or probability textbooks, it is proven that $$Prob(X)$$ is a Polish space. However, I do not get any motivation nor application of this theorem. What is the point of proving that $$Prob(X)$$ is a Polish space? Do we really need this result when dealing with stochastic process? | 2019-07-23 08:12:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9476655721664429, "perplexity": 45.04150206102334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529007.88/warc/CC-MAIN-20190723064353-20190723090353-00283.warc.gz"} |
https://web2.0calc.com/questions/wendy-has-5-1-3-feet-of-ribbon | +0
# Wendy has 5 1/3 feet of ribbon.
0
178
5
Wendy has 5 1/3 feet of ribbon.
Part A: How many 2/3 foot pieces can Wendy cut from the 5 1/3 feet of ribbon? Show your work. (5 points)
Part B: Using the information in Part A, interpret the meaning of the quotient in terms of the two fractions given. (5 points)
Sep 6, 2022
#1
+29
-3
Part A - 5 1/3 = 16/3
Gets You
$$\frac{16/3}{2/3} = 16/3*3/2 = 8$$
Now using this, find the answer to Part B!
Sep 6, 2022
#2
0
16/3? wouldnt 51/3 simplify to 17/3?
Guest Sep 6, 2022
#3
0
It is NOT 51/3 but 5 + 1/3 ==16/3 !!
Guest Sep 6, 2022
#4
0
oh okay. thank you.
Guest Sep 12, 2022
#5
+29
-3
My pleasure
NJColonial6 Sep 12, 2022 | 2023-02-04 15:09:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8749902844429016, "perplexity": 8136.58445012861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500140.36/warc/CC-MAIN-20230204142302-20230204172302-00863.warc.gz"} |
https://studyadda.com/sample-papers/jee-main-mock-test-23_q40/1440/435482 | • # question_answer For a chemical reaction $\operatorname{A}\to Products$, the rate of disappearance of A is given by $\frac{-d{{C}_{A}}}{dt}=\frac{{{k}_{1}}{{C}_{A}}}{1+{{k}_{2}}{{C}_{A}}}$ At very low${{C}_{A}}$, the reaction of the .......... order with rate constant........ (Assume ${{k}_{1}},\,\,{{k}_{2}}$ are lesser than 1) A) $I,\,\,{{k}_{1}}/{{k}_{2}}$ B) $I,\,\,{{k}_{1}}$ C) $II,\,\,{{k}_{1}}/{{k}_{2}}$ D) $II,\,\,{{k}_{1}}/({{k}_{1}}+{{k}_{2}})$
$Rate=\frac{-d{{C}_{A}}}{dt}=\frac{{{k}_{1}}{{C}_{A}}}{1+{{k}_{2}}{{C}_{A}}}$ At very low ${{C}_{A}},\,(1+{{k}_{2}}{{C}_{A}})\approx 1$ $\therefore \,\, Rate={{k}_{1}}{{C}_{A}}$ $\operatorname{Hence} order = I$ $\operatorname{Rate}\,\,constant={{k}_{1}}$ | 2022-01-20 01:02:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8125700950622559, "perplexity": 5161.191873483274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301670.75/warc/CC-MAIN-20220120005715-20220120035715-00169.warc.gz"} |
http://mathhelpforum.com/math-topics/114010-equaion-dimensionally-homogenous.html | # Thread: Is this equaion dimensionally homogenous?
1. ## Is this equation dimensionally homogenous?
The Fay-Riddel equation is, I gather, an equation that gives the heat flux through, for example, the nose-cone of a space vehicle during re-entry.
I've been given a simplified version of this equation which looks as follows:
$\displaystyle q = k_h \sqrt{\frac{\rho}{R_n}} v^2$
From my understanding, these are the variables and their units:
$\displaystyle \text{ Heat flux, }q \, (W.m^{-2})$
$\displaystyle \text{ Thermal Conductivity, }k_h \, (W.m^{-1}.K^{-1})$
$\displaystyle \text{ Air Density, }\rho \, (kg.m^{-3})$
$\displaystyle \text{ Radius of Curvature of Nose-cone, }R_n \, (m)$
$\displaystyle \text{ Velocity, }v \, (m.s^{-1})$
Where m is metre, s is second, W is watt, K is kelvin, kg is kilogram.
Clearly it can't be homogeneous because there's a kelvin on the RHS, and none on the LHS...
Can anybody shed some light on this equation I've been given? Is it erroneous? | 2017-08-20 02:18:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8130656480789185, "perplexity": 1903.0878197565705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105961.34/warc/CC-MAIN-20170820015021-20170820035021-00576.warc.gz"} |
http://mathcentral.uregina.ca/QQ/database/QQ.09.13/h/bob2.html | SEARCH HOME
Math Central Quandaries & Queries
Question from Bob, a student: A ball falls from the roof, at a height of 10m. After each impact on the ground it bounces back up to 4/5 of its previous height. How many times will the ball appear in front of a rectangular window whose bottom edge is at a height of 5m, and whose top edge at a height of 6m?
Hi Bob,
On the first drop the ball passes the window once on the way down. On the first bounce the ball rises to $\frac45 \times 10 = 8 \mbox { m }$ and hence the ball passes the window once on the way down and again on the way back up then again on the way down so that's 3. How high does it go on the second bounce? Does it still bounce higher than the top of the window? Eventually it will not bounce high enough to clear the top of the window and on those trips you will only see the ball in the window once, on the way up and the way down. Keep counting and eventually the ball will not bounce high enough to clear the bottom of the window and you won't see it at all.
Penny
Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences. | 2020-07-02 13:10:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35052749514579773, "perplexity": 310.6450901904558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878753.12/warc/CC-MAIN-20200702111512-20200702141512-00506.warc.gz"} |
http://nag.com/numeric/MB/manual64_24_1/html/D01/d01auf.html | Integer type: int32 int64 nag_int show int32 show int32 show int64 show int64 show nag_int show nag_int
Chapter Contents
Chapter Introduction
NAG Toolbox
## Purpose
nag_quad_1d_fin_osc_vec (d01au) is an adaptive integrator, especially suited to oscillating, nonsingular integrands, which calculates an approximation to the integral of a function f(x)$f\left(x\right)$ over a finite interval [a,b]$\left[a,b\right]$:
b I = ∫ f(x)dx. a
$I= ∫ab f(x) dx .$
## Syntax
[result, abserr, w, iw, ifail] = d01au(f, a, b, epsabs, epsrel, 'key', key, 'lw', lw, 'liw', liw)
[result, abserr, w, iw, ifail] = nag_quad_1d_fin_osc_vec(f, a, b, epsabs, epsrel, 'key', key, 'lw', lw, 'liw', liw)
## Description
nag_quad_1d_fin_osc_vec (d01au) is based on the QUADPACK routine QAG (see Piessens et al. (1983)). It is an adaptive function, offering a choice of six Gauss–Kronrod rules. A global acceptance criterion (as defined by Malcolm and Simpson (1976)) is used. The local error estimation is described in Piessens et al. (1983).
Because nag_quad_1d_fin_osc_vec (d01au) is based on integration rules of high order, it is especially suitable for nonsingular oscillating integrands.
nag_quad_1d_fin_osc_vec (d01au) requires a function to evaluate the integrand at an array of different points and is therefore amenable to parallel execution (see Section [Parallelism and Performance]). Otherwise this algorithm with key = 6${\mathbf{key}}=6$ is identical to that used by nag_quad_1d_fin_osc (d01ak).
## References
Malcolm M A and Simpson R B (1976) Local versus global strategies for adaptive quadrature ACM Trans. Math. Software 1 129–146
Piessens R (1973) An algorithm for automatic integration Angew. Inf. 15 399–401
Piessens R, de Doncker–Kapenga E, Überhuber C and Kahaner D (1983) QUADPACK, A Subroutine Package for Automatic Integration Springer–Verlag
## Parameters
### Compulsory Input Parameters
1: f – function handle or string containing name of m-file
f must return the values of the integrand f$f$ at a set of points.
[fv] = f(x, n)
Input Parameters
1: x(n) – double array
The points at which the integrand f$f$ must be evaluated.
2: n – int64int32nag_int scalar
The number of points at which the integrand is to be evaluated. The actual value of n is equal to the number of points in the Kronrod rule (see specification of key).
Output Parameters
1: fv(n) – double array
fv(j)${\mathbf{fv}}\left(\mathit{j}\right)$ must contain the value of f$f$ at the point x(j)${\mathbf{x}}\left(\mathit{j}\right)$, for j = 1,2,,n$\mathit{j}=1,2,\dots ,{\mathbf{n}}$.
2: a – double scalar
a$a$, the lower limit of integration.
3: b – double scalar
b$b$, the upper limit of integration. It is not necessary that a < b$a.
4: epsabs – double scalar
The absolute accuracy required. If epsabs is negative, the absolute value is used. See Section [Accuracy].
5: epsrel – double scalar
The relative accuracy required. If epsrel is negative, the absolute value is used. See Section [Accuracy].
### Optional Input Parameters
1: key – int64int32nag_int scalar
Indicates which integration rule is to be used.
key = 1${\mathbf{key}}=1$
For the Gauss 7$7$-point and Kronrod 15$15$-point rule.
key = 2${\mathbf{key}}=2$
For the Gauss 10$10$-point and Kronrod 21$21$-point rule.
key = 3${\mathbf{key}}=3$
For the Gauss 15$15$-point and Kronrod 31$31$-point rule.
key = 4${\mathbf{key}}=4$
For the Gauss 20$20$-point and Kronrod 41$41$-point rule.
key = 5${\mathbf{key}}=5$
For the Gauss 25$25$-point and Kronrod 51$51$-point rule.
key = 6${\mathbf{key}}=6$
For the Gauss 30$30$-point and Kronrod 61$61$-point rule.
Default: 6$6$
Constraint: key = 1${\mathbf{key}}=1$, 2$2$, 3$3$, 4$4$, 5$5$ or 6$6$.
2: lw – int64int32nag_int scalar
The dimension of the array w as declared in the (sub)program from which nag_quad_1d_fin_osc_vec (d01au) is called. The value of lw (together with that of liw) imposes a bound on the number of sub-intervals into which the interval of integration may be divided by the function. The number of sub-intervals cannot exceed lw / 4${\mathbf{lw}}/4$. The more difficult the integrand, the larger lw should be.
Default: 800$800$
Constraint: lw4${\mathbf{lw}}\ge 4$.
3: liw – int64int32nag_int scalar
The dimension of the array iw as declared in the (sub)program from which nag_quad_1d_fin_osc_vec (d01au) is called.
The number of sub-intervals into which the interval of integration may be divided cannot exceed liw.
Default: lw / 4${\mathbf{lw}}/4$
Constraint: liw1${\mathbf{liw}}\ge 1$.
None.
### Output Parameters
1: result – double scalar
The approximation to the integral I$I$.
2: abserr – double scalar
An estimate of the modulus of the absolute error, which should be an upper bound for |Iresult|$|I-{\mathbf{result}}|$.
3: w(lw) – double array
4: iw(liw) – int64int32nag_int array
iw(1)${\mathbf{iw}}\left(1\right)$ contains the actual number of sub-intervals used. The rest of the array is used as workspace.
5: ifail – int64int32nag_int scalar
${\mathrm{ifail}}={\mathbf{0}}$ unless the function detects an error (see [Error Indicators and Warnings]).
## Error Indicators and Warnings
Note: nag_quad_1d_fin_osc_vec (d01au) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.
W ifail = 1${\mathbf{ifail}}=1$
The maximum number of subdivisions allowed with the given workspace has been reached without the accuracy requirements being achieved. Look at the integrand in order to determine the integration difficulties. If necessary, another integrator, which is designed for handling the type of difficulty involved, must be used. Alternatively, consider relaxing the accuracy requirements specified by epsabs and epsrel, or increasing the amount of workspace.
W ifail = 2${\mathbf{ifail}}=2$
Round-off error prevents the requested tolerance from being achieved. Consider requesting less accuracy.
W ifail = 3${\mathbf{ifail}}=3$
Extremely bad local integrand behaviour causes a very strong subdivision around one (or more) points of the interval. The same advice applies as in the case of ${\mathbf{ifail}}={\mathbf{1}}$.
ifail = 4${\mathbf{ifail}}=4$
On entry, key1${\mathbf{key}}\ne 1$, 2$2$, 3$3$, 4$4$, 5$5$ or 6$6$.
ifail = 5${\mathbf{ifail}}=5$
On entry, lw < 4${\mathbf{lw}}<4$, or liw < 1${\mathbf{liw}}<1$.
## Accuracy
nag_quad_1d_fin_osc_vec (d01au) cannot guarantee, but in practice usually achieves, the following accuracy:
|I − result| ≤ tol , $|I-result| ≤ tol ,$
where
tol = max {|epsabs|,|epsrel| × |I|} , $tol=max{|epsabs|,|epsrel|×|I|} ,$
and epsabs and epsrel are user-specified absolute and relative error tolerances. Moreover, it returns the quantity abserr which, in normal circumstances, satisfies
|I − result| ≤ abserr ≤ tol. $|I-result|≤abserr≤tol.$
If ${\mathbf{ifail}}\ne {\mathbf{0}}$ on exit, then you may wish to examine the contents of the array w, which contains the end points of the sub-intervals used by nag_quad_1d_fin_osc_vec (d01au) along with the integral contributions and error estimates over these sub-intervals.
Specifically, for i = 1,2,,n$i=1,2,\dots ,n$, let ri${r}_{i}$ denote the approximation to the value of the integral over the sub-interval [ai,bi] $\left[{a}_{i},{b}_{i}\right]$ in the partition of [a,b] $\left[a,b\right]$ and ei ${e}_{i}$ be the corresponding absolute error estimate. Then, aibi f(x) dx ri $\underset{{a}_{i}}{\overset{{b}_{i}}{\int }}f\left(x\right)dx\simeq {r}_{i}$ and result = i = 1n ri ${\mathbf{result}}=\sum _{i=1}^{n}{r}_{i}$. The value of n$n$ is returned in iw(1)${\mathbf{iw}}\left(1\right)$, and the values ai${a}_{i}$, bi${b}_{i}$, ei${e}_{i}$ and ri${r}_{i}$ are stored consecutively in the array w, that is:
• ai = w(i)${a}_{i}={\mathbf{w}}\left(i\right)$,
• bi = w(n + i)${b}_{i}={\mathbf{w}}\left(n+i\right)$,
• ei = w(2n + i)${e}_{i}={\mathbf{w}}\left(2n+i\right)$ and
• ri = w(3n + i)${r}_{i}={\mathbf{w}}\left(3n+i\right)$.
## Example
```function nag_quad_1d_fin_osc_vec_example
a = 0;
b = 6.283185307179586;
epsabs = 0;
epsrel = 0.001;
[result, abserr, w, iw, ifail] = nag_quad_1d_fin_osc_vec(@f, a, b, epsabs, epsrel);
result, abserr, ifail
function [fv] = f(x,n)
fv=zeros(n,1);
for i=1:double(n)
fv(i) = x(i)*sin(30*x(i))*cos(x(i));
end
```
```
result =
-0.2097
abserr =
4.4659e-14
ifail =
0
```
```function d01au_example
a = 0;
b = 6.283185307179586;
epsabs = 0;
epsrel = 0.001;
[result, abserr, w, iw, ifail] = d01au(@f, a, b, epsabs, epsrel);
result, abserr, ifail
function [fv] = f(x,n)
fv=zeros(n,1);
for i=1:double(n)
fv(i) = x(i)*sin(30*x(i))*cos(x(i));
end
```
```
result =
-0.2097
abserr =
4.4659e-14
ifail =
0
``` | 2017-01-21 00:16:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 82, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9754023551940918, "perplexity": 9999.163408099852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00205-ip-10-171-10-70.ec2.internal.warc.gz"} |