url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://www.biostars.org/p/285590/
|
grid engine problem with OMA
2
0
Entering edit mode
4.2 years ago
andrespara ▴ 30
Dear all,
Using the open grid engine and using this line
qsub -b y -j y -t 1-40 -cwd /usr/local/OMA/bin/OMA
I got this error
Starting database conversion and checks... We require that job-arrays now explicitly specify the number of jobs in the array. You should add to your submission script an environment variable "NR_PROCESSES" that holds the total number of jobs you use. Example: in bash: export NR_PROCESSES=100 in tcsh: setenv NR_PROCESSES=100 ERROR: require NR_PROCESSES to be assigned to an environment variable
I used this line
export NR_PROCESSES=100
but it keeps failing
Previous version of OMA have worked with our setup but now the process starts, qstat throws the jobs assigned for a few seconds and then all the processes vanish.
Using OMA 2.1.1 and grid engine GE 6.2u5, Ubuntu 14.04
I also would like to know if OMA 2+ has been tested on open grid engine or if there is feedback other users on this set up. Would it be better to change from open grid to Slurm for using OMA 2+?
oma grid engine cluster • 1.8k views
0
Entering edit mode
See if something in this past thread helps: Failure to launch OMA in array mode on SLURM cluster
0
Entering edit mode
It didn't help, the variable is set up. "echo "NR_PROCESSES $NR_PROCESSES" shows 100 ADD REPLY 0 Entering edit mode Tagging: adrian.altenhoff ADD REPLY 0 Entering edit mode I also would like to know if OMA 2+ has been tested on open grid engine or if there is feedback other users on this set up or scenario. Would it be better to change from open grid to Slurm for using OMA 2+? ADD REPLY 4 Entering edit mode 4.2 years ago alex.wv ▴ 50 Hi, I'm a member of the group that develops OMA. I think the problem here is that the environment variable (NR_PROCESSES) is being set locally, however qsub doesn't copy the environment variables set on the submission node to the worker nodes by default. The -V option would copy all environment variables, so that your command listed above would become export NR_PROCESSES=100 qsub -V -b y -j y -t 1-40 -cwd /usr/local/OMA/bin/OMA There is also the short-hand (using -v), of qsub -v NR_PROCESSES=100 -b y -j y -t 1-40 -cwd /usr/local/OMA/bin/OMA if you only need to set NR_PROCESSES. The command qstat -j <JOB_ID> lists the environment variables to copy to the worker nodes, so that you can verify it's set. Best wishes, Alex ADD COMMENT 0 Entering edit mode Thanks for the help Alex! It worked!! ADD REPLY 0 Entering edit mode Hi again, I wonder if I could send the jobs to only certain nodes so I can start multiple OMA runs with different datasets in the same grid. We have several nodes called "ubuntu-node2" "ubuntu-node3" and so on. I am currently launching OMA with that line qsub -v NR_PROCESSES=100 -b y -j y -t 1-40 -cwd /usr/local/OMA/bin/OMA Is there any parameter I can add? Sorry to ask this but I have no experience with the grid or these kind of architecture. Let me know if I should make this a post for more visibility. Thanks for your help. ADD REPLY 0 Entering edit mode Hi, seems like you could specify the hostnames with -l, see for example answer in to a similar question in here: https://stackoverflow.com/questions/19635895/run-a-job-on-all-nodes-of-sun-grid-engine-cluster-only-once ADD REPLY 0 Entering edit mode Thanks! I will try this ADD REPLY 2 Entering edit mode 4.2 years ago Hi, I think this problem is related to the way you submit the job and the specific configurations of the SGE system. I suggest you try to launch the job with a submission script, e.g. cat > ./start-oma.sh << EOF #!/bin/bash #$ -S /bin/bash
# Request ten minutes of wallclock time
#$-l h_rt=0:10:0 # Request 2 gigabyte of RAM. #$ -l h_vmem=2G,tmem=2G
# Set up the job array, e.g. 3 tasks
#$-t 1-3 # Set the name of the job #$ -N oma
#\$ -cwd
# Run the application.
export NR_PROCESSES=3
/usr/local/OMA/bin/OMA
EOF
The resulting submission script can then be used for submitting to the cluster
qsub start-oma.sh
At least this setup seems to work for me. Your above submission also results in an error for me, so I think chances are high this will work.
0
Entering edit mode
A collaborator beat me and run Alex's solution first at the grid but I will check the script with your solution as soon as the current run ends. Thanks for your help Adrian.
0
Entering edit mode
When I try to execute "qsub start-oma.sh" it says
qsub: Unknown option
bash start-oma.sh
works but starts only one process and with qstat I didn't notice any activity but maybe it is part of the intention of the script.
0
Entering edit mode
you might need to give the full path to the oma-start script, i.e.
qsub ./oma-start.sh
and to make the script executable (chmod +x oma-start.sh) but anyways, if Alex's variant works that's perfect.
|
2022-01-22 12:35:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20190541446208954, "perplexity": 6698.383974330849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303845.33/warc/CC-MAIN-20220122103819-20220122133819-00298.warc.gz"}
|
https://ftp.aimsciences.org/article/doi/10.3934/ipi.2013.7.717
|
Article Contents
Article Contents
# Nonstationary iterated thresholding algorithms for image deblurring
• We propose iterative thresholding algorithms based on the iterated Tikhonov method for image deblurring problems. Our method is similar in idea to the modified linearized Bregman algorithm (MLBA) so is easy to implement. In order to obtain good restorations, MLBA requires an accurate estimate of the regularization parameter $\alpha$ which is hard to get in real applications. Based on previous results in iterated Tikhonov method, we design two nonstationary iterative thresholding algorithms which give near optimal results without estimating $\alpha$. One of them is based on the iterative soft thresholding algorithm and the other is based on MLBA. We show that the nonstationary methods, if converge, will converge to the same minimizers of the stationary variants. Numerical results show that the accuracy and convergence of our nonstationary methods are very robust with respect to the changes in the parameters and the restoration results are comparable to those of MLBA with optimal $\alpha$.
Mathematics Subject Classification: Primary: 94A08, 49N45; Secondary: 65T60, 65F08.
Citation:
• [1] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183-202.doi: 10.1137/080716542. [2] M. Bertalmío, G. Sapiro, V. Caselles and C. Ballester, Image inpainting, SIGGRAPH, 34 (2000), 417-424.doi: 10.1145/344779.344972. [3] M. Bertero and P. Boccacci, "Introduction to Inverse Problems in Imaging," Institute of Physics Publishing, Bristol, 1998.doi: 10.1887/0750304359. [4] N. Bose and K. Boo, High-resolution image reconstruction with multisensors, International Journal of Imaging Systems and Technology, 9 (1998), 294-304. [5] L. M. Bregman, A relaxation method of finding a common point of convex sets and its application to the solution of problems in convex programming, (Russian) Z . Vycisl. Mat. i Mat. Fiz., 7 (1967), 620-631. [6] M. Brill and E. Schock, Iterative solution of ill-posed problems: A survey, in "Model Optimization in Exploration Geophysics" (Berlin, 1986), 13-37, Theory Practice Appl. Geophys., 1, Vieweg, Braunschweig, 1987. [7] J. F. Cai, R. H. Chan and Z. Shen, A framelet-based image inpainting algorithm, Appl. Comput. Harmon. Anal., 24 (2008), 131-149.doi: 10.1016/j.acha.2007.10.002. [8] J. F. Cai, S. Osher and Z. Shen, Linearized BRegman iterations for frame-based image deblurring, SIAM J. Imaging Sci., 2 (2009), 226-252.doi: 10.1137/080733371. [9] J. F. Cai, S. Osher and Z. Shen, Split Bregman methods and frame based image restoration, Multiscale Model. Simul., 8 (2009), 337-369.doi: 10.1137/090753504. [10] J. F. Cai, S. Osher and Z. Shen, Convergence of the linearized Bregman iteration for $l_1$-norm minimization, Math. Comput., 78 (2009), 2127-2136.doi: 10.1090/S0025-5718-09-02242-X. [11] J. F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for compressed sensing, Math. Comput., 78 (2009), 1515-1536.doi: 10.1090/S0025-5718-08-02189-3. [12] E. J. Candés and J. Romberg, Practical signal recovery from random projections, Wavelet Applications in Signal and Image Processing XI Proc. SPIE Conf. 5914 (2004). [13] A. Chai and Z. Shen, Deconvolution: A wavelet frame approach, Numer. Math., 106 (2007), 529-587.doi: 10.1007/s00211-007-0075-0. [14] A. Chambolle, R. A. De Vore, N. Y. Lee and B. J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage, IEEE Trans. Image Process., 7 (1998), 319-335.doi: 10.1109/83.661182. [15] R. H. Chan, T. F. Chan, L. Shen and Z. Shen, Wavelet algorithms for high-resolution image reconstruction, SIAM J. Sci. Comput., 24 (2003), 1408-1432.doi: 10.1137/S1064827500383123. [16] R. H. Chan, S. D. Riemenschneider, L. Shen and Z. Shen, Tight frame: An efficient way for high-resolution image reconstruction, Appl. Comput. Harmon. Anal., 17 (2004), 91-115.doi: 10.1016/j.acha.2004.02.003. [17] R. H. Chan, Z. Shen and T. Xia, A framelet algorithm for enhancing video stills, Appl. Comput. Harmon. Anal., 23 (2007), 153-170.doi: 10.1016/j.acha.2006.10.003. [18] T. Chan and J. H. Shen, "Image Processing and Analysis-Variational, PDE, Wavelet, and Stochastic Methods," Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2005.doi: 10.1137/1.9780898717877. [19] T. Chan, J. H. Shen and H. M. Zhou, Total variation wavelet inpainting, J. Math. Imaging Vision, 25 (2006), 107-125.doi: 10.1007/s10851-006-5257-3. [20] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168-1200.doi: 10.1137/050626090. [21] I. Daubechies, M. Defrise and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math., 57 (2004), 1413-1457.doi: 10.1002/cpa.20042. [22] I. Daubechies, M. Fornasier and I. Loris, Accelerated projected gradient method for linear inverse problems with sparsity constraints, J. Fourier Anal. Appl. 14 (2008), 764-792.doi: 10.1007/s00041-008-9039-8. [23] I. Daubechies, B. Han, A. Ron and Z. Shen, Framelets: MRA-based constructions of wavelet frames, Appl. Comput. Harmon. Anal., 14 (2003), 1-46.doi: 10.1016/S1063-5203(02)00511-0. [24] I. Daubechies, G. Teschke and L. Vese, Iteratively solving linear inverse problems under general convex constraints, Inverse Problems Imaging, 1 (2007), 29-46.doi: 10.3934/ipi.2007.1.29. [25] M. Donatelli, Fast transforms for high order boundary conditions in deconvolution problems, BIT, 50 (2010), 559-576.doi: 10.1007/s10543-010-0266-4. [26] M. Donatelli, On nondecreasing sequences of regularization parameters for nonstationary iterated tikhonov, Numer. Algor., 60 (2012), 651-668.doi: 10.1007/s11075-012-9593-7. [27] M. Donatelli and M. Hanke, Fast nonstationary preconditioned iterative methods for ill-posed problems, with application to image deblurring, Inverse Problems, 29 (2013), 095008.doi: 10.1088/0266-5611/29/9/095008. [28] M. Elad and A. Feuer, Restoration of a single superresolution image from several blurred, noisy and undersampled measured images, IEEE Trans. Image Process, 6 (1997), 1646-1658.doi: 10.1109/83.650118. [29] H. W. Engl, M. Hanke and A. Neubauer, "Regularization of Inverse Problems," Mathematics and its Applications, 375. Kluwer Academic Publishers Group, Dordrecht, 1996.doi: 10.1007/978-94-009-1740-8. [30] M. J. Fadili and J. L. Starck, Sparse representations and Bayesian image inpainting, Proc. SPARS'05, Vol. I, Rennes, France, 2005. [31] A. G. Fakeev, A class of iterative processes for solving degenerate systems of linear algebraic equations, U. S. S. R. Comput. Math. Math. Phys., 21 (1981), 15-22. [32] M. Figueiredo and R. Nowak, An EM algorithm for wavelet-based image restoration, IEEE Trans. Image Process., 12 (2003), 906-916.doi: 10.1109/TIP.2003.814255. [33] E. Hale, W. Yin and Y. Zhang, Fixed-point continuation for $l_1$-minimization: Methodology and convergence, SIAM J. Optim., 19 (2008), 1107-1130.doi: 10.1137/070698920. [34] M. Hanke and C. W. Groetsh, Nonstationary iterated tikhonov regularization, J. Optim. Theory Appl., 98 (1998), 37-53.doi: 10.1023/A:1022680629327. [35] M. Hanke and P. C. Hansen, Regularization methods for large-scale problems, Surveys Math. Indust., 3 (1993), 253-315. [36] P. C. Hansen, "Rank-Deficient and Discrete Ill-Posed Problems," SIAM, Philadelphia, 1997.doi: 10.1137/1.9780898719697. [37] J. T. King and D. Chillingworth, Approximation of generalized inverses by iterated regularization, Numer. Func. Anal. Opt., 1 (1979), 499-513.doi: 10.1080/01630567908816031. [38] A. V. Kryanev, An iterative method for solving incorrectly posed problems, U. S. S. R. Comput. Math. Math. Phys., 14 (1974), 25-35.doi: 10.1016/0041-5553(74)90133-5. [39] L. Landweber, An iteration formula for fredholm integral equations of the first kind, Am. J. Math., 73 (1951), 615-624.doi: 10.2307/2372313. [40] I. Loris, M. Bertero, C. De Mol, R. Zanella and L. Zanni, Accelerating gradient projection methods for $l_1$-constrained signal recovery by steplength selection rules, Appl. Comput. Harmon. Anal., 27 (2009), 247-254.doi: 10.1016/j.acha.2009.02.003. [41] S. Mallat, "A Wavelet Tour of Signal Processing," 2nd edition, Academic Press: San Diego, 1999. [42] V. A. Morozov, On the solution of functional equations by the method of regularization, Dokl. Akad. Nauk SSSR 167 510-512 (Russian), translated as Soviet Math. Dokl., 7 (1966), 414-417. [43] F. Natterer, "The Mathematics of Computerized Tomography," Reprint of the 1986 original. Classics in Applied Mathematics, 32. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001.doi: 10.1137/1.9780898719284. [44] M. K. Ng, R. H. Chan and W. C. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput., 21 (1999), 851-866.doi: 10.1137/S1064827598341384. [45] S. Osher, Y. Mao, B. Dong and W. Yin, Fast linearized Bregman iteration for compressed sensing and sparse denoising, Commun. Math. Sci., 8 (2010), 93-111. [46] M. Piana and M. Bertero, Projected landweber method and preconditioning, Inverse Problems, 13 (1997), 441-464.doi: 10.1088/0266-5611/13/2/016. [47] O. N. Strand, Theory and methods related to the singular-function expansion and landweber's iteration for integral equations of the first kind, SIAM J. Numer. Anal, 11 (1974), 798-825.doi: 10.1137/0711066. [48] A. N. Tikhonov, Solution of incorrectly formulated problems and the regularization method, Soviet Math. Dokl., 4 (1963), 1035-1038. [49] C. R. Vogel, "Computational Methods for Inverse Problems," With a foreword by H. T. Banks. Frontiers in Applied Mathematics, 23. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2002.doi: 10.1137/1.9780898717570. [50] W. Yin, S. Osher, D. Goldfarb and J. Darbon, Bregman iterative algorithms for $l_1$-minimization with applications to compressed sensing, SIAM J. Imaging Sci., 1 (2008), 143-168.doi: 10.1137/070703983.
|
2023-04-02 01:56:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6307864189147949, "perplexity": 3343.9647911464294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00584.warc.gz"}
|
https://rj722.github.io/
|
# Articles
• ### The [deceptive] power of visual explanation
July 22, 2019
Quite recently, I came across Jay Alammar’s, rather beautiful blog post, “A Visual Intro to NumPy & Data Representation”.
Before reading this, whenever I had to think about an array:
In [1]: import numpy as np
In [2]: data = np.array([1, 2, 3])
In [3]: data
Out[3]: array([1, 2, 3])
I used to create a mental picture somewhat like this:
┌────┬────┬────┐
data = │ 1 │ 2 │ 3 │
└────┴────┴────┘
But Jay, on the other hand, uses a vertical stack for representing the same array.
At the first glance, and owing to the beautiful graphics Jay has created, it makes perfect sense.
Now, if you had only seen this image, and I ask you the dimensions of data, what would your answer be?
The mathematician inside you barks (3, 1).
But, to my surprise, this wasn’t the answer:
In [4]: data.shape
Out[4]: (3,)
(3, ) eh? wondering, what would a (3, 1) array look like?
In [5]: data.reshape((3, 1))
Out[5]:
array([[1],
[2],
[3]])
Hmm, This begs the question: what is the difference between an array of shape (R, ) and (R, 1). A little bit of research landed me at this answer on StackOverflow. Let’s see:
The best way to think about NumPy arrays is that they consist of two parts, a data buffer which is just a block of raw elements, and a view which describes how to interpret the data buffer.
For example, if we create an array of 12 integers:
>>> a = numpy.arange(12)
>>> a
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
Then a consists of a data buffer, arranged something like this:
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│ 0 │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
and a view which describes how to interpret the data:
>>> a.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
>>> a.dtype
dtype('int64')
>>> a.itemsize
8
>>> a.strides
(8,)
>>> a.shape
(12,)
Here the shape (12,) means the array is indexed by a single index which runs from 0 to 11. Conceptually, if we label this single index i, the array a looks like this:
i= 0 1 2 3 4 5 6 7 8 9 10 11
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│ 0 │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
If we reshape an array, this doesn’t change the data buffer. Instead, it creates a new view that describes a different way to interpret the data. So after:
>>> b = a.reshape((3, 4))
the array b has the same data buffer as a, but now it is indexed by two indices which run from 0 to 2 and 0 to 3 respectively. If we label the two indices i and j, the array b looks like this:
i= 0 0 0 0 1 1 1 1 2 2 2 2
j= 0 1 2 3 0 1 2 3 0 1 2 3
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│ 0 │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
So, if were to actually have a (3, 1) matrix, we would have the exact same stack representation as a (3, ) matrix, thus creating the confusion.
So, what about the horizontal representation?
An argument can be made that the horizontal representation can be misinterpreted as a (1, 3) matrix, our brains are so accustomed to seeing it as 1-D array, that it is almost never the case (at least with folks who have worked with Python before).
Of course, it all makes perfect sense now, but it did take me a while to figure out what exactly was going under the hood here.
Visual Explanation of Fourier Series - Decomposition of a square wave into a sum of infinite sinusoids. From this answer on math.stackexchange.com
I also realized that while it is hugely helpful to visualize something when learning about it, but one should always take the visual representation with a grain of salt. As we can see, they are not entirely accurate.
For now, I’m sticking to my prior way of picturing a 1-D array as a horizontal list to avoid the confusion. I shall update the blog if I find anything otherwise.
My point is not that Jay’s drawings are flawed, but how susceptible we are to visual deceptions. In this case, it was relatively easier to figure out, because it was code, which forces one to pay attention to each and every detail, however minor it may be.
After all, human brain, prone to so many biases, taking shortcuts for nearly every decision we make (thus leaving room for sanity) isn’t anywhere near as perfect as it thinks it is.
• ### My Experience with OBM
July 19, 2019
If you want an overview about OBM, please read my post on the same .
I’ve participated in three sprints until now, in which I’ve completely failed myself, but I’ve already experiencing a drastic changes in my habits, which is good.
Here is what I’ve learned from this short, but significant experience:
• ### Crafting my future with OBM
July 12, 2019
It’s been a couple of years maybe, when I read ‘i want 2 do project tell me wat 2 do’ - which by the way, you should too! since I first came across Operation Blue Moon (OBM), a project aimed towards time management and getting things done. It’s run single-handedly by Shakthi Kannan (~mbuf) (who is also the author of ‘i want 2 do project, tell me wat 2 do’ ).
Not only does it borrows it’s name, but also the kind of disciple practiced, from our miliary counterparts. The practices here, build upon the years of experience Shakthi has dealing with people trying, failing, and trying ~harder~ again in their conquest with these utterly useful traits.
• ### Using Weechat with Glowing Bear for IRC
July 2, 2019
Last month, I had a new addition to my toolbox - Glowing Bear, which has been a really nice improvement, allowing me to access Weechat (hosted on a server) through my browser. Here’s how I set it up.
May 27, 2019
• ### Silk Road, Revolutions and Systems
May 26, 2019
Today, I read the story of Silk Road: how the young idealist Ross Ulbricht, tired of chasing success the old school way, found his way around the darkweb to create an online As a part of the darkweb, it was operated as a Tor hidden service which protected the personal privacy of users by concealing their details from anyone - from the Government to their ISP - conducting network surveillance. Additionally, all payments were made using Bitcoin , a cryptocurrency which provides a certain degree of anonymity. bazaar for the trading of illicit materials, mainly drugs, which he named Silk Road.
The aim behind writing this blog post is to think out loud and try to gain insight into the oversights made by some of the most prominent revolutionaries in history.
• ### Freedom of Speech, Authoritarianism, Freedom of Press and Faiz
May 22, 2019
Right to Free Speech is essential for a democracy. This blog post aims to shed some light on the recent authoritarian attempts made by hindutva-right-wing to curb free speech and how can we fight back.
• ### A glimpse into the darkness: the 'Brutish' rule in India
May 18, 2019
A second-generation freeborn attempts to understand the impact and aftermath of colonization of India by British. It turns out that even an educated Indian of today is still not aware of the atrocities and turmoil it caused the country.
• ### Do we really need to cover coverage with Vulture?
August 18, 2018
The team behind Vulture (a tool used for detecting unused Python code) decided not to integrate it with coverage (a tool for measuring code coverage of Python programs). Read why!
• ### Dynamic code analysis with Vulture
June 27, 2018
This is a follow up post of Why use coverage to find which parts of a python code were executed? - there we discussed how we stumbled on this plan of dynamic code analysis with vulture. Here, we talk about the development process we (the Vulture team) underwent to integrate Vulture with coverage.py in order to automatically generate a whitelist of functions which Vulture reports as unused but are actually being used.
• ### Google Summer of Code 2018 - Phase 1
June 14, 2018
Here’s my work progress with the first phase of Google Summer of Code 2018.
• ### The story of Dead Code, Vulture and scavenging
May 30, 2018
It isn’t uncommon for software developers to encounter some code that they had written in the past and reflecting on it - the most common reaction would probably be “It must be the most horrible thing I wrote”. But sometimes, there’s that aha moment where you find something and you are instantly gratified and proud of yourself, “Oh, this is so beautiful, no wonder it took so many sleepless nights”. However glamorous it may sound, but it is indeed a difficult task to write and maintain such code, and this is where automatic tools come in to the picture. Let’s discuss about one such tool - Vulture, which helps discover unused stuff in Python code.
So, today we present to you the voodoo which throws out unused code.
• ### Why use coverage to find which parts of a python code were executed?
May 19, 2018
In this post, I’ll walk you through the decision making process the team behind Vulture underwent to come up with a way to deal with false positives in it’s results.
• ### A meeting with my GSoC'18 mentors
May 13, 2018
Tell me and I forget, teach me and I may remember, involve me and I learn. This blog post is a public memoir of an online meeting I had with my GSoC mentors. Kudos to me for having such awesome mentors! :P
• ### GSoC 2018
May 10, 2018
“Good luck is a residue of preparation.” ― Jack Youngblood
Getting selected as a Google Summer of Code student with coala was a breakthrough for me. The coala community touched me on every aspect of open source software development, especially how to get along with peers (and troll them :-p). And it has happened again - I am a student with coala one more time, and I look forward to learn yet more from my dear mentors and the beloved coala community.
• ### Statement of Chaos
March 30, 2018
Should I go for a job or an MS?
• ### Organising the Mozilla visit
March 3, 2018
This blog post is about my experience with organising and attending a Mozilla session at my college.
• ### How to get started with self driving cars
January 11, 2018
• ### TMP Day 1: Introducing three months long backbreaking goals
August 24, 2017
Challenging my limits - Completing 4 ridiculously difficult programs in a year.
• ### Phase-2
July 24, 2017
Phase 2 is coming to an end today (24’th of July, 11:30 PM IST). It had been an intensive and healthy work-period with a high steep-learning curve. Let me reflect on my journey throughout the month.
• ### Phase-1
June 24, 2017
Phase 1 of the coding period ended on 26’th June 23:30 GMT+5:30. With this post, I would like to reflect upon the development progress so far and share some of the challenges I faced.
• ### change
June 20, 2017
Trying to change my habits in a way it feels fun!
• ### Meeting Jendrik
June 10, 2017
A meeting with my mentor, tweaking the VultureBear and my new laptop. Ahh, perfect!
• ### coala - COde AnaLysis Application
June 1, 2017
How working with coala changed my life? :-)
• ### GSoC Project Timeline
May 28, 2017
Here is a description of how I plan to manage my schedule during GSoC period.
• ### GSoC Project
May 20, 2017
The project I will be working this (G) summer (oC)
• ### Getting into GSoC
May 3, 2017
Hello, this post is a brief description of what is GSoC and how I wrote my project proposal for GSoC
RJ722's blog - Rahul Jha
|
2019-08-20 23:42:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2874354422092438, "perplexity": 2462.262020295182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315681.63/warc/CC-MAIN-20190820221802-20190821003802-00333.warc.gz"}
|
https://www.doorsteptutor.com/Exams/ISS/Paper-1-MCQ/Questions/Part-77.html
|
# ISS (Statistical Services) Statistics Paper I (New 2016 MCQ Pattern): Questions 372 - 375 of 472
Access detailed explanations (illustrated with images and videos) to 472 questions. Access all new questions- tracking exam pattern and syllabus. View the complete topic-wise distribution of questions. Unlimited Access, Unlimited Time, on Unlimited Devices!
View Sample Explanation or View Features.
Rs. 300.00 -OR-
## Question 372
### Question
MCQ▾
Let be independent and equidistributed. Then which of the following is correct regarding ?
### Choices
Choice (4)Response
a.
If are infinite then will be larger than infinitely many times with probability 0.
b.
If are finite then will be larger than infinitely many times with probability 1.
c.
None of the above
d.
Question does not provide sufficient data or is vague
## Question 373
### Question
MCQ▾
If is the number of points rolled with a balanced die, find the expected value of .
### Choices
Choice (4)Response
a.
b.
c.
d.
None of the above
## Question 374
### Question
MCQ▾
Suppose that and are independent random variables taking values in N, with probability generating functions and having radii of convergence and , respectively. Then the probability generating function of is given by________ for .
### Choices
Choice (4)Response
a.
b.
c.
d.
None of the above
## Question 375
### Question
MCQ▾
Consider the random graph we get from the square grid by keeping each edge with probability p (0 < p < 1 some predetermined constant) IID, and deleting it otherwise (this is percolation on the square grid, and has been extensively studied) . Which of the following is correct regarding an infinite connected component somewhere in the graph?
### Choices
Choice (4)Response
a.
There always exist finite connected component.
b.
Existence of an infinite connected component is impossible.
c.
Probability that there exists an infinite connected component is 0.5.
d.
None of the above
|
2021-04-10 14:15:46
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652804493904114, "perplexity": 2540.19512350756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057142.4/warc/CC-MAIN-20210410134715-20210410164715-00154.warc.gz"}
|
https://stats.stackexchange.com/questions/226316/difference-in-the-error-covariance-matrix-for-correlated-and-autocorrelated-erro?answertab=active
|
# Difference in the error covariance matrix for correlated and autocorrelated errors
There is a big difference between the covariance matrix of correlated errors vs that of autocorrelated errors, but I have not found any formal explanation or notation.
I would expect the covariance matrix describing variance between errors of different regressed variables is a $n$ x $n$ matrix with $n$ being the number of variables in the model. I often see this matrix written as $\Sigma$, with $\Sigma_{ij}$ = cov$(\epsilon_i , \epsilon_j)$.
In the case of autocorrelated erros from for example time series data, the covariance matrix should describe variance of error terms in different time points, but refering to the same variable. This would be a $n$ x $n$ matrix with $n$ time points. I would say I often see this matrix notated as $\Omega$ , with $\Omega_{ij}^k$ = cov$(\epsilon^k(t_i) , \epsilon^k(t_j) )$. Describing autocorrelation of regression errors of variable $k$.
This would mean, for multiple regression models with $L$ variables, we would have one $\Sigma$ and $L$ x $\Omega$ covariance matrices.
Is this correct? Or am I confusing different terms for different situations?
And most importantly: For applying generalized least squares estimation with (auto)-correlated errors. Which error covariance matrix is the one being estimated?
|
2019-06-16 20:59:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5520473718643188, "perplexity": 390.0465649915961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998298.91/warc/CC-MAIN-20190616202813-20190616224813-00050.warc.gz"}
|
https://studysoup.com/tsg/9395/physics-principles-with-applications-6-edition-chapter-9-problem-32p
|
×
Get Full Access to Physics: Principles With Applications - 6 Edition - Chapter 9 - Problem 32p
Get Full Access to Physics: Principles With Applications - 6 Edition - Chapter 9 - Problem 32p
×
# (II) (a) Calculate the magnitude of the force, FM required
ISBN: 9780130606204 3
## Solution for problem 32P Chapter 9
Physics: Principles with Applications | 6th Edition
• Textbook Solutions
• 2901 Step-by-step solutions solved by professors and subject experts
• Get 24/7 help from StudySoup virtual teaching assistants
Physics: Principles with Applications | 6th Edition
4 5 1 310 Reviews
28
2
Problem 32P
(II) (a) Calculate the force, $$F_{M}$$, required of the "deltoid" muscle to hold up the outstretched arm shown in Fig. . The total mass of the arm is . (b) Calculate the magnitude of the force $$F_{J}$$ exerted by the shoulder joint on the upper arm.
Equation Transcription:
Text Transcription:
F_{M}
F_{J}
Step-by-Step Solution:
Step-by-step solution
Step 1 of 5 ^
We are going to draw the free body diagram of the system
Shoulder joint is taken as pivot point O
Fm; The force exerted by muscle.
W: the weight of the arm/
Fj; the reaction force exerted from the joint.
dfm; the force distance from muscle force to pivot pint
dw; the force distance from weight of the arm to pivot point.
Step 1 of 5 ^
We are going write the second condition of equilibrium as taking the torques around pivot point to find the force exerted by muscle.
The force exerted by muscle to hold the arm.
Step 2 of 5
Step 3 of 5
## Discover and learn what students are asking
Calculus: Early Transcendental Functions : Conservative Vector Fields and Independence of Path
?In Exercises 5 - 10, determine whether the vector field is conservative. $$\mathbf{F}(x, y)=15 x^{2} y^{2} \mathbf{i}+10 x^{3} y \mathbf{j}$$
Statistics: Informed Decisions Using Data : Testing the Significance of the Least-Squares Regression Model
?In Problems 5–10, use the results of Problems 7–12, respectively, from Section 4.2 to answer the following questions: (a) What are the estima
#### Related chapters
Unlock Textbook Solution
Enter your email below to unlock your verified solution to:
(II) (a) Calculate the magnitude of the force, FM required
|
2022-08-17 22:17:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47292181849479675, "perplexity": 2591.340941141444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00348.warc.gz"}
|
http://www.solutioninn.com/a-gas-station-attendant-would-like-to-estimate-p-the
|
# Question
A gas station attendant would like to estimate p, the proportion of all households that own more than two vehicles. To obtain an estimate, the attendant decides to ask the next 200 gasoline customers how many vehicles their households own. To obtain an estimate of p, the attendant counts the number of customers who say there are more than two vehicles in their households and then divides this number by 200. How would you critique this estimation procedure? Is there anything wrong with this procedure that would result in sampling and/or non-sampling errors? If so, can you suggest a procedure that would reduce this error?
Sales0
Views50
|
2016-10-28 12:19:42
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8493528962135315, "perplexity": 647.8079595662897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722459.85/warc/CC-MAIN-20161020183842-00142-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://www.gamedev.net/index.php?app=forums&module=extras§ion=postHistory&pid=5042774
|
• Create Account
### #Actualsamoth
Posted 13 March 2013 - 12:15 PM
Also the company makes use of airships which they had build and sold earlier for some other purposes to spread water everywhere to extinguish all original robots and generate a monopoly.
What is this analogous to?
PARTNERS
|
2013-12-19 01:30:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29685884714126587, "perplexity": 6263.9151987342275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345760669/warc/CC-MAIN-20131218054920-00021-ip-10-33-133-15.ec2.internal.warc.gz"}
|
http://manual.q-chem.com/5.2/Ch8.S2.html
|
# 8.2 Built-In Basis Sets
Q-Chem is equipped with many standard basis sets,13 and allows the user to specify the required basis set by its standard symbolic representation. The available built-in basis sets include the following types:
• Pople basis sets 96, 318, 231, 232, 233, 308
• Dunning basis sets 439
• Correlation consistent Dunning basis sets 440, 1004, 1005, 990, 486, 49
• Ahlrichs basis sets 804
• Jensen polarization consistent basis sets 421
• Karlsruhe "def2" basis sets 35, 455, 804, 554, 961, 767
• The universal Gaussian basis set (UGBS) 214
In addition, Q-Chem supports the following features:
• Extra diffuse functions available for high quality excited state calculations.
• Standard polarization functions.
• Basis sets are requested by symbolic representation.
• $s$, $p$, $sp$, $d$, $f$, $g$ and $h$ angular momentum types of basis functions (For energy calculations up to $k$ are supported).
• Pure and Cartesian basis functions.
• Mixed basis sets (see section 8.5).
• Basis set superposition error (BSSE) corrections.
The following \$rem keyword controls the basis set:
BASIS
Sets the basis set to be used
TYPE:
STRING
DEFAULT:
No default basis set
OPTIONS:
General, Gen User-defined. See section below Symbol Use standard basis sets as in the table below Mixed Use a combination of different basis sets
RECOMMENDATION:
Consult literature and reviews to aid your selection.
|
2019-08-25 15:36:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 8, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8586319088935852, "perplexity": 5416.365072816665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330750.45/warc/CC-MAIN-20190825151521-20190825173521-00285.warc.gz"}
|
https://edwiv.com/archives/1334
|
# Codeforces Round #606 (Div. 2)
## A. Happy Birthday, Polycarp!
time limit per test1 second memory limit per test256 megabytes
Hooray! Polycarp turned n years old! The Technocup Team sincerely congratulates Polycarp!
Polycarp celebrated all of his n birthdays: from the 1-th to the n-th. At the moment, he is wondering: how many times he turned beautiful number of years?
According to Polycarp, a positive integer is beautiful if it consists of only one digit repeated one or more times. For example, the following numbers are beautiful: 1, 77, 777, 44 and 999999. The following numbers are not beautiful: 12, 11110, 6969 and 987654321.
Of course, Polycarpus uses the decimal numeral system (i.e. radix is 10).
Help Polycarpus to find the number of numbers from 1 to n (inclusive) that are beautiful.
Input
The first line contains an integer $$t (1 \le t \le 10^4)$$ — the number of test cases in the input. Then t test cases follow.
Each test case consists of one line, which contains a positive integer $$n (1 \le n \le 10^9)$$ — how many years Polycarp has turned.
Output
Print t integers — the answers to the given test cases in the order they are written in the test. Each answer is an integer: the number of beautiful years between 1 and n, inclusive.
Exampleinput
6
18
1
9
100500
33
1000000000
output
10
1
9
45
12
81
Note
In the first test case of the example beautiful years are 1, 2, 3, 4, 5, 6, 7, 8, 9 and 11.
#### 签到
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)2e5+100;
const int mod=(int)1e9+7;
int n;
void solve(){
scanf("%d",&n);
int ans=0;
rep(i,1,100) rep(j,1,9){
ll tmp=0;
rep(k,1,i) tmp=tmp*10+j;
if(tmp>n) return (void)printf("%d\n",ans);
ans++;
}
}
int main(){
int T;cin>>T;
while(T--) solve();
}
## B. Make Them Odd
time limit per test3 seconds memory limit per test256 megabytes
There are n positive integers $$a_1, a_2, \dots, a_n$$. For the one move you can choose any even value c and divide by two all elements that equal c.
For example, if a=[6,8,12,6,3,12] and you choose c=6, and a is transformed into a=[3,8,12,3,3,12] after the move.
You need to find the minimal number of moves for transforming a to an array of only odd integers (each element shouldn't be divisible by 2).
Input
The first line of the input contains one integer $$t (1 \le t \le 10^4)$$ — the number of test cases in the input. Then t test cases follow.
The first line of a test case contains $$n (1 \le n \le 2\cdot10^5)$$ — the number of integers in the sequence a. The second line contains positive integers $$a_1, a_2, \dots, a_n (1 \le a_i \le 10^9)$$.
The sum of n for all test cases in the input doesn't exceed $$2\cdot10^5$$.
Output
For t test cases print the answers in the order of test cases in the input. The answer for the test case is the minimal number of moves needed to make all numbers in the test case odd (i.e. not divisible by 2).
Exampleinput
4
6
40 6 40 3 20 1
1
1024
4
2 4 8 16
3
3 1 7
output
4
10
4
0
Note
In the first test case of the example, the optimal sequence of moves can be as follows:
• before making moves a=[40, 6, 40, 3, 20, 1];
• choose c=6;
• now a=[40, 3, 40, 3, 20, 1];
• choose c=40;
• now a=[20, 3, 20, 3, 20, 1];
• choose c=20;
• now a=[10, 3, 10, 3, 10, 1];
• choose c=10;
• now a=[5, 3, 5, 3, 5, 1] — all numbers are odd.
Thus, all numbers became odd after 4 moves. In 3 or fewer moves, you cannot make them all odd.
#### 最优策略肯定是先除大的,set搞一搞就好了
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)2e5+100;
const int mod=(int)1e9+7;
int n,a[maxn];
void solve(){
scanf("%d",&n);
set<int,greater<int> > s;
int ans=0;
rep(i,1,n){
scanf("%d",&a[i]);
if(a[i]%2==0) s.insert(a[i]);
}
while(!s.empty()){
auto it=s.begin();int u=*it,v=u/2;
if(v%2==0) s.insert(v);
s.erase(it);ans++;
}
printf("%d\n",ans);
}
int main(){
int T;cin>>T;
while(T--) solve();
}
## C. As Simple as One and Two
time limit per test3 seconds memory limit per test256 megabytes
You are given a non-empty string $$s=s_1s_2\dots s_n$$, which consists only of lowercase Latin letters. Polycarp does not like a string if it contains at least one string "one" or at least one string "two" (or both at the same time) as a substring. In other words, Polycarp does not like the string s if there is an integer$$j (1 \le j \le n-2)$$, that$$s_{j}s_{j+1}s_{j+2}="one"$$or $$s_{j}s_{j+1}s_{j+2}="two"$$.
For example:
• Polycarp does not like strings "oneee", "ontwow", "twone" and "oneonetwo" (they all have at least one substring "one" or "two"),
• Polycarp likes strings "oonnee", "twwwo" and "twnoe" (they have no substrings "one" and "two").
Polycarp wants to select a certain set of indices (positions) and remove all letters on these positions. All removals are made at the same time.
For example, if the string looks like s="onetwone", then if Polycarp selects two indices 3 and 6, then "onetwone" will be selected and the result is "ontwne".
What is the minimum number of indices (positions) that Polycarp needs to select to make the string liked? What should these positions be?
Input
The first line of the input contains an integer $$t (1 \le t \le 10^4)$$ — the number of test cases in the input. Next, the test cases are given.
Each test case consists of one non-empty string s. Its length does not exceed $$1.5\cdot10^5$$. The string s consists only of lowercase Latin letters.
It is guaranteed that the sum of lengths of all lines for all input data in the test does not exceed $$1.5\cdot10^6$$.
Output
Print an answer for each test case in the input in order of their appearance.
The first line of each answer should contain $$r (0 \le r \le |s|)$$ — the required minimum number of positions to be removed, where |s| is the length of the given line. The second line of each answer should contain r different integers — the indices themselves for removal in any order. Indices are numbered from left to right from 1 to the length of the string. If r=0, then the second line can be skipped (or you can print empty). If there are several answers, print any of them.
Examplesinput
4
onetwone
testme
oneoneone
twotwo
output
2
6 3
0
3
4 1 7
2
1 4
input
10
onetwonetwooneooonetwooo
two
one
twooooo
ttttwo
ttwwoo
ooone
onnne
oneeeee
oneeeeeeetwooooo
output
6
18 11 12 1 6 21
1
1
1
3
1
2
1
6
0
1
4
0
1
1
2
1 11
Note
In the first example, answers are:
• "onetwone",
• "testme" — Polycarp likes it, there is nothing to remove,
• "oneoneone",
• "twotwo".
In the second example, answers are:
• "onetwonetwooneooonetwooo",
• "two",
• "one",
• "twooooo",
• "ttttwo",
• "ttwwoo" — Polycarp likes it, there is nothing to remove,
• "ooone",
• "onnne" — Polycarp likes it, there is nothing to remove,
• "oneeeee",
• "oneeeeeeetwooooo".
#### 两种情况,首先处理形如"twone"的,显然去掉o就好了;再就是单独的"one"或者"two"的情况,去掉之间的就好了(因为去掉两边的话可能会组成新的)
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)2e5+100;
const int mod=(int)1e9+7;
char s[maxn];
int n;
void solve(){
scanf("%s",s+1);
n=strlen(s+1);
vector<int> ans;
rep(i,1,n-4) if(s[i]=='t'&&s[i+1]=='w'&&s[i+2]=='o'&&s[i+3]=='n'&&s[i+4]=='e'){
ans.pb(i+2);s[i+2]='#';
}
rep(i,1,n-2){
if(s[i]=='o'&&s[i+1]=='n'&&s[i+2]=='e') s[i+1]='#',ans.pb(i+1);
if(s[i]=='t'&&s[i+1]=='w'&&s[i+2]=='o') s[i+1]='#',ans.pb(i+1);
}
printf("%d\n",(int)ans.size());
for(auto u:ans) printf("%d ",u);puts("");
}
int main(){
int T;cin>>T;
while(T--) solve();
}
## D. Let's Play the Words?
time limit per test3 seconds memory limit per test256 megabytes
Polycarp has n different binary words. A word called binary if it contains only characters '0' and '1'. For example, these words are binary: "0001", "11", "0" and "0011100".
Polycarp wants to offer his set of n binary words to play a game "words". In this game, players name words and each next word (starting from the second) must start with the last character of the previous word. The first word can be any. For example, these sequence of words can be named during the game: "0101", "1", "10", "00", "00001".
Word reversal is the operation of reversing the order of the characters. For example, the word "0111" after the reversal becomes "1110", the word "11010" after the reversal becomes "01011".
Probably, Polycarp has such a set of words that there is no way to put them in the order correspondent to the game rules. In this situation, he wants to reverse some words from his set so that:
• the final set of n words still contains different words (i.e. all words are unique);
• there is a way to put all words of the final set of words in the order so that the final sequence of n words is consistent with the game rules.
Polycarp wants to reverse minimal number of words. Please, help him.
Input
The first line of the input contains one integer $$t (1 \le t \le 10^4)$$ — the number of test cases in the input. Then t test cases follow.
The first line of a test case contains one integer $$n (1 \le n \le 2\cdot10^5)$$— the number of words in the Polycarp's set. Next n lines contain these words. All of n words aren't empty and contains only characters '0' and '1'. The sum of word lengths doesn't exceed $$4\cdot10^6$$. All words are different.
Guaranteed, that the sum of n for all test cases in the input doesn't exceed $$2\cdot10^5$$. Also, guaranteed that the sum of word lengths for all test cases in the input doesn't exceed $$4\cdot10^6$$.
Output
Print answer for all of t test cases in the order they appear.
If there is no answer for the test case, print -1. Otherwise, the first line of the output should contain $$k (0 \le k \le n)$$ — the minimal number of words in the set which should be reversed. The second line of the output should contain k distinct integers — the indexes of the words in the set which should be reversed. Words are numerated from 1 to n in the order they appear. If k=0 you can skip this line (or you can print an empty line). If there are many answers you can print any of them.
Exampleinput
4
4
0001
1000
0011
0111
3
010
101
0
2
00000
00001
4
01
001
0001
00001
output
1
3
-1
0
2
1 2
#### 按题意模拟即可
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)2e5+100;
const int mod=(int)1e9+7;
vector<string> a;
int n;
void solve(){
int cnt[4]={0,};//01,10,00,11
set<string> s1,s2;
a.clear();
scanf("%d",&n);
rep(i,1,n){
char s[maxn];scanf("%s",s+1);
int len=strlen(s+1);
string str=s+1;a.pb(str);
if(s[1]=='0'&&s[len]=='0') cnt[2]++;
if(s[1]=='1'&&s[len]=='1') cnt[3]++;
if(s[1]=='0'&&s[len]=='1') cnt[0]++;s1.insert(str);
if(s[1]=='1'&&s[len]=='0') cnt[1]++;s2.insert(str);
}
if(cnt[0]==0&&cnt[1]==0){
if(cnt[2]&&cnt[3]) return (void)puts("-1");
else return (void)printf("0\n\n");
}
if(abs(cnt[0]-cnt[1])<=1) return (void)printf("0\n\n");
vector<int> ans;
int num=abs(cnt[0]-cnt[1])/2,pos=0;
if(cnt[0]>cnt[1]){
for(auto u:a){
++pos;
if(u[0]=='0'&&u[u.length()-1]=='1'){
reverse(u.begin(),u.end());
if(num&&s2.find(u)==s2.end()) num--,ans.pb(pos);
}
}
if(num!=0) return (void)puts("-1");
printf("%d\n",(int)ans.size());
for(auto u:ans) printf("%d ",u);puts("");
}
else{
for(auto u:a){
++pos;
if(u[0]=='1'&&u[u.length()-1]=='0'){
reverse(u.begin(),u.end());
if(num&&s1.find(u)==s1.end()) num--,ans.pb(pos);
}
}
if(num!=0) return (void)puts("-1");
printf("%d\n",(int)ans.size());
for(auto u:ans) printf("%d ",u);puts("");
}
}
int main(){
int T;cin>>T;
while(T--) solve();
}
## E. Two Fairs
time limit per test3 seconds memory limit per test256 megabytes
There are n cities in Berland and some pairs of them are connected by two-way roads. It is guaranteed that you can pass from any city to any other, moving along the roads. Cities are numerated from 1 to n.
Two fairs are currently taking place in Berland — they are held in two different cities a and b $$(1 \le a, b \le n; a \ne b)$$.
Find the number of pairs of cities x and y $$(x \ne a, x \ne b, y \ne a, y \ne b)$$such that if you go from x to y you will have to go through both fairs (the order of visits doesn't matter). Formally, you need to find the number of pairs of cities x,y such that any path from x to y goes through a and b (in any order).
Print the required number of pairs. The order of two cities in a pair does not matter, that is, the pairs (x,y) and (y,x) must be taken into account only once.
Input
The first line of the input contains an integer t $$(1 \le t \le 4\cdot10^4)$$ — the number of test cases in the input. Next, t test cases are specified.
The first line of each test case contains four integers n, m, a and b$$(4 \le n \le 2\cdot10^5, n - 1 \le m \le 5\cdot10^5, 1 \le a,b \le n, a \ne b)$$ — numbers of cities and roads in Berland and numbers of two cities where fairs are held, respectively.
The following m lines contain descriptions of roads between cities. Each of road description contains a pair of integers $$u_i, v_i (1 \le u_i, v_i \le n, u_i \ne v_i)$$— numbers of cities connected by the road.
Each road is bi-directional and connects two different cities. It is guaranteed that from any city you can pass to any other by roads. There can be more than one road between a pair of cities.
The sum of the values of n for all sets of input data in the test does not exceed $$2\cdot10^5$$. The sum of the values of m for all sets of input data in the test does not exceed $$5\cdot10^5$$.
Output
Print t integers — the answers to the given test cases in the order they are written in the input.
Exampleinput
3
7 7 3 5
1 2
2 3
3 4
4 5
5 6
6 7
7 5
4 5 2 3
1 2
2 3
3 4
4 1
4 2
4 3 2 1
1 2
2 3
4 1
output
4
0
1
#### 答案显然就是(只能被点a访问的点数)*(只能被点b访问的点数),两次dfs即可,签到
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)5e5+100;
const int mod=(int)1e9+7;
int n,m,a,b,op[maxn][2];
struct node{
int u,v,nxt;
}g[maxn<<1];
void init(){
cnt=2;
}
void dfs(int u,int o,int t){
op[u][o]=1;
int v=g[i].v;
if(op[v][o]==0&&v!=t) dfs(v,o,t);
}
}
void solve(){
scanf("%d%d%d%d",&n,&m,&a,&b);
init();
rep(i,1,m){
int u,v;scanf("%d%d",&u,&v);
}
dfs(a,0,b);dfs(b,1,a);
ll aa=-1,bb=-1;
rep(i,1,n){
if(op[i][0]&&op[i][1]==0) aa++;
if(op[i][0]==0&&op[i][1]) bb++;
}
printf("%lld\n",aa*bb);
}
int main(){
int T;cin>>T;
while(T--) solve();
}
## F. Beautiful Rectangle
time limit per test1 second memory limit per test256 megabytes
You are given n integers. You need to choose a subset and put the chosen numbers in a beautiful rectangle (rectangular matrix). Each chosen number should occupy one of its rectangle cells, each cell must be filled with exactly one chosen number. Some of the n numbers may not be chosen.
A rectangle (rectangular matrix) is called beautiful if in each row and in each column all values are different.
What is the largest (by the total number of cells) beautiful rectangle you can construct? Print the rectangle itself.
Input
The first line contains $$n (1 \le n \le 4\cdot10^5)$$. The second line contains n integers $$(1 \le a_i \le 10^9)$$.
Output
In the first line print$$x (1 \le x \le n)$$— the total number of cells of the required maximum beautiful rectangle. In the second line print p and $$q (p \cdot q=x)$$: its sizes. In the next p lines print the required rectangle itself. If there are several answers, print any.
Examplesinput
12
3 1 4 1 5 9 2 6 5 3 5 8
output
12
3 4
1 2 3 5
3 1 5 4
5 6 8 9
input
5
1 1 1 1 1
output
1
1 1
1
#### 那么如何确定长宽呢,其实很简单,我们枚举w,然后考虑每个数字出现的次数cnt,如果$$cnt>w$$那么显然这个数只能用w次,否则就是cnt次;那么这样做完之后就可以知道一共可以用多少数,除以w就是高度h了;这样枚举完所有的w之后就可以得到一个$$w*h$$的最大值,之后就可以按照上述方法构造了
#include<bits/stdc++.h>
using namespace std;
#define rep(i,a,b) for(int i=(a);i<=(b);++i)
#define dep(i,a,b) for(int i=(a);i>=(b);--i)
#define pb push_back
typedef long long ll;
const int maxn=(int)4e5+100;
const int mod=(int)1e9+7;
int n,a[maxn],w,h;
map<int,int> mp;
vector<int> v;
vector<vector<int> > ans;
struct node{
int cnt,num;
bool operator<(node b)const{return cnt>b.cnt;}
};
vector<node> s;
set<int> ss;
int main(){
scanf("%d",&n);
rep(i,1,n) scanf("%d",&a[i]),mp[a[i]]++,ss.insert(a[i]);
for(auto u:ss) s.pb({mp[u],u});
sort(s.begin(),s.end());
ll tmp=0;
rep(i,1,(int)ss.size()){
ll num=0,hh;
for(auto u:s) num+=min(i,u.cnt);
hh=num/i;
if(hh<i) break;
if(i*hh>tmp) tmp=i*hh,h=hh,w=i;
}
vector<vector<int> > ans(h+10);
rep(i,0,h) ans[i].resize(w+10);
v.pb(-1);
for(auto u:s) rep(i,1,min(w,u.cnt)) v.pb(u.num);
rep(i,1,w) rep(j,1,h){
ans[(i-2+j)%h+1][i]=v[i+w*(j-1)];
}
printf("%d\n%d %d\n",w*h,h,w);
rep(i,1,h) rep(j,1,w) printf(j==w?"%d\n":"%d ",ans[i][j]);
}
0 评论
|
2021-08-01 20:03:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25989460945129395, "perplexity": 1929.2499649607141}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154219.62/warc/CC-MAIN-20210801190212-20210801220212-00570.warc.gz"}
|
https://gmatclub.com/forum/if-x-and-y-are-positive-integers-and-x-y-2-is-an-odd-integer-is-x-216462.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 17 Jun 2018, 15:25
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If x and y are positive integers and (x + y)^2 is an odd integer, is x
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 46035
If x and y are positive integers and (x + y)^2 is an odd integer, is x [#permalink]
### Show Tags
11 Apr 2016, 04:01
1
4
00:00
Difficulty:
65% (hard)
Question Stats:
62% (01:55) correct 38% (02:23) wrong based on 173 sessions
### HideShow timer Statistics
If x and y are positive integers and (x + y)^2 is an odd integer, is x an even integer?
(1) x^4 – y^3 is an odd integer
(2) 0 < y^3 – y < 24
_________________
SC Moderator
Joined: 13 Apr 2015
Posts: 1684
Location: India
Concentration: Strategy, General Management
GMAT 1: 200 Q1 V1
GPA: 4
WE: Analyst (Retail)
If x and y are positive integers and (x + y)^2 is an odd integer, is x [#permalink]
### Show Tags
11 Apr 2016, 11:12
3
1
(x + y)^2 = odd --> (x + y) = odd
Is x even?
St1: x^4 - y^3 = odd
even - odd = odd
odd - even = odd
Not Sufficient as x can be even or odd.
St2: 0 < (y - 1) * y * (y + 1) < 24 --> Product of 3 consecutive positive integers is between 0 and 24.
Only value y can take is 2.
y = 2 = even
x + y = odd
x = odd - even = odd. --> x is not even
Sufficient.
Intern
Joined: 05 Feb 2016
Posts: 9
GMAT 1: 510 Q39 V21
Re: If x and y are positive integers and (x + y)^2 is an odd integer, is x [#permalink]
### Show Tags
13 Apr 2016, 01:54
1
given :
(x+y)^2 = odd
(x+y)(x+y) = odd, ---> (x+y) must be odd ---> x is even, y is odd
or y is odd, x is even
1) x^4 - y^3 = odd
x.x.x.x - y.y.y = odd , we know that Even - Odd= Odd So x can be EVEN and y ODD or
Odd - Even = Odd x can be ODD and y EVEN
INSUFFICIENT
2) 0< y^3 - y <24
0< y(y^2 - 1) <24
0< y(y+1)(y-1)<24
--> bingo, we have 3 consecutive integers so the product is even.
--> y^3 - y = even -->y is even.
given --> x+y = odd
thus, x is odd
SUFFICIENT
_________________
What we do in life, echoes in eternity...
Board of Directors
Joined: 17 Jul 2014
Posts: 2730
Location: United States (IL)
Concentration: Finance, Economics
GMAT 1: 650 Q49 V30
GPA: 3.92
WE: General Management (Transportation)
Re: If x and y are positive integers and (x + y)^2 is an odd integer, is x [#permalink]
### Show Tags
08 Aug 2017, 15:32
1. it can be either odd or even.
2. if y=1, then doesn't satisfy the condition. if y=2, it works. if y=3, then y^3 - y = 24. therefore, y must be 2.
sufficient.
Intern
Joined: 06 Jul 2017
Posts: 7
Re: If x and y are positive integers and (x + y)^2 is an odd integer, is x [#permalink]
### Show Tags
08 Aug 2017, 17:39
Abkhazian wrote:
given :
(x+y)^2 = odd
(x+y)(x+y) = odd, ---> (x+y) must be odd ---> x is even, y is odd
or y is odd, x is even
1) x^4 - y^3 = odd
x.x.x.x - y.y.y = odd , we know that Even - Odd= Odd So x can be EVEN and y ODD or
Odd - Even = Odd x can be ODD and y EVEN
INSUFFICIENT
2) 0< y^3 - y <24
0< y(y^2 - 1) <24
0< y(y+1)(y-1)<24
--> bingo, we have 3 consecutive integers so the product is even.
--> y^3 - y = even -->y is even.
given --> x+y = odd
thus, x is odd
SUFFICIENT
y can be odd as odd-odd is even i think you must solve to get y=2 so even
Sent from my iPhone using GMAT Club Forum mobile app
Intern
Joined: 24 Aug 2017
Posts: 8
If x and y are positive integers and (x+y)^2 is an odd integer, is x [#permalink]
### Show Tags
15 Oct 2017, 16:09
If x and y are positive integers and $$(x+y)^2$$ is an odd integer, is x an even integer?
1) $$x^4 - y^3$$ is an odd integer
2) $$0<y^3-y<24$$
Math Expert
Joined: 02 Aug 2009
Posts: 5875
Re: If x and y are positive integers and (x+y)^2 is an odd integer, is x [#permalink]
### Show Tags
15 Oct 2017, 20:54
khalid228 wrote:
If x and y are positive integers and $$(x+y)^2$$ is an odd integer, is x an even integer?
1) $$x^4 - y^3$$ is an odd integer
2) $$0<y^3-y<24$$
Hi...
If $$(x+y)^2$$ is an odd integer and both x and y are integers, one of them will be odd and other even.
Let's see the statements..
1) $$x^4 - y^3$$ is an odd integer
Still not clear.
It gives the same info as the main statement, that is, one is odd and other is even.
Insufficient
2) $$0<y^3-y<24$$
Now what does $$y^3-y$$ mean..
$$y^3-y=y(y^2-1)=(y-1)*y*(y+1)$$
y is positive integer so least value of equation will be when y=1..
So (y-1)y(y+1)=0*1*2=0.... But it has to be >0... So y is not 1
When y=2, y(y-1)(y+1)=1*2*3=6...... possible
When y=3, #=2*3*4=24....but the # has to be <24... Not possible
So y =2, and x will be ODD
Suff
B
_________________
Absolute modulus :http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html
GMAT online Tutor
Math Expert
Joined: 02 Aug 2009
Posts: 5875
Re: If x and y are positive integers and (x+y)^2 is an odd integer, is x [#permalink]
### Show Tags
15 Oct 2017, 20:55
Merging posts.
Pl search before posting.
_________________
Absolute modulus :http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html
GMAT online Tutor
Re: If x and y are positive integers and (x+y)^2 is an odd integer, is x [#permalink] 15 Oct 2017, 20:55
Display posts from previous: Sort by
|
2018-06-17 22:25:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5839494466781616, "perplexity": 3155.034127211933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859817.15/warc/CC-MAIN-20180617213237-20180617233237-00589.warc.gz"}
|
http://mathhelpforum.com/algebra/90175-arguments-complex-numbers.html
|
# Thread: Arguments for complex numbers
1. ## Arguments for complex numbers
Hi everyone
For
a) - 9 = the argument is pi
9(cos pi+ isin pi)
b) 12i the arg is pi/2
12(cos (pi/2) + isin (pi/2) )
What is the argument for
a) -9
b) -12i
Anyone can help advise me? Im doing my revision...
Thank you so much.
2. $\displaystyle -9$ ? Didnt you just answer that yourself? The argument is $\displaystyle \pi$ .
$\displaystyle z = -12i$ this gives the argument $\displaystyle 12(cos(\frac{3\pi}{2})+i\cdot sin(\frac{3\pi}{2}))$
3. sorry
it was
a) 9
wats the argument for 9. thanks for the guide.
4. Originally Posted by anderson
sorry
it was
a) 9
wats the argument for 9. thanks for the guide.
If it's 9 and not -9, then the argument is 0.
9 = 9 + 0i = 9(cos 0 + i sin 0)
01
|
2018-03-17 11:05:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509984254837036, "perplexity": 8285.820125151835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257644877.27/warc/CC-MAIN-20180317100705-20180317120705-00084.warc.gz"}
|
https://www.physicsforums.com/threads/how-to-explain-the-right-hand-rule-to-an-alien-universe.968520/
|
# B How to explain "the right hand rule" to an alien universe
#### Rap
Suppose we are in communication with aliens who live in a different universe. I know, that's impossible, communication requires the exchange of mass or energy, which implies that we live in the same universe. But suppose it is true. I am wondering, can we and the aliens, via this communication alone, ever come to an agreement on what constitutes a "right handed rotation thru a 30 degree angle"?. I sit here and imagine a vector pointing upwards thru the center of my mouse. Using the right hand rule, I rotate the mouse +30 degrees about that axis. Is there any way, talking to the alien without exchanging any mass or energy, that I can get the alien to do the same?
We can't use beta decay, because they might be living in what we would call an antimatter universe.
I mean, if we could exchange photons, I could send the alien what I call a right-handed photon, spinning right-handedly about a vector axis pointing in the direction of propagation and say "this is what I call a right-handed rotation" and then we could come to an understanding. I might do the same with a massive particle, but if the alien lives in what I call an anti-matter universe, there would be an explosion, the alien could report this, and then I would have to send a massive antimatter particle. Or tell them to use beta decay.
But without a shared physical example, I don't see how it can be done using communication alone. Have I missed something?
Last edited:
Related General Physics News on Phys.org
#### fresh_42
Mentor
2018 Award
The crucial question is: Can we agree on a common language? The rest of any answer depends on the answer to that!
#### Rap
The crucial question is: Can we agree on a common language? The rest of any answer depends on the answer to that!
Actually, that is what I am asking. I think a clever human and a clever alien could eventually establish a fairly sophisticated common technical language, but I wonder if it would be impossible for them to establish a common definition of the direction of rotation. Let's suppose that they have done their best and have established a common technical language to the extent possible without sharing physical examples.
#### anorlunda
Mentor
Easier than a written language is a simple monochrome bitmap picture. Several SF stories, including the movie Contact, used that idea. The disc we sent with the Voyager probe also included pictures.
#### Dale
Mentor
Agreed, just send a picture.
#### DrGreg
Gold Member
But how would the aliens know how to decode the digital data we send and turn it into a picture? Wouldn't we need to explain the order in which the pixels have been scanned, e.g from left to right? In which case we would first need to explain the difference between "left" and "right".
Staff Emeritus
Define the "electron" as the particle produced in the decay $K^0_L \rightarrow \pi^+ e^- \nu$ with lower probability than the "positron", produced in the conjugate reaction.
The weak interaction couples to left-handed electrons and right-handed positrons.
Now you have defined "left" and "right" from physical processes.
#### Rap
Define the "electron" as the particle produced in the decay $K^0_L \rightarrow \pi^+ e^- \nu$ with lower probability than the "positron", produced in the conjugate reaction.
The weak interaction couples to left-handed electrons and right-handed positrons.
Now you have defined "left" and "right" from physical processes.
Ah, I did not know that about that decay. (I'm not a particle physicist, so I ask for your patience). Can the $K^0_L$ particle be unambiguously identified without knowing left from right and not knowing which is matter and which is antimatter?
#### anorlunda
Mentor
But how would the aliens know how to decode the digital data we send and turn it into a picture? Wouldn't we need to explain the order in which the pixels have been scanned, e.g from left to right? In which case we would first need to explain the difference between "left" and "right".
Then maybe they would interpret our picture as the left hand rule. It's just a convention. Ditto if they defined + current as the same as the direction of electron flow. Or if we had chosen the opposite convention for left/right hand names.
What difference would it make to the aliens or to us?
Staff Emeritus
. Can the K0LK^0_L particle be unambiguously identified without knowing left from right and not knowing which is matter and which is antimatter?
Yes. It's the longer-lived neutral kaon.
#### Rap
Then maybe they would interpret our picture as the left hand rule. It's just a convention. Ditto if they defined + current as the same as the direction of electron flow. Or if we had chosen the opposite convention for left/right hand names.
What difference would it make to the aliens or to us?
I guess it wouldn't make any difference to them alone or us alone. But if there was an upcoming meeting between me and an alien and we shook hands, it would be embarrasing for both of us if we offered what the other considered the "wrong" hand. Not to mention the possible nanosecond of embarrassment if we couldn't sort out what each one meant by "matter" and "antimatter" before we exploded.
#### Rap
Yes. It's the longer-lived neutral kaon.
Ok, I'm studying up on K mesons, etc., please let me know if I get this right. The $K^0_L$ particle has an antiparticle and they are distinct. They are continually morphing into each other, but the $K^0_L$ meson is the one that spends a longer time being itself, and when it decays, tends to produce an electron. So without sharing an example, we could come to an agreement with the aliens on what constitutes matter and what constitutes antimatter. I'm not sure what a "left-handed electron" is, could you explain that? Also, I wonder if we could now tell the aliens to look at matter beta decay (or antimatter beta decay, if that's easier for them) to define left and right?
Staff Emeritus
The K0LK^0_L particle has an antiparticle and they are distinct. They are continually morphing into each other, but the K0LK^0_L meson is the one that spends a longer time being itself,
No.
we could come to an agreement with the aliens on what constitutes matter and what constitutes antimatter.
Yes.
I'm not sure what a "left-handed electron" is, could you explain that?
Not at B-level. But if you like, it's unnecessary. Once we have defined what positive and negative charge is, we can turn that into left and right handed.
#### Matt Benesi
Gold Member
Just send them some math, and use intersect language to ask "which way do you rotate?"
#### Attachments
• 2.1 MB Views: 126
#### sophiecentaur
Gold Member
Perhaps the mode of communication that we find successful would supply us with a clue / common reference. If EM were involved then both ends would be using the same Maxwellian Maths (no?).
#### Matt Benesi
Gold Member
That's interesting.
If you have GEM, you have a definite agreeable rotation direction measurement direction, unless you're dealing with stuff you can't measure.
### Want to reply to this thread?
"How to explain "the right hand rule" to an alien universe"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
2019-04-22 18:19:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5893171429634094, "perplexity": 966.3978735084546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578577686.60/warc/CC-MAIN-20190422175312-20190422201312-00365.warc.gz"}
|
https://www.physicsforums.com/threads/linear-algebra-symmetric-positive-definite-problem.572108/
|
# Homework Help: Linear Algebra: Symmetric/Positive Definite problem
1. Jan 29, 2012
### Scootertaj
1. Let A$\in$Rnxn be a symmetric matrix, and assume that there exists a matrix B$\in$Rmxn such that A=BTB.
a) Show that A is positive semidefinite
B) Show that if B has full rank, then A is positive definite
2. Relevant equations:
This is for an operations research class, so most of the definitions revolve around minimizing/maximizing.
However, alternate definitions:
Positive Definite: A is positive definite if for all non-null vectors h, hTAh > 0.
Symmetric: if AT=A.
Semidefinite: hTAh ≥ 0
3. The attempt at a solution
Here's some work:
AT = A ; A = BTB.
So, AT = BTB → ATA = BTBA = AA = A2.
So, ATA ≥ 0.
But, that's not quite what I want.
Last edited: Jan 29, 2012
2. Jan 29, 2012
### Dick
$h^T A h=h^T B^T B h$. That's the inner product $(B h)^T (B h)$. Use the properties of the inner product.
Last edited: Jan 29, 2012
3. Jan 29, 2012
### Scootertaj
Dick,
I don't recall any specific properties of inner products that would help except <x,x> >= 0. But, I don't see how that applies.
4. Jan 29, 2012
### Dick
And <x,x>=0 only if x=0. I think it applies a lot. $(B h)^T (B h)=<Bh, Bh>$.
5. Jan 30, 2012
### Scootertaj
D'oh! I must be too tired, completely forgot that the inner product would be the same as doing the transpose first.
Thank you a lot Dick, you always seem to help out a lot.
|
2018-09-21 02:00:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8064991235733032, "perplexity": 3459.1534449374644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156724.6/warc/CC-MAIN-20180921013907-20180921034307-00555.warc.gz"}
|
http://openstudy.com/updates/50cccd54e4b0031882dc10c1
|
ValentinaT What is the length of the longest side of a triangle that has vertices at (-5, 2) (1,-6) and (1,2)? one year ago one year ago
1. v4xN0s
Use the distance formula
2. ValentinaT
Distance formula?
3. v4xN0s
LOL
4. v4xN0s
ok its $d=\sqrt{(x _{1}-x _{2})^2+(y _{1}-y _{2})^2}$
5. MathLegend
I would also suggest labeling your coordinates
6. v4xN0s
MathLegend ur not a math legend at all
7. MathLegend
A= (-5, 2) B = (1,-6) C = (1,2)
8. MathLegend
So when using the distance formula... you can solve for side AB first.
9. ValentinaT
Thank you.
10. MathLegend
So using that formula, can you label the coordinates as (x1, y1) & (x2,y2)
11. hba
|dw:1355599903259:dw|
12. MathLegend
A= (-5, 2) B = (1,-6) So lets solve for side AB first... so, "A" comes first right? So let that be your x1 & y1 Then, "B" can be your x2 & y2
13. MathLegend
Do you understand so far @ValentinaT ?
14. MathLegend
All I did was label them so that we can plug it into that distance formula.
15. MathLegend
@ValentinaT let me know when you get back so we can work this out together. :)
16. ValentinaT
Yeah, I'm getting it, thank you.
17. ValentinaT
$\frac{ -6 - 2 }{ -5 - 1 } = \frac{ -8 }{ -6}$
18. MathLegend
So for AB (1+5)^2+(-6-2)^2
19. MathLegend
I'm sorry the formula is actually x2-x1 and y2-y1 the above poster just mixed up the two.
20. ValentinaT
Okay.
21. MathLegend
(1+5)^2+(-6-2)^2 (6)^2+(-8)^2 36+64
22. MathLegend
36+64 = 100
23. MathLegend
$\sqrt{100}$
24. ValentinaT
10
25. MathLegend
Now, we need the square root... because if you notice that entire formula had the square root symbol over it
26. MathLegend
Good.
27. MathLegend
So side AB = 10
28. MathLegend
So that is one side. So lets go for side BC
29. MathLegend
B = (1,-6) x1 y1 C = (1,2) x2 y2
30. MathLegend
@ValentinaT do you feel comfortable trying it out on your own? Tell me what you get and I'll check to see if your answer for side BC is correct.
31. ValentinaT
$\frac{ 2 - -6 }{ 1 - 1 } = \frac{ 8 }{ 0 }$
32. MathLegend
Remember we are not trying to find a slope.
33. MathLegend
We are looking for the distance between the vertices.
34. MathLegend
(1-1)^2 + (2+6)^2
35. MathLegend
To get that all I did was plug it into the formula. (x2-x1)^2+(y2-y1)^2
36. MathLegend
(1-1)^2 + (2+6)^2 Try solving that.
37. ValentinaT
Okay, 0 + 64?
38. MathLegend
Good
39. MathLegend
0 + 64 = 64 $\sqrt{64}$
40. ValentinaT
So, 8 as the square root.
41. MathLegend
Yes, so side BC = 8
42. MathLegend
Now, try side AC
43. MathLegend
(x2-x1)^2+(y2-y1)^2
44. ValentinaT
A = -5, 2 = x1, y1 C = 1, 2 = x2, y2 $\frac{ 1 - -5^2}{ 2 - 2^2 } = \frac{ 6 }{ 1 }$ so 36?
45. MathLegend
(x2-x1)^2+(y2-y1)^2 (1+5)^2 + (2-2)^2 Do you understand this step?
46. ValentinaT
36 + 0 Yeah I get it, I just couldn't figure out how to write it.
47. MathLegend
Good so if you took the square root of 36 $\sqrt{36}$
48. MathLegend
So now, we know the length of the longest side is AB
49. ValentinaT
Okay, thank you!
|
2014-04-16 04:38:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746907114982605, "perplexity": 4136.001491584259}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-the-system-x-y-2z-5-x-2y-z-8-2x-3y-z-13
|
# How do you solve the system x+y-2z=5, x+2y+z=8, 2x+3y-z=13?
Mar 20, 2018
$x = 5 k + 2$, $y = 3 - 3 k$ and $z = k$
#### Explanation:
Perform the Gauss Jordan elimination on the augmented matrix
$A = \left(\begin{matrix}1 & 1 & - 2 & | & 5 \\ 1 & 2 & 1 & | & 8 \\ 2 & 3 & - 1 & | & 13\end{matrix}\right)$
I have written the equations not in the sequence as in the question in order to get $1$ as pivot.
Perform the folowing operations on the rows of the matrix
$R 2 \leftarrow R 2 - R 1$; $R 3 \leftarrow R 3 - 2 R 1$
$A = \left(\begin{matrix}1 & 1 & - 2 & | & 5 \\ 0 & 1 & 3 & | & 3 \\ 0 & 1 & 3 & | & 3\end{matrix}\right)$
Consequently this system has infinite solutions. After choosing $z = k$, $y$ must be $3 - 3 k$ and $x$ must be $5 k + 2$
|
2019-12-13 05:21:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9059845209121704, "perplexity": 455.19154614536365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540548544.83/warc/CC-MAIN-20191213043650-20191213071650-00078.warc.gz"}
|
https://cracku.in/cat/quantitative-aptitude/data-sufficiency/cheatsheet
|
## Data Sufficiency
Theory
According to us these are the trickiest questions in Quant so proceed with caution. Most often committed mistake in Data Sufficiency is assuming data that was not mentioned in the question. For eg, assuming the relationship mentioned in statement 1 while solving for statement 2. As mentioned in tips, you do not have to solve the problem to get an answer. All you need to make sure is that a unique answer exists and is derivable from the equation given.
Tip
• Assume each condition individually when testing the first two options. Do not use conclusions drawn from using one statement with the other.
• If you realize that the problem can be solved to give a UNIQUE solution do not continue solving it.
• Sometimes the information given may look like it is enough to solve a problem. For eg, two linear equations with two variables. In such a case check that the equations are not inconsistent or do not give infinitely many answers.
|
2022-12-05 00:46:04
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8365851044654846, "perplexity": 283.59605668517213}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00511.warc.gz"}
|
https://www.mathworks.com/help/matlab/ref/spdiags.html
|
Documentation
# spdiags
Extract nonzero diagonals and create sparse band and diagonal matrices
## Syntax
``Bout = spdiags(A)``
``[Bout,id] = spdiags(A)``
``Bout = spdiags(A,d)``
``S = spdiags(Bin,d,m,n)``
``S = spdiags(Bin,d,A)``
## Description
example
````Bout = spdiags(A)` extracts the nonzero diagonals from `m`-by-`n` matrix `A` and returns them as the columns in `min(m,n)`-by-`p` matrix `Bout`, where `p` is the number of nonzero diagonals.```
example
````[Bout,id] = spdiags(A)` also returns the diagonal numbers `id` for the nonzero diagonals in `A`. The size of `Bout` is `min(m,n)`-by-`length(id)`.```
example
````Bout = spdiags(A,d)` extracts the diagonals in `A` specified by `d` and returns them as the columns of `min(m,n)`-by-`length(d)` matrix `Bout`.```
example
````S = spdiags(Bin,d,m,n)` creates an `m`-by-`n` sparse matrix `S` by taking the columns of `Bin` and placing them along the diagonals specified by `d`.```
example
````S = spdiags(Bin,d,A)` replaces the diagonals in `A` specified by `d` with the columns of `Bin`.```
## Examples
collapse all
Create a tridiagonal matrix using three vectors, change some of the matrix diagonals, and then extract the diagonals.
Create a 9-by-1 vector of ones, and then create a tridiagonal matrix using the vector. View the matrix elements.
```n = 9; e = ones(n,1); A = spdiags([e -2*e e],-1:1,n,n); full(A)```
```ans = 9×9 -2 1 0 0 0 0 0 0 0 1 -2 1 0 0 0 0 0 0 0 1 -2 1 0 0 0 0 0 0 0 1 -2 1 0 0 0 0 0 0 0 1 -2 1 0 0 0 0 0 0 0 1 -2 1 0 0 0 0 0 0 0 1 -2 1 0 0 0 0 0 0 0 1 -2 1 0 0 0 0 0 0 0 1 -2 ```
Change the values on the main (`d = 0`) diagonal of `A`.
```Bin = abs(-(n-1)/2:(n-1)/2)'; d = 0; A = spdiags(Bin,d,A); full(A)```
```ans = 9×9 4 1 0 0 0 0 0 0 0 1 3 1 0 0 0 0 0 0 0 1 2 1 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 1 2 1 0 0 0 0 0 0 0 1 3 1 0 0 0 0 0 0 0 1 4 ```
Finally, recover the diagonals of `A` as the columns in a matrix.
```Bout = spdiags(A); full(Bout)```
```ans = 9×3 1 4 0 1 3 1 1 2 1 1 1 1 1 0 1 1 1 1 1 2 1 1 3 1 0 4 1 ```
Extract the nonzero diagonals of a matrix and examine the output format of `spdiags`.
Create a matrix containing a mix of nonzero and zero diagonals.
```A = [0 5 0 10 0 0 0 0 6 0 11 0 3 0 0 7 0 12 1 4 0 0 8 0 0 2 5 0 0 9];```
Extract the nonzero diagonals from the matrix. Specify two outputs to return the diagonal numbers.
`[Bout,d] = spdiags(A)`
```Bout = 5×4 0 0 5 10 0 0 6 11 0 3 7 12 1 4 8 0 2 5 9 0 ```
```d = 4×1 -3 -2 1 3 ```
The columns of the first output `Bout` contain the nonzero diagonals of `A`. The second output `d` lists the indices of the nonzero diagonals of `A`. The longest nonzero diagonal in `A` is in column 3 of `Bout`. To give all columns of `Bout` the same length, the other nonzero diagonals of `A` have extra zeros added to their corresponding columns in `Bout`. For `m`-by-`n` matrices with `m < n`, the rules are:
• For nonzero diagonals below the main diagonal of `A`, extra zeros are added at the tops of columns (as in the first two columns of `Bout`).
• For nonzero diagonals above the main diagonal of `A`, extra zeros are added at the bottoms of columns (as in the last column of `Bout`).
`spdiags` pads `Bout` with zeros in this manner even if the longest diagonal is not returned in `Bout`.
Create a 5-by-5 random matrix.
`A = randi(10,5,5)`
```A = 5×5 9 1 2 2 7 10 3 10 5 1 2 6 10 10 9 10 10 5 8 10 7 10 9 10 7 ```
Extract the main diagonal, and the first diagonals above and below it.
```d = [-1 0 1]; Bout = spdiags(A,d)```
```Bout = 5×3 10 9 0 6 3 1 5 10 10 10 8 10 0 7 10 ```
Try to extract the fifth super-diagonal (`d = 5`). Because `A` has only four super-diagonals, `spdiags` returns the diagonal as all zeros of the same length as the main (`d = 0`) diagonal.
`B5 = spdiags(A,5)`
```B5 = 5×1 0 0 0 0 0 ```
Examine how `spdiags` creates diagonals when the columns of the input matrix are longer than the diagonals they are replacing.
Create a 6-by-7 matrix of the numbers 1 through 6.
`Bin = repmat((1:6)',[1 7])`
```Bin = 6×7 1 1 1 1 1 1 1 2 2 2 2 2 2 2 3 3 3 3 3 3 3 4 4 4 4 4 4 4 5 5 5 5 5 5 5 6 6 6 6 6 6 6 ```
Use `spdiags` to create a square 6-by-6 matrix with several of the columns of `Bin` as diagonals. Because some of the diagonals only have one or two elements, there is a mismatch in sizes between the columns in `Bin` and diagonals in `A`.
```d = [-4 -2 -1 0 3 4 5]; A = spdiags(Bin,d,6,6); full(A)```
```ans = 6×6 1 0 0 4 5 6 1 2 0 0 5 6 1 2 3 0 0 6 0 2 3 4 0 0 1 0 3 4 5 0 0 2 0 4 5 6 ```
Each of the columns in `Bin` has six elements, but only the main diagonal in `A` has six elements. Therefore, all the other diagonals in `A` truncate the elements in the columns of `Bin` so that they fit on the selected diagonals:
The way `spdiags` truncates the diagonals depends on the size of `m`-by-`n` matrix `A`. When $\mathit{m}\ge \mathit{n}$, the behavior is as pictured above:
• Diagonals below the main diagonal take elements from the tops of the columns first.
• Diagonals above the main diagonal take elements from the bottoms of columns first.
This behavior reverses when $\mathit{m}<\mathit{n}$:
```A = spdiags(Bin,d,5,6); full(A)```
```ans = 5×6 1 0 0 1 1 1 2 2 0 0 2 2 3 3 3 0 0 3 0 4 4 4 0 0 5 0 5 5 5 0 ```
• Diagonals above the main diagonal take elements from the tops of the columns first.
• Diagonals below the main diagonal take elements from the bottoms of columns first.
## Input Arguments
collapse all
Input matrix. This matrix is typically (but not necessarily) sparse.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `logical`
Complex Number Support: Yes
Diagonal numbers, specified as a scalar or vector of positive integers. The diagonal numbers follow the same conventions as `diag`:
• `d < 0` is below the main diagonal, and satisfies ```d >= -(m-1)```.
• `d = 0` is the main diagonal.
• `d > 0` is above the main diagonal, and satisfies ```d <= (n-1)```.
An `m`-by-`n` matrix `A` has `(m + n - 1)` diagonals. These diagonals are specified in the vector `d` using indices from `-(m-1)` to `(n-1)`. For example, if `A` is 5-by-6, it has 10 diagonals, which are specified in the vector `d` using the indices -4, -3 , ... 4, 5. The following diagram illustrates this diagonal numbering.
If you specify a diagonal that lies outside of `A` (such as `d = 7` in the example above), then `spdiags` returns that diagonal as all zeros.
Example: `spdiags(A,[3 5])` extracts the third and fifth diagonals from `A`.
Diagonal elements, specified as a matrix. This matrix is typically (but not necessarily) full. `spdiags` uses the columns of `Bin` to replace specified diagonals in `A`. If the requested size of the output is `m`-by-`n`, then `Bin` must have `min(m,n)` columns.
With the syntax `S = spdiags(Bin,d,m,n)`, if a column of `Bin` has more elements than the diagonal it is replacing, and `m >= n`, then `spdiags` takes elements of super-diagonals from the lower part of the column of `Bin`, and elements of sub-diagonals from the upper part of the column of `Bin`. However, if `m < n` , then super-diagonals are from the upper part of the column of `Bin`, and sub-diagonals from the lower part. For an example of this behavior, see Columns and Diagonals of Different Sizes.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `logical`
Complex Number Support: Yes
Dimension sizes, specified as nonnegative scalar integers. `spdiags` uses these inputs to determine how large a matrix to create.
Example: `spdiags(Bin,d,300,400)` creates a 300-by-400 matrix with the columns of `B` placed along the specified diagonals `d`.
## Output Arguments
collapse all
Diagonal elements, returned as a full matrix. The columns of `Bout` contain diagonals extracted from `A`. Any elements of `Bout` corresponding to positions outside of `A` are set to zero.
Diagonal numbers, returned as a column vector. See `d` for a description of the diagonal numbering.
Output matrix. `S` takes one of two forms:
• With `S = spdiags(Bin,d,A)`, the specified diagonals in `A` are replaced with the columns in `Bin` to create `S`.
• With `S = spdiags(Bin,d,m,n)`, the `m`-by-`n` sparse matrix `S` is formed by taking the columns of `Bin` and placing them along the diagonals specified by `d`.
|
2019-11-19 10:33:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8087541460990906, "perplexity": 1253.8207700758012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670135.29/warc/CC-MAIN-20191119093744-20191119121744-00055.warc.gz"}
|
https://math.stackexchange.com/questions/2028076/given-the-following-linear-transformation-find-the-matrix-associated-to-varph
|
# Given the following linear transformation, find the matrix associated to $\varphi$ through a given base.
Text of the exercise:
Consider the linear transformation $\varphi:\mathbb{R}^3 \rightarrow \mathbb{R}^3$ defined by:
$\varphi(1,0,0) = (0,1,0)\\ \varphi(0,1,1) = (1,1,0) \\\varphi(0,0,1) =(1,0,0)$
Is $B={(0,1,1),(0,1,0),(1,0,0)}$ a basis of $\mathbb{R}^3$? If the answer is yes, find the matrix $M_{\varphi}^B$ associated to $\varphi$ through $B$.
Reasoning:
$B$ is a basis of $\mathbb{R}^3$ since it is a set of three linearly independent vectors (proof is given by the fact that the matrix constituted by their coordinates is not singular) in a three-dimensional space.
I know from the definition of $\varphi$ the result of the transformation of two of the vectors of $B$; precisely $\varphi(1,0,0)=(0,1,0)$ and $\varphi(0,1,1)=(1,1,0)$. In order to infer the last one, I could use the definind property of a linear mapping, i.e. that a function $f:V\rightarrow W$ is linear if $\forall \textbf{v}_{1},\textbf{v}_{2}\in V$, $\forall\lambda_{1},\lambda_{2}\in \mathbb{K}$ then $f(\lambda_{1}\textbf{v}_{1}+\lambda_{2}\textbf{v}_{2}=\lambda_{1}f(\textbf{v}_{1})+\lambda_{1}f(\textbf{v}_{1})$, so that: $$\varphi(0,1,0)=\varphi(0,1,1)-\varphi(0,0,1)=(1,1,0)-(1,0,0)=(0,1,0)$$ The asked matrix is thus: $$M_{\varphi}^B=\begin{bmatrix} 1&0&0\\1&1&1\\0&0&0 \end{bmatrix}$$
I'd greatly appreciate a feedback. Thank you all!
• Well, you can check yourself through matrix multiplication. For instance, is $(1,0,0)$ mapped onto $(0,1,0)$ through this multiplication? Same for the other two. Notice that your stated matrix is singular! – imranfat Nov 23 '16 at 23:36
$e_1 = (0,1,1), e_2 = (0,1,0), e_3 = (1,0,0)$
$\varphi e_1 = \varphi (0,1,1) = (1,1,0) = e_2 + e_3\\ \varphi e_2 = \varphi (0,1,1)- \varphi (0,0,1)= (1,1,0) - (1,0,0) = (0,1,0) = e_2\\ \varphi e_3 = \varphi (1,0,0)= (0,0,1) = e_1 - e_2$
$M = \begin{bmatrix} 0&0&1\\1&1&-1\\1&0&0\end{bmatrix}$
• I'm not sure about $\varphi e_2$...maybe it is $\varphi e_2=\varphi (0,1,1)-\varphi(0,0,1)$, but check it please. – MattG88 Nov 24 '16 at 0:08
|
2019-05-24 21:37:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9232140183448792, "perplexity": 146.83915132010708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257767.49/warc/CC-MAIN-20190524204559-20190524230559-00263.warc.gz"}
|
https://tex.stackexchange.com/questions/273265/why-cant-xetex-handle-this-dank-meme
|
Why can't XeTeX handle this dank meme?
MWE
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\includegraphics{dank}
\end{document}
Oddly enough, only this picture fails and only when compiling with XeTeX (as opposed to pdfTeX or LuaTeX):
ERROR: Dimension too large.
--- TeX said ---
b
l.5 \includegraphics{dank}
--- HELP ---
From the .log file...
I can't work with sizes bigger than about 19 feet.
Continue and I'll use the largest value I can.
The dank meme itself is a bit of an eyesore, so I'm not displaying it inline. Please check the shasum to make sure it's still the same file when you download it.
\$ shasum dank.jpg
b889bc28b2ccd079073267159db0562f45b58e4d dank.jpg
What's special about this image? Why does it fail? Its natural setting (as seen with LuaTeX or pdfTeX) is well under 19 feet.
• – erik Oct 16 '15 at 1:07
• Can I ask what context prompted dank memes in LaTeX? This question's title took me completely by surprise. – Arun Debray Oct 16 '15 at 3:00
• @ArunDebray I'm helping someone clean up and improve a LaTeX guide and, alas, it had this dank meme. – Sean Allred Oct 16 '15 at 3:01
• @SeanAllred Ah, thanks! Well, if erik's suggestion doesn't work for you, you have a wonderful excuse to substitute a different picture... – Arun Debray Oct 16 '15 at 3:02
• @ArunDebray I know that changing/re-saving/etc. the picture will allow me to continue – I'm more interested in what's stumped XeTeX in the first place ;) – Sean Allred Oct 16 '15 at 3:23
convert -density 72x72 dank.jpg dank1.jpg
To determine if this is the problem with a given image, use (copied from @egreg's comment): identify -verbose dank.jpg | grep Resolution.
• @SeanAllred With identify -verbose dank.jpg|grep Resolution you get no information, but you get Resolution: 72x72 for dank1.jpg – egreg Oct 16 '15 at 15:21
|
2020-04-06 09:16:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7893293499946594, "perplexity": 7191.578274359446}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371620338.63/warc/CC-MAIN-20200406070848-20200406101348-00388.warc.gz"}
|
http://mathhelpforum.com/statistics/129665-expected-value-print.html
|
# Expected Value
Printable View
• Feb 19th 2010, 01:53 PM
tootiebee
Expected Value
No idea where to start.
The owner of a small firm has just purchased a personal computer, which she expects will serve her for the next two years. The owner has been told that she "must" buy a surge suppressor to provide protection for her new hardware against possible surges or variations in the electrical current, which have the capacity to damage the computer. The amount of damage to the computer depends on the strength of the surge. It has been estimated that there is a 2% chance of incurring 450 dollars damage, 6% chance of incurring 150 dollars damage, and 11% chance of 100 dollars damage. An inexpensive suppressor, which would provide protection for only one surge, can be purchased. How much should the owner be willing to pay if she makes decisions on the basis of expected value?
• Feb 19th 2010, 02:00 PM
vince
Quote:
Originally Posted by tootiebee
No idea where to start.
The owner of a small firm has just purchased a personal computer, which she expects will serve her for the next two years. The owner has been told that she "must" buy a surge suppressor to provide protection for her new hardware against possible surges or variations in the electrical current, which have the capacity to damage the computer. The amount of damage to the computer depends on the strength of the surge. It has been estimated that there is a 2% chance of incurring 450 dollars damage, 6% chance of incurring 150 dollars damage, and 11% chance of 100 dollars damage. An inexpensive suppressor, which would provide protection for only one surge, can be purchased. How much should the owner be willing to pay if she makes decisions on the basis of expected value?
it is always good to define what random variable you want to compute hte xpected value for. In this case, youre interested in the expected value of the random variable X, which is the amount of \$ spent on repairing the computer from surges. With the given info, we can compute
$
E[X] = .02*450+.06*150+.11*100+(1-.02-.06-.11)*0$
Since the owner would only want protection against surges if that protection cost less than the amount she would pay on average to repair the computer from surges, she would be willing to pay E[X].
• Feb 19th 2010, 02:04 PM
tootiebee
Expected Value #2
To examine the effectiveness of its four annual advertising promotions, a mail order company has sent a questionnaire to each of its customers, asking how many of the previous year's promotions prompted orders that would not have otherwise been made. The accompanying table lists the probabilities that were derived from the questionnaire, where X is the random variable representing the number of promotions that prompted orders. If we assume that overall customer behavior next year will be the same as last year, what is the expected number of promotions that each customer will take advantage of next year by ordering goods that otherwise would not be purchased?
X01234P(X)0.0840.2240.3130.1960.183
Expected value =
A previous analysis of historical records found that the mean value of orders for promotional goods is 23 dollars, with the company earning a gross profit of 28% on each order. Calculate the expected value of the profit contribution next year.
Expected value =
The fixed cost of conducting the four promotions is estimated to be 10000 dollars with a variable cost of 5.25 dollars per customer for mailing and handling costs. What is the minimum number of customers required by the company in order to cover the cost of promotions? (Round your answer up to the next highest integer.) Break even point =
|
2017-06-25 16:20:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2586888372898102, "perplexity": 622.1627504552906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320539.37/warc/CC-MAIN-20170625152316-20170625172316-00084.warc.gz"}
|
https://www.zbmath.org/?q=an%3A1454.20087
|
zbMATH — the first resource for mathematics
The co-surface graph and the geometry of hyperbolic free group extensions. (English) Zbl 1454.20087
Summary: We introduce the co-surface graph $$\mathcal{CS}$$ of a finitely generated free group $$\mathbb{F}$$ and use it to study the geometry of hyperbolic group extensions of $$\mathbb{F}$$. Among other things, we show that the Gromov boundary of the co-surface graph is equivariantly homeomorphic to the space of free arational $$\mathbb{F}$$-trees and use this to prove that a finitely generated subgroup of $$\mathrm{Out}(\mathbb{F})$$ quasi-isometrically embeds into the co-surface graph if and only if it is purely atoroidal and quasi-isometrically embeds into the free factor complex. This answers a question of I. Kapovich. Our earlier work [Geom. Topol. 22, No. 1, 517–570 (2018; Zbl 1439.20034)] shows that every such group gives rise to a hyperbolic extension of $$\mathbb{F}$$, and here we prove a converse to this result that characterizes the hyperbolic extensions of $$\mathbb{F}$$ arising in this manner. As an application of our techniques, we additionally obtain a Scott-Swarup type theorem for this class of extensions.
Reviewer: Reviewer (Berlin)
MSC:
20F67 Hyperbolic groups and nonpositively curved groups 20F28 Automorphism groups of groups 20E05 Free nonabelian groups 20E22 Extensions, wreath products, and other compositions of groups 20F65 Geometric group theory 57M07 Topological methods in group theory
Full Text:
|
2021-06-18 21:28:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3636966049671173, "perplexity": 419.78807522071884}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00418.warc.gz"}
|
https://math.stackexchange.com/questions/2151942/calculating-limit-of-recursive-sequence
|
# Calculating limit of recursive sequence
I am preparing for a test and wanted to ask you
$a_0 = 1; a_{n+1} = \sqrt{a_n} + \frac{15}{4}$
I already showed its strictly monotonically increasing. Now im trying to calculate the limit.
$$\lim a_{n+1} = \lim a_n \Leftrightarrow a = \sqrt{a} + \frac{15}{4} \Leftrightarrow a = (a- \frac{15}{4})^2 \Leftrightarrow 0 = a^2 - \frac{17a}{2} + \frac{225}{16}$$
$$\Longrightarrow a_1 = 2.25 , a_2 = 6.25$$ So you basically take the first limit $a_1 = 2.25$ . Is that correct? Is there better way of calculating the limit? Thank you
• Everything looks good except for the last statement. Each term is no less than $\frac{15}{4}=3.75$, so the limit can't be $2.25$. – Nick D. Feb 19 '17 at 20:21
• Thats what happens if you do too much maths... Thank you – user391105 Feb 19 '17 at 20:22
• Isn't showing an upper bound for the sequence required? – rookie Feb 19 '17 at 20:24
• You still have to prove the sequence is bounded, otherwise the limit might be $+\infty$ – user261263 Feb 19 '17 at 20:42
• @Situ You cannot assume, a priori, that the sequence has a limit. – Mark Viola Feb 19 '17 at 20:55
The missing part is $a_n$ has upper bound.
Using induction, we'll show $a_n \le \frac {25} 4, \forall n \ge 1$.
For $n=1$ it's obvious. Suppose it's true for $n$. Then $a_{n+1}=\sqrt{a_n} + \frac{15}{4} \le \frac 5 2 + \frac {15} 4 = \frac {25} 4$
• I thought its enough to show its strictly monotonically increasing and it has a limit smaller than infinity to be bounded ( because $a_0 = 1$ ). Thank you! – user391105 Feb 19 '17 at 20:57
|
2019-05-20 11:02:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9092716574668884, "perplexity": 365.4941119844453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255943.0/warc/CC-MAIN-20190520101929-20190520123929-00378.warc.gz"}
|
https://datascience.stackexchange.com/questions/33988/how-to-provide-classified-feature-to-a-neural-network
|
# How to provide classified feature to a neural network
Let say I have a feature that may have one of 4 values, 1,2,3,4. I want to provide it as a NN input, what is proper way to do that?
I can map it like 1 -> -1.0 | 2 -> -0.3 | 3 -> 0.3 | 4 -> 1.0 , or something similar to have mean of 0.0 and std near 1.0. But in this example 1 is much different than 4 compared to difference of 3 and 4, and I don't want such discrimination cos 1 and 4 are equally different to me as 3 and 4.
Another way is to have 4 features where each of them relates to a class 1,2,3,4 and has value of 1 if initial feature has value that matches it, and has value of 0 if it doesn't match it.
Like this 1 -> [1,0,0,0], 2 -> [0,1,0,0], 3 -> [0,0,1,0], 4 -> [0,0,0,1] But I don't like here the fact that this feature gets to much weight, especially if it has many classes.
I was thinking to make separated layer just for this feature, do you have some better solution?
## 1 Answer
In your problem, the label is a categorical variable (you cannot infer relation between classes just from the label value) and not ordinal (value shows relation/distance between classes).
The solution that you propose:
1 -> [1,0,0,0], 2 -> [0,1,0,0], 3 -> [0,0,1,0], 4 -> [0,0,0,1]
is called One-Hot Encoding. This is one of the most popular ways of encoding the classes during preprocessing, in order to feed them to a classifier, thus I recommend you of doing so.
You mention that you are afraid that the
feature gets to much weight
because of the number of samples in the dataset. This is called class imbalance. A way of circumventing it is to pre-weight your samples for your classifier, please take a look at this method implementation provided by sklearn package.
• Hi, thank you for your answer. But I didn't quite understood the part with class imbalance. As I am familiar with it, it is when one output label has higher frequency than the other, right? How is it correlated with my problem? My concern was that if I have f.e. 3 simple regression features and one categorical which I apply OHE to, that this categorical will have more weight (if I have 4 classes in it) Meaning when regularization is being applied it will calculate 4 weights of that one feature and 3 weights of the other 3 features, which I don't like Jul 6, 2018 at 7:37
|
2022-05-23 19:14:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5226609706878662, "perplexity": 747.1231901101992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662560022.71/warc/CC-MAIN-20220523163515-20220523193515-00097.warc.gz"}
|
http://oneclickinfo.eu/online-casino/probability-expected-value-formula.php
|
# Probability expected value formula
Posted by
For the expected value, you need to evaluate the integral ∫40yf(y)dy=∫y3(4 −y)64dy. In probability theory, the expected value of a random variable, intuitively, is the long-run .. This is because an expected value calculation must not depend on the order in which the possible outcomes are presented, whereas in a conditionally. Expected Value for a Discrete Random Variable. E(X)=\sum x_i p_i. x_i= value of the i th outcome p_i = probability of the i th outcome. According to this formula. Find an article Search Feel like "cheating" at Statistics? The intuition however remains the same: Back to Top Find an Expected Value for a Discrete Random Variable You can think of an expected value as a mean , or average , for a probability distribution. Expected value is exactly what you might think it means intuitively: Assume one of the patients is chosen at random.
|
2018-07-19 17:27:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9638872146606445, "perplexity": 388.927245223493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591150.71/warc/CC-MAIN-20180719164439-20180719184439-00201.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0A5J
|
Example 12.18.2. Let $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$ be additive categories. Suppose that
$\otimes : \mathcal{A} \times \mathcal{B} \longrightarrow \mathcal{C}, \quad (X, Y) \longmapsto X \otimes Y$
is a functor which is bilinear on morphisms, see Categories, Definition 4.2.20 for the definition of $\mathcal{A} \times \mathcal{B}$. Given complexes $X^\bullet$ of $\mathcal{A}$ and $Y^\bullet$ of $\mathcal{B}$ we obtain a double complex
$K^{\bullet , \bullet } = X^\bullet \otimes Y^\bullet$
in $\mathcal{C}$. Here the first differential $K^{p, q} \to K^{p + 1, q}$ is the morphism $X^ p \otimes Y^ q \to X^{p + 1} \otimes Y^ q$ induced by the morphism $X^ p \to X^{p + 1}$ and the identity on $Y^ q$. Similarly for the second differential.
Comment #3069 by anon on
Typo: Please delete "a" in "a complexes".
There are also:
• 4 comment(s) on Section 12.18: Double complexes and associated total complexes
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
2022-06-27 14:23:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9932177662849426, "perplexity": 694.8368015648784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00205.warc.gz"}
|
https://www.gamedev.net/forums/topic/67942-why-are-map-files-jerks/
|
• Advertisement
#### Archived
This topic is now archived and is closed to further replies.
# Why are .map files jerks?
This topic is 5935 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
If you intended to correct an error in the post then please contact us.
## Recommended Posts
Hey, I was just wondering if anyone has had any experience with the map files generated by q3radiant (the quake 3 editor). I try to read the vertices, but they make no sense whatsoever. I tried to make a 64x64x64 cube with a vertex at 0 and projecting out along postive axi (is that the plural of axis? doesn't look right, maybe axes, anyway) so each vertex should either be 0 or 64, right? Heh, tell q3rad that. This is what it gave to my poor engine:
{ "classname" "worldspawn" // brush 0 { ( 128 64 0 ) ( 0 64 0 ) ( 0 0 0 ) unnamed 0 0 0 0.500000 0.500000 0 0 0 ( 16 0 64 ) ( 16 64 64 ) ( 144 64 64 ) unnamed 0 0 0 0.500000 0.500000 0 0 0 ( 0 0 8 ) ( 128 0 8 ) ( 128 0 0 ) unnamed 0 0 0 0.500000 0.500000 0 0 0 ( 64 -8 8 ) ( 64 56 8 ) ( 64 56 0 ) unnamed 0 0 0 0.500000 0.500000 0 0 0 ( 128 64 8 ) ( 0 64 8 ) ( 0 64 0 ) unnamed 0 0 0 0.500000 0.500000 0 0 0 ( 0 64 8 ) ( 0 0 8 ) ( 0 0 0 ) unnamed 0 0 0 0.500000 0.500000 0 0 0 } }
at first I thought the vertices were in (x y z) format, z being depth, but I'm begining to think that the people who made this were weird and used z for height. That's okay, but what's up with values like 8 and 56 and 128?!? I understand that it might have to track the viewpoint, but it still doesn't make any sense. If anyone can help me make heads or tails of this, it would be greatly appreciated. Thanks, -Jesse
You raise the blade, you make the cut, you rearrange me 'till I'm sane, You lock the door and throw away the key, there's someone in my head, but it's not me. -Pink Floyd [Edit: Someone's an idiot and his name is Jesse...] Edited by - webmunkey on November 20, 2001 10:49:01 PM
#### Share this post
##### Share on other sites
Advertisement
I suggest you look at the quake map specs. You have the complete wrong idea what the .map file is. Those numbers you see ARE NOT the vertices. They are non-colinear points in a clockwise order on some plane. And the intersection of at least 4 planes define a solid region, aka brush. Most of the time you will see a brush made up of six planes like in your example. And the intersection of these six planes make up your cube you made in q3rad. But just read the quake map specs and you will understand the map file.
-SirKnight
#### Share this post
##### Share on other sites
Oh crap, I had no idea, but that would explain it. Okay, thanks Now I feel like an idiot...
Here is a link to a site I found about .map files (in case anyone is interested)http://www.claudec.com/claudecs_college/q3a_ldhb/q3aldh_theeditor.htm
-Jesse
Edited by - webmunkey on November 20, 2001 12:13:49 AM
#### Share this post
##### Share on other sites
Okay, this is more a math question than anything else, but anyhow, the 6 strips from the map file above are 3 vertices used to define a plane. Where the planes intersect is the edge of the object. Well, that''s just great... Actually not really because I have completely forgot the plane equation formual and how it is used. If anyone knows the equation that will give plane intersections in 3d space, that would be really good. I''d just trace vectors and then break out the ole trig, but I''m not sure how I can tell the computer to trace a vector in the correct direction so that it will eventually intersect. I think I''m just better off using a formula Any input is appreciated, thanks,
-Jesse
#### Share this post
##### Share on other sites
• Advertisement
• Advertisement
• ### Popular Tags
• Advertisement
• ### Popular Now
• 10
• 12
• 10
• 10
• 11
• Advertisement
|
2018-02-19 12:16:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3358626365661621, "perplexity": 790.5488446232198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812584.40/warc/CC-MAIN-20180219111908-20180219131908-00293.warc.gz"}
|
https://proofwiki.org/wiki/Relation_Segment_is_Increasing
|
# Relation Segment is Increasing
## Theorem
Let $S$ be a set.
Let $\RR, \QQ$ be relations on $S$ such that
$\RR \subseteq \QQ$
Let $x \in S$.
Then
$x^\RR \subseteq x^\QQ$
where $x^\RR$ denotes the $\RR$-segment of $x$.
## Proof
Let $y \in x^\RR$.
By definition of $\RR$-segment:
$\tuple {y, x} \in \RR$
By definition of subset:
$\tuple {y, x} \in \QQ$
Thus by definition of $\QQ$-segment:
$y \in x^\QQ$
$\blacksquare$
|
2020-08-07 12:24:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994412064552307, "perplexity": 2139.934433142305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737178.6/warc/CC-MAIN-20200807113613-20200807143613-00503.warc.gz"}
|
https://istopdeath.com/write-the-fraction-in-simplest-form-1-1-5/
|
# Write the Fraction in Simplest Form 1 1/5
A mixed number is an addition of its whole and fractional parts.
To write as a fraction with a common denominator, multiply by .
Write each expression with a common denominator of , by multiplying each by an appropriate factor of .
Combine.
Multiply by .
Combine the numerators over the common denominator.
Simplify the numerator.
Multiply by .
|
2023-02-04 22:23:13
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8714354634284973, "perplexity": 1204.7502174101082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00571.warc.gz"}
|
https://www.cs.princeton.edu/courses/archive/fall22/cos126/assignments/sierpinski/
|
# 4. Recursive Graphics
### Goals
• To write a library of static methods that performs geometric transforms on polygons.
• To write a program that plots a Sierpinski triangle.
• To design and develop a program that plots a recursive pattern of your own design.
### Getting Started
• This is an individual assignment. Before you begin coding, do the following:
• Download the project zip file for this assignment from TigerFile , which contains the files you will need for this assignment.
• Read the COS 126 Style Guide to ensure your code follows our conventions. Style is an important component of writing code, and not following guidelines will result in deductions.
### Background
Read Section 2.3 of the textbook. You may also find it instructive to work through the precept exercises. You should also familiarize yourself with the StdDraw API.
### Part I - Geometric Transformation Library
You will write a library of static methods that performs various geometric transforms on polygons. Mathematically, a polygon is defined by its sequence of vertices $$(x_0, y_0)$$, $$(x_1, y_1)$$, $$(x_2, y_2)$$, …. In Java, we will represent a polygon by storing the x- and y-coordinates of the vertices in two parallel arrays x[] and y[]. For example: o
// a polygon with these four vertices:
// (0, 0), (1, 0), (1, 2), (0, 1)
double x[] = { 0, 1, 1, 0 };
double y[] = { 0, 0, 2, 1 };
// Draw the polygon
StdDraw.polygon(x, y);
All drawings were generated with the coordinate axes, but you will not see them using only the code provided. You do not need to plot the axes.
### Transform2D.java
Write a two-dimensional transformation library Transform2D.java by implementing the following API:
public class Transform2D {
// Returns a new array object that is an exact copy of the given array.
// The given array is not mutated.
public static double[] copy(double[] array)
// Scales the polygon by the factor alpha.
public static void scale(double[] x, double[] y, double alpha)
// Translates the polygon by (dx, dy).
public static void translate(double[] x, double[] y, double dx, double dy)
// Rotates the polygon theta degrees counterclockwise, about the origin.
public static void rotate(double[] x, double[] y, double theta)
// Tests each of the API methods by directly calling them.
public static void main(String[] args)
}
#### Requirements
• The API expects the angles to be in degrees, but Java’s trigonometric functions take the arguments in radians. Use Math.toRadians() to convert from degrees to radians.
• The transformation methods scale(), translate(), and rotate() mutate the arrays, while copy() returns a new array.
• The main() method must test each method of the Transform2D library. In other words, you must call each Transform2D method from main(). You should experiment with various data so you are confident that your methods are implemented correctly.
• You can assume the following about the inputs: the arrays passed to scale(), translate(), and rotate() are not null, are the same length, and do not contain the values NaN, Double.POSITIVE_INFINITY, or Double.NEGATIVE_INFINITY.
• The array passed to copy() is not null.
• The values for the parameters alpha, theta, dx, and dy are not NaN, Double.POSITIVE_INFINITY, or Double.NEGATIVE_INFINITY.
#### copy()
Copies the given array into a new array object. The given array is not mutated.
The transformation methods (below) mutate a given polygon. This means that the parallel arrays representing the polygon are altered by the transformation methods. It is often useful to save a copy of the polygon before applying a transform.
For example:
public static void main(String[] args) {
// Set the x- and y-scale
StdDraw.setScale(-5.0, 5.0);
// Create original polygon
double[] x = { 0, 1, 1, 0 };
double[] y = { 0, 0, 2, 1 };
// Copy original polygon
double[] cx = copy(x);
double[] cy = copy(y);
// Rotate and translate the copy
rotate(cx, cy, -45.0);
translate(cx, cy, 1.0, 2.0);
// Draw the copy in blue
StdDraw.setPenColor(StdDraw.BLUE);
StdDraw.polygon(cx, cy);
// Draw the original polygon in red
StdDraw.setPenColor(StdDraw.RED);
StdDraw.polygon(x, y);
}
#### scale()
Scales the coordinates of each vertex $$(x_i, y_i)$$ by a factor α.
$$x_i^\prime = αx_i$$
$$y_i^\prime = αy_i$$
An example of testing code for scale() is provided below. However, we highly encourage you to experiment with various values to confirm that your methods work as required.
public static void main(String[] args) {
// Set the x- and y-scale
StdDraw.setScale(-5.0, +5.0);
// Create polygon
double[] x = { 0, 1, 1, 0 };
double[] y = { 0, 0, 2, 1 };
// Draw original polygon in red
StdDraw.setPenColor(StdDraw.RED);
StdDraw.polygon(x, y);
// Scale polygon by 2.0
scale(x, y, 2.0);
// Draw scaled polygon in blue
StdDraw.setPenColor(StdDraw.BLUE);
StdDraw.polygon(x, y);
}
#### translate()
Translates each vertex $$(x_i, y_i)$$ by a given offset $$(dx, dy)$$.
$$x_i^\prime = x_i + d_x$$
$$y_i^\prime = y_i + d_y$$
An example of testing code for translate() is provided below. However, we highly encourage you to experiment with various values to confirm that your methods work as required.
public static void main(String[] args) {
// Set the x- and y-scale
StdDraw.setScale(-5.0, +5.0);
// Create polygon
double[] x = { 0, 1, 1, 0 };
double[] y = { 0, 0, 2, 1 };
// Draw original polygon in red
StdDraw.setPenColor(StdDraw.RED);
StdDraw.polygon(x, y);
// Translate polygon by
// 2.0 in the x-direction
// 1.0 in the y-direction
translate(x, y, 2.0, 1.0);
// Draw translated polygon in blue
StdDraw.setPenColor(StdDraw.BLUE);
StdDraw.polygon(x, y);
}
#### rotate()
Rotates each vertex $$(x_i, y_i)$$ by $$\theta$$ degrees counterclockwise around the origin.
$$x_i^\prime = x_i \cos \theta − y_i \sin \theta$$
$$y_i^\prime = y_i \cos \theta + x_i \sin \theta$$
Note in the equations, $$x_i^\prime$$ and $$y_i^\prime$$ depend on the $$x_i$$ and $$y_i$$, respectively. In your implementation, you may want to make a copy of the $$x$$ and $$y$$ arrays before you compute the $$x^\prime$$ and $$y^\prime$$ arrays!
Also remember that Math.cos() and Math.sin() require radians.
An example of testing code for rotate() is provided below. However, we highly encourage you to experiment with various values to confirm that your methods work as required.
public static void main(String[] args) {
// Set the x- and y-scale
StdDraw.setScale(-5.0, +5.0);
// Create polygon
double[] x = { 0, 1, 1, 0 };
double[] y = { 0, 0, 2, 1 };
// Draw original polygon in red
StdDraw.setPenColor(StdDraw.RED);
StdDraw.polygon(x, y);
// Rotate polygon by 45 degrees ccw
rotate(x, y, 45.0);
// Draw rotated polygon in blue
StdDraw.setPenColor(StdDraw.BLUE);
StdDraw.polygon(x, y);
}
A polygon does not have to be located at the origin in order to rotate it; you can rotate any polygon about the origin using the same method. For example:
public static void main(String[] args) {
// Set the x- and y-scale
StdDraw.setScale(-5.0, +5.0);
// Create polygon
double[] x = { 1, 2, 2, 1 };
double[] y = { 1, 1, 3, 2 };
// Draw original polygon in red
StdDraw.setPenColor(StdDraw.RED);
StdDraw.polygon(x, y);
// Rotate polygon by 90 degrees ccw
rotate(x, y, 90.0);
// Draw rotated polygon in blue
StdDraw.setPenColor(StdDraw.BLUE);
StdDraw.polygon(x, y);
}
Note: All drawings were generated with the coordinate axes, but you will not see them using only the code provided. You do not need to plot the axes.
Rotating Around Arbitrary Point
Even though the rotate code above will only rotate polygons about the origin, you could easily transform about any other point $$(px, py)$$ using a simple technique. First, translate the polygon by $$(-px, -py)$$ so that the rotation point is now at the origin. Next, use your rotate() function above to rotate about the origin. Finally, move the polygon back to the rotation point by simply translating by $$(px, py)$$. Bam – three lines of code, and you’re done! This might be helpful if you draw polygons in your art project below.
#### main()
Tests each method of the Transform2D library by calling it. You should experiment with various data so you are confident that your methods are implemented correctly. Feel free to draw to standard draw using different polygons. You should not expect any command-line arguments.
### Part II - Sierpinski Triangle
The Sierpinski triangle is an example of a fractal pattern like the H-tree pattern from Section 2.3 of the textbook.
Order 1
Order 2
Order 3
Order 4
Order 5
Order 6
The Polish mathematician Wacław Sierpiński described the pattern in 1915, but it has appeared in Italian art since the 13th century. Though the Sierpinski triangle looks complex, it can be generated with a short recursive function.
Your primary task is to write a recursive function sierpinski() that plots a Sierpinski triangle of order n to standard drawing. Think recursively: sierpinski() should draw one filled equilateral triangle (pointed downwards) and then call itself recursively three times (with an appropriate stopping condition). It should draw one (1) filled triangle for n = 1; four (4) filled triangles for n = 2; and three (3) filled triangles for n = 3; and so forth.
### Sierpinski.java
When writing your program, exercise modular design by organizing it into four functions, as specified in the following API:
public class Sierpinski {
// Height of an equilateral triangle with the specified side length.
public static double height(double length)
// Draws a filled equilateral triangle with the specified side length
// whose bottom vertex is (x, y).
public static void filledTriangle(double x, double y, double length)
// Draws a Sierpinski triangle of order n, such that the largest filled
// triangle has the specified side length and bottom vertex (x, y).
public static void sierpinski(int n, double x, double y, double length)
// Takes an integer command-line argument n;
// draws the outline of an upwards equilateral triangle of length 1
// whose bottom-left vertex is (0, 0) and bottom-right vertex is (1, 0);
// and draws a Sierpinski triangle of order n that fits inside the outline.
public static void main(String[] args)
}
The formula for the height of an equilateral triangle of side length $$s$$ is $$h = s \times \frac{\sqrt{3}}{2}$$.
Here is the layout of the initial equilateral triangle. The top vertex lies at $$(\frac{1}{2},\frac {\sqrt{3}}{2})$$.
Here is the layout of an inverted equilateral triangle.
#### Requirements
1. The program must take an integer command-line argument n.
2. To draw a filled equilateral triangle, you should call the method StdDraw.filledPolygon() with appropriate arguments.
3. To draw an unfilled equilateral triangle, you should call the method StdDraw.polygon() with appropriate arguments.
4. You must not call StdDraw.save(), StdDraw.setCanvasSize(), StdDraw.setXscale(), StdDraw.setYscale(), or StdDraw.setScale(). These method calls interfere with grading.
5. You may only use StdDraw.BLACK to draw the outline triangle or the filled triangles.
#### Possible Progress Steps
Click to show possible progress steps
These are purely suggestions for how you might make progress. You do not have to follow these steps. Note that your final Sierpinski.java program should not be very long (no longer than Htree.java, not including comments and blank lines).
• Review Htree.java from the textbook, lecture and precept.
• Write a (non-recursive) function height() that takes the length of the side of an equilateral triangle as an argument and returns its height. The body of this method should be a one-liner.
• Test your height() function. This means try your height() function with various values. Does it return the correct calculation?
• In main(), draw the outline of the initial equilateral triangle. Use the height() function to calculate the vertices of the triangle.
• Write a (nonrecursive) function filledTriangle() that takes three (3) arguments (x, y, length) and draws a filled equilateral triangle (pointed downward) with the specified side length and the bottom vertex at $$(x, y)$$.
• To test your function, write main() so that it calls filledTriangle() a few times with different arguments. You will be able to use this function without modification in Sierpinski.java.
• Ultimately, you must write a recursive function sierpinski() that takes four (4) arguments (n, x, y, length) and plots a Sierpinski triangle of order n, whose largest triangle has the specified side length and bottom vertex $$(x, y)$$. However, to implement this function, use an incremental approach:
• Write a recursive function sierpinski() that takes one argument n, prints the value n, and then calls itself three times with the value n-1. The recursion should stop when n becomes 0.
• To test your function, write main() so that it takes an integer command-line argument n and calls sierpinski(n). Ignoring whitespace, you should get the following output when you call sierpinski() with n ranging from 0 to 5. Make sure you understand how this function works, and why it prints the numbers in the order it does.
> java-introcs Sierpinski 0
[no output]
> java-introcs Sierpinski 1
1
> java-introcs Sierpinski 2
2
1
1
1
> java-introcs Sierpinski 3
3
2 1 1 1
2 1 1 1
2 1 1 1
> java-introcs Sierpinski 4
4
3 2 1 1 1 2 1 1 1 2 1 1 1
3 2 1 1 1 2 1 1 1 2 1 1 1
3 2 1 1 1 2 1 1 1 2 1 1 1
> java-introcs Sierpinski 5
5
4 3 2 1 1 1 2 1 1 1 2 1 1 1
3 2 1 1 1 2 1 1 1 2 1 1 1
3 2 1 1 1 2 1 1 1 2 1 1 1
4 3 2 1 1 1 2 1 1 1 2 1 1 1
3 2 1 1 1 2 1 1 1 2 1 1 1
3 2 1 1 1 2 1 1 1 2 1 1 1
4 3 2 1 1 1 2 1 1 1 2 1 1 1
3 2 1 1 1 2 1 1 1 2 1 1 1
3 2 1 1 1 2 1 1 1 2 1 1 1
• Modify sierpinski() so that in addition to printing n, it also prints the length of the triangle to be plotted. Your function should now take two arguments: n and length. The initial call from main() should be to sierpinski(n, 0.5), since the largest Sierpinski triangle has side length 0.5. Each successive level of recursion halves the length. Ignoring whitespace, your function should produce the following output:
> java-introcs Sierpinski 0
[no output]
> java-introcs Sierpinski 1
1 0.5
> java-introcs Sierpinski 2
2 0.5
1 0.25
1 0.25
1 0.25
> java-introcs Sierpinski 3
3 0.5
2 0.25 1 0.125 1 0.125 1 0.125
2 0.25 1 0.125 1 0.125 1 0.125
2 0.25 1 0.125 1 0.125 1 0.125
> java-introcs Sierpinski 4
4 0.5
3 0.25 2 0.125 1 0.0625 1 0.0625 1 0.0625
2 0.125 1 0.0625 1 0.0625 1 0.0625
2 0.125 1 0.0625 1 0.0625 1 0.0625
3 0.25 2 0.125 1 0.0625 1 0.0625 1 0.0625
2 0.125 1 0.0625 1 0.0625 1 0.0625
2 0.125 1 0.0625 1 0.0625 1 0.0625
3 0.25 2 0.125 1 0.0625 1 0.0625 1 0.0625
2 0.125 1 0.0625 1 0.0625 1 0.0625
2 0.125 1 0.0625 1 0.0625 1 0.0625
• Modify sierpinski() so that it takes four (4) arguments (n, x, y, length) and plots a Sierpinski triangle of order n, whose largest triangle has the specified side length and bottom vertex $$(x, y)$$. Start by drawing Sierpinski triangles with pencil and paper. What values need to change between each recursive call?
• Remove all print statements before submitting to TigerFile.
#### Testing
Below are the target Sierpinski triangles for different values of n.
> java-introcs Sierpinski 1
> java-introcs Sierpinski 2
> java-introcs Sierpinski 3
### Part III - Create Your Own Art
#### Art.java
In this part you will create a program Art.java that produces a recursive drawing of your own design. This part is meant to be fun, but here are some guidelines in case you’re not so artistic.
A very good approach is to first choose a self-referential pattern as a target output. Check out the graphics exercises in Section 2.3. Here are some of our favorite student submissions from a previous year. See also the Famous Fractals in Fractals Unleashed for some ideas. Here is a list of fractals, by Hausdorff dimension. Some pictures are harder to generate than others (and some require trigonometry).
#### Requirements
1. Art.java must take one (1) integer command-line argument n that controls the depth of recursion.
2. Your drawing must stay within the drawing window when n is between 1 and 6 inclusive. (The autograder will not test values of n outside of this range.)
3. You may not change the size of the drawing window (but you may change the scale). Do not add sound.
4. Your drawing can be a geometric pattern, a random construction, or anything else that takes advantage of recursive functions.
5. Optionally, you may use the Transform2D library you implemented in Part 1. You may also define additional geometric transforms in Art.java, such as sheer, reflect across the x- or y- axis, or rotate about an arbitrary point (as opposed to the origin).
6. Your program must be organized into at least three separate functions, including main(). All functions except main() must be private.
7. For full credit, Art.java must not be something that could be easily rewritten to use loops in place of recursion, and some aspects of the recursive function-call tree (or how parameters or overlapping are used) must be distinct from the in-class examples (HTree, NestedCircles, etc.). You must do at least two of the following things to get full credit on Art.java (and doing more may yield a small amount of extra credit):
• call one or more Transform2D methods
• use different parameters than in examples: f(n, x, y, size)
• use different StdDraw methods than in examples (e.g., ellipses, arcs, text; take a look at the StdDraw API)
• have non-constant number of recursive calls per level (e.g., conditional recursion)
• have mutually recursive methods
• have multiple recursive methods
• use recursion that doesn’t always recur from level n to level n-1
• draw between recursive calls, not just before or after all recursive calls
• use recursive level for secondary purpose (e.g., level dictates color)
8. Contrast this with the examples HTree, Sierpinski, and NestedCircles, which have very similar structures to one another.
9. You will also lose points if your artwork can be created just as easily without recursion (such as Factorial.java). If the recursive function-call tree for your method is a straight line, it probably falls under this category.
10. You may use GIF, JPG, or PNG files in my artistic creation. If you do, be sure to submit them along with your other files. Make it clear in your readme.txt what part of the design is yours and what part is borrowed from the image file.
#### FAQ
The API checker says that I need to make my methods private. Use the access modifier private instead of public in the method signature. A public method can be called directly in another class; a private method cannot. The only public method that you should have in Art.java is main().
What will cause me to lose points on the artistic part? We consider three things: the structure of the code; the structure of the recursive function-call tree; and the art itself.
For example, the Quadricross looks very different from the in-class examples, but the code to generate it looks extremely similar to HTree, so it is a bad choice. On the other hand, even though the Sierpinski curve eventually generates something that looks like the Sierpinski triangle, the code is very different (probably including an angle argument in the recursive method), and so it would earn full marks.
Here is an animation we produced using student art:
### Submission
• Submit to TigerFile : Transform2D.java, Sierpinski.java, Art.java (and optional image files), and completed readme.txt and acknowledgments.txt files.
• Note that, as part of this assignment, we may anonymously publish your art. If you object, please indicate so in your readme.txt when asked. We also reserve the right to pull any art, at any time, for whatever reason -— and by submitting your assignment, you implicitly agree with this policy.
### Enrichment
Fractals in the wild. Here’s a Sierpinski triangle in polymer clay, a Sierpinski carpet cookie, a fractal pizza, and a Sierpinski hamantaschen.
Fractal dimension (optional diversion). In grade school, you learn that the dimension of a line segment is 1, the dimension of a square is 2, and the dimension of a cube is 3. But you probably didn’t learn what is really meant by the term dimension. How can we express what it means mathematically or computationally? Formally, we can define the Hausdorff dimension or similarity dimension of a self-similar figure by partitioning the figure into a number of self-similar pieces of smaller size. We define the dimension to be the $$\log$$ (# self similar pieces) / $$\log$$ (scaling factor in each spatial direction). For example, we can decompose the unit square into four smaller squares, each of side length 1/2; or we can decompose it into 25 squares, each of side length 1/5. Here, the number of self-similar pieces is 4 (or 25) and the scaling factor is 2 (or 5). Thus, the dimension of a square is 2, since $$\log (4) / \log(2) = \log (25) / \log (5) = 2$$. Furthermore, we can decompose the unit cube into 8 cubes, each of side length 1/2; or we can decompose it into 125 cubes, each of side length 1/5. Therefore, the dimension of a cube is $$\log(8) / \log (2) = \log(125) / \log(5) = 3$$.
We can also apply this definition directly to the (set of white points in) Sierpinski triangle. We can decompose the unit Sierpinski triangle into three Sierpinski triangles, each of side length 1/2. Thus, the dimension of a Sierpinski triangle is $$\log (3) / \log (2) ≈ 1.585$$. Its dimension is fractional—more than a line segment, but less than a square! With Euclidean geometry, the dimension is always an integer; with fractal geometry, it can be something in between. Fractals are similar to many physical objects; for example, the coastline of Britain resembles a fractal; its fractal dimension has been measured to be approximately 1.25.
|
2023-03-22 08:40:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3123471140861511, "perplexity": 1156.0565666147086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00354.warc.gz"}
|
https://governance.foundation/frameworks/seon
|
# Software Engineering Ontology Network
## Description
SEON: A Software Engineering Ontology Network - SEON provides a well-grounded network of SE reference ontologies, and mechanisms for deriving and incorporating new integrated domain ontologies into the network.
SEON results from various efforts on building ontologies for the Software Engineering (SE) field. Although SEON itself is a new proposal, the studies and ontologies developed along the years are important contributions for defining this network. Hence, SEON rises with three main premises:
• being based on a well-founded grounding for ontology development;
• offering mechanisms to support building and integrating new SE domain ontologies to the network; and
• promoting integration by keeping a consistent semantics for concepts and relations along the whole network.
Updated:
|
2022-10-05 02:55:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8411474227905273, "perplexity": 9745.015503764826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00331.warc.gz"}
|
http://mathhelpforum.com/differential-equations/194955-solving-dy-dx-e-x-y-print.html
|
# Solving dy/dx = e^(x+y)
• January 5th 2012, 03:14 PM
M.R
Solving dy/dx = e^(x+y)
Hi,
I am trying to solve the following differential equation:
$\frac {dy}{dx} = e^{x+y}$
Now:
$\frac {dy}{dx} = e^x e^y$
$\frac {1}{e^y} dy = e^x dx$
$\int \frac {1}{e^y} dy = \int e^x dx$
$-e^{-y} = e^x + C$
$ln(e^{-y^{-1})} = ln(e^x + C)$
$\frac{-1}{y} = x + ln(C)$
$y = \frac {1}{-x - ln(C)}$
But the answer in the book is shown as:
$e^{x+y}+Ce^y+1=0$
Where am I going wrong?
• January 5th 2012, 03:22 PM
pickslides
Re: Differential equation
Maybe you are correct, at this step $-e^{-y} = e^x + C$ , multiply both sides through by $e^{y}$
What do you get?
• January 5th 2012, 03:24 PM
ILikeSerena
Re: Differential equation
Quote:
Originally Posted by M.R
$ln(e^{-y^{-1})} = ln(e^x + C)$
$\frac{-1}{y} = x + ln(C)$
This is not right.
ln(ab) = ln(a) + ln(b)
ln(a + b) ≠ ln(a) + ln(b)
Quote:
Originally Posted by M.R
$-e^{-y} = e^x + C$
What do you get if you multiply left and right with $e^y$?
|
2016-06-25 01:24:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9140228629112244, "perplexity": 3066.2025131902055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00198-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://cs.stackexchange.com/questions/138684/eliminating-ambiguity-in-a-to-aa-mid-a-mid-a
|
# Eliminating ambiguity in $A \to AA \mid (A) \mid a$
I'm trying to solve this complier design problem related to ambiguity in CFG the given grammar is
\begin{align} &A → AA \\ &A → (A) \\ &A → a \end{align}
I was able to find that this given grammar is ambiguous but I don't know how to remove the ambiguity. How can I remove the ambiguity from the given production rules to get new productions without ambiguity?
• Have you learned any techniques for removing ambiguity? Have you tried applying them? – Yuval Filmus Apr 8 at 10:31
|
2021-05-06 22:52:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985129833221436, "perplexity": 1171.9054538123833}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988763.83/warc/CC-MAIN-20210506205251-20210506235251-00156.warc.gz"}
|
https://mathemerize.com/what-is-integration-of-sin-inverse-cos-x/
|
# What is integration of sin inverse cos x ?
## Solution :
We have, I = $$\int$$ $$sin^{-1}(cos x)$$ dx
By using integration formula, cos x = $$sin({\pi\over 2} – x)$$
I = $$\int$$ $$sin^{-1}[sin({\pi\over 2} – x)]$$ dx
I = $$\int$$ $$({\pi\over 2} – x)$$ dx
I = $${\pi\over 2}x$$ – $$x^2\over 2$$ + C
### Similar Questions
What is the integration of sin inverse root x ?
What is the integration of sin inverse x whole square ?
What is the integration of x sin inverse x dx ?
What is the integration of tan inverse root x ?
What is the integration of x tan inverse x dx ?
|
2023-03-24 00:51:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330878853797913, "perplexity": 1309.582852452306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00669.warc.gz"}
|
https://doc.sitecore.com/en/developers/100/platform-administration-and-architecture/configure-the-propertystore-cache.html?utm_medium=rss&utm_source=rss_reader
|
# Configure the PropertyStore cache
Abstract
How to change the caching for the PropertyStore.
Applies to Content Management
The size of the PropertyStore cache on a Content Management (CM) server might have an impact on the login process. You can see if this cache is running full by browsing to /sitecore/admin/cache.aspx. When the cache is full, it is cleared and this can slow down the login process afterwards. The default size of this cache is 5 MB.
The setting of the cache size is not part of the configuration but you can change the value by patching the configuration.
To set the size of the cache, for example, to 20 MB:
• Create the following patch file:
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
<sitecore>
<settings>
<setting name="Caching.DefaultPropertyCacheSize" value="20MB" />
</settings>
</sitecore>
</configuration>
|
2021-07-25 19:57:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5088786482810974, "perplexity": 2745.5199229220493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151760.94/warc/CC-MAIN-20210725174608-20210725204608-00279.warc.gz"}
|
https://optogeneticsandneuralengineeringcore.gitlab.io/ONECoreSite/projects/3DeepLabCut/
|
# 3DeepLabCut
Add another camera to DLC and take your data to 3D
How awesome is DeepLabCut??? Open-source markerless tracking of videos. So fun!!! Now let’s take that fun to 3D space!
This project demonstrates how to use DLC to generate three dimensional feature tracking data from multiple views of the same scene/behavior. Two (or more) cameras can be placed around an area of interest without the need for a fancy setup (that is, no a priori, distance, angle, or focal length needs to be precise). The code here will show how you can capture the video (with time stamped frames). You can then take a ‘wand’ (any object that has two unique points, like a marker or pen) and wave it in front of the cameras, using DLC to track the points. We then have some code to clean up the DLC data, and finally use a software package to create a calibration file. With this, you can now capture new videos of behaviors of interest, run it though DLC, and reconstruct it in 3D!!
This is written and tested only on Ubuntu. EasyWand is written for Matlab. The scripts below are written for a Jupyter notebook. We assume you know how to use Jupyter.
Unfamiliar with DeepLabCut?
# Workflow
Scripts should be called in a particular order. We provide a script to capture videos, then jump on over to DLC to track the features of the videos. We show how to get this data into a form suitable for EasyWand, then use EasyWand for calibration, and finally reconstruct the features in 3D space.
Download this zip file and extract it to some location. This has the files necessary to run these scripts. The demo data we have is too large for me to host, but if you want it, email us and we’ll figure something out.
Jupyter installs when you install Anaconda.
If you don’t have openCV installed, pop open the terminal (ctrl, alt, t) and enter
sudo apt install python3-opencv
Then start a jupyter notebook
jupyter notebook
Your favorite browser will launch. Navigate to where you extracted the 3DLC_demo folder. Open the video_capture.ipynb file.
## 1) video_capture.ipynb
SSequentially capture video footage from multiple cameras using openCV. Other software can be used to capture footage, but it is critical that each video have the same number of frames. Cameras should ideally be synchronized (or near so) to simultaneously capture frames.
This code can be used to record “wand” video and then also to record the “behavior” video you would like to analyze. Carefully set up your cameras so they are well positioned to record the behavior of interest (try to position the cameras such that each feature that you would like to analyze is visible to at least two cameras at a time). We will calibrate the cameras using a wand. The wand should be an object of a known length, preferably a length that is similar to the behavior. It should have two well defined small points on the ends (like two different color tapes wrapped around a marker). For the wand video, you will want to record the wand moving around in the overlapping field of view of all cameras. It is advised to keep both points within the view of all cameras, and try not to point directly at a camera (with both points exactly overlapping, the calibration can become confused). We can take this video, track the two points in DLC, and create a calibration between all cameras. Then, if the camera set up doesn’t have any major changes, we can then use this to get 3D tracking of new behaviors.
This code can also be used to record “behavior” video which we can then track features with DLC. Later we will use the calibration file to make take the 2D data from DLC and convert it to 3D space!
Note that there is often a sizable delay (a second or two) between the first and second frames of videos captured with this script (initialization). The exact duration of this delay can be found within the timestamps output file.
• Output Files
• a series of ‘.avi’ video files with the “wand” moving around in the overlapping field of view of the cameras
• a series of ‘.avi’ video files with the “features” from multiple camera angles
• a ‘timestamps’ csv file file that will record the time that every frame was captured from each camera. All times are relative to the moment when the first frame of the first video was captured. Rows are frames, columns are cameras.
## 2) DeepLabCut Tracking of Features in Videos
DeepLabCut will be used to track features in each set of videos. It is preferable to first train a network to track both tips of the “wand” being used. With the wand data collected, you can initiate training the network to track features in the behavior videos.
• Input Files
• wand ‘.avi’ video files for each camera angle or
• feature ‘.avi’ video files for each camera angle
• Output Files
• a ‘.csv’ file for each camera angle with wand position data or
• a ‘.csv’ file for each camera angle with data for each tracked feature
## 3) format_wand_csv.ipynb
Clean up and reformat DeepLabCut’s wand data for use in camera calibration. It’s unlikely that the wand was perfectly visible in every frame of the wand videos. This script uses DeepLabCut’s “likelihood” metric to judge whether both tips of the wand were clearly identified within each frame of the videos. If the wand is ever not visible within a frame, then that frame will be omitted from the file used for camera calibration.
Note that you have control over the “likelihood” cutoff, but a cutoff >= 0.999999 is recommended.
• Input Files
• a ‘.csv’ file for each camera angle with position data for each tracked feature
• Output Files
• a “formatted” csv file that will be passed into easyWand5
## 4) easyWand5.m
Calculates the Direct Linear Transformation (DLT) coefficients in MATLAB for the cameras based off of the formatted wand data. easyWand and it’s userguide can be found here. We would love to find a python based alternative, but for now it’s hard to beat easyWand5’s functionality.
Pull up the easyWand GUI by calling the following within MATLAB from the “demo_3d” folder.
addpath('easyWand/easyCamera')
addpath('easyWand/easyWand5')
easyWand5
• Input Files
• ‘.csv’ data files from DLC, reformated with format_wand_csv.ipynb, for each camera, from wand videos
• Output Files
• a ‘.csv’ of the DLT coefficients
## 5) reconstruct_3d.ipynb
Reconstructs 3D data for tracked features using DLT coefficients.
• Input Files
• a ‘csv’ output file from DeepLabCut for every camera
• a ‘.csv’ of the DLT coefficients
• (Optional) a timestamps ‘.csv’ file as was generated in the video_capture script. If a direct path is not given, the script will search the current directory for the most recently generated csv file with the word ‘timestamps’ in it and load the data from it. If no path was given and no timestamps files were found, the program will assume a framerate of 20 fps for the animations.
• Output Files
• Contained within a folder in the demo_3d directory, you’ll find a csv file for every 3d tracked body part along with a couple animation files (in mp4 and gif formats)
# Multiple Cameras
But are multiple cameras necessary? Maybe not. We will test a novel technology and report later this week (hopefully). It promises absolute synchrony across multiple views from a single camera. It utilizes a crazy new technology I’m calling Moment Investigation: Reach Research ORdered Synchrony (or MIRRORS (cough cough))….
# ONE Core acknowledgement
Please acknowledge the ONE Core facility in your publications. An appropriate wording would be:
“The Optogenetics and Neural Engineering (ONE) Core at the University of Colorado School of Medicine provided engineering support for this research. The ONE Core is part of the NeuroTechnology Center, funded in part by the School of Medicine and by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health under award number P30NS048154.”
© 2021. All rights reserved.
|
2022-08-15 16:47:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24572212994098663, "perplexity": 2439.8076539635576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00363.warc.gz"}
|
https://physics.stackexchange.com/questions/268421/how-much-approximately-the-pressure-would-drop
|
# How much approximately the pressure would drop?
A container made from from aluminium ($0.5~\mathrm{mm}$ thickness $10^3~\mathrm{cm^3}$) closed system , it has 50 psi (nitrogen) pressure inside at temperature $80^\circ~\mathrm C.$
Approximately :
How much pressure will it be lost after 10 years ? 0.1 psi or 1 psi or more ...
I know that Gas permeation through solid is extremely slow and negligible in most cases. It's hard to get general idea without approximate number .
• What is inside the box? If any measurable amount of gas leaks, it will be more if it's hydrogen than if it's, say, isobutane. And what is on the outside -- vacuum? Jul 16 '16 at 21:15
• inside the box is air Jul 16 '16 at 21:17
• outside is also air Jul 16 '16 at 21:18
• Any answer that you get will be unverifiable, unless you are willing to wait 10 years. If you REALLY need an answer for this, try a radioactive alpha emitter with a relatively short half life, and approximately the same density as the gas you are interested in. ANY leakage into a larger container will be immediately detectable, and the rate of leakage will give you your answer. Jul 17 '16 at 0:21
Consider Fick's second law of diffusion, in one dimension, where $u$ is the concentration of the diffusing gas (in $\mathrm{mol/m^3}$) and $D$ the diffusion coefficient:
$$\frac{\partial u}{\partial t}=D\frac{\partial u}{\partial x}$$ If we assume the concentration of gas outside the container to be much smaller than inside (a reasonable assumption), then the right hand concentration gradient can be determined from Fick's first law to be (with $\tau$ the thickness of the container walls): $$\frac{\partial u}{\partial x}= -\frac{u-u_0}{\tau}$$ Where $u_0$ is the nitrogen concentration outside the box (assumed constant). With the Ideal Gas law: $$pV=nRT$$ $$p=\frac{n}{V}RT=uRT$$ $$u=\frac{p}{RT}$$ Also: $$u_0=\frac{\chi p_a}{RT}$$ Where $p_a$ is atmospheric pressure and $\chi=0.78$ the molar fraction of nitrogen in air. With those relations and $A$ the total surface area of the container, we get: $$-\frac{V}{RT}\frac{dp}{dt}=\frac{AD}{\tau}\Big(\frac{p}{RT}-\frac{\chi p_a}{RT}\Big)$$ $$-V\frac{dp}{dt}=\frac{AD}{\tau}(p-\chi p_a)$$ $$-\frac{dp}{p-\chi p_a}=\frac{AD}{\tau V}dt$$ $$\int_{p_0}^{p(t)}\frac{dp}{p-\chi p_a}=-\frac{AD}{\tau V}\int_0^tdt$$ $$\ln\Bigg(\frac{p(t)-\chi p_a}{p_0-\chi p_a}\Bigg)=-\frac{AD}{\tau V}t$$ We'll call: $$\alpha=-\frac{AD}{\tau V}$$ $$\implies \Large{p(t)=\chi p_a+(p_0-\chi p_a)e^{-\alpha t}}$$ Note that this expression does not contain $T$. $D$ however is temperature dependent as the following figure shows, for diffusion of gases through solids:
As specific values for aluminium/nitrogen are hard to find we'll use the above values to get a crude estimate. First we need to calculate $\alpha$ from:
$A=600 \times 10^{-4}\mathrm{m^2}$
$D=10^{-12}\mathrm{m^2s^{-1}}$
$\tau=0.5\times 10^{-3}\mathrm{m}$
$V=10^{-3}\mathrm{m^3}$
$\implies \alpha \approx 10^{-7}\mathrm{s^{-1}}$
For 10 years:
$t=10\times365\times24\times60\times60=3.15 \times 10^8\mathrm{s}$
This puts the upper estimate of $-\alpha t \approx -30$ and thus $p(\text{10 years})\approx p_0e^{-30} \approx \chi p_a=11.5\mathrm{psi}$.
10 years is of course also quite a long time and $0.5\:\mathrm{mm}$ quite thin for a container holding a gas starting at $5.5\:\mathrm{Bar}$!
• but the temperature $80^\circ~\mathrm C.$ .. should not we use 10^{-16} at least ? Jul 18 '16 at 12:41
• @user118676: You can use $10^{-16}$, which of course decreases the pressure loss enormously. But most sources I've seen give values of $10^{-12}$ for gas/steel at RT. The graph above is also at higher than 80 C temperature range. W/o a more precise value for $D_{Al,N2}$ it's not possible to give a definitive answer to the question.
– Gert
Jul 18 '16 at 13:34
|
2021-09-25 07:05:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8280060887336731, "perplexity": 451.15785468148596}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057598.98/warc/CC-MAIN-20210925052020-20210925082020-00250.warc.gz"}
|
https://www.gamedev.net/forums/topic/508676-actionlistener---this-the-correct-way/
|
• 15
• 15
• 11
• 9
• 10
# [java] ActionListener - This the Correct Way?
This topic is 3468 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hello I'm just wondering if this is the correct way to deal with events? (CODE BELOW). I know this works fine as I've tested it I'm just making sure that I'm not over looking anything by doing it like this (apart from a hefty if/else if/else statement if the panel contains many controls.) Is there a better way way to deal with it? How would you do it? Thanks in advance! CODE
public class LoginPanel extends JPanel implements ActionListener{
...
....
....
}
public void actionPerformed(ActionEvent e){
}
else{
System.out.println("You pushed Quit");
}
}
...
}
##### Share on other sites
Well, whether one way is correct and another incorrect ... I will not judge since it is often dependent on the situation. In the following there are some ways shown, 1 of them is your solution.
A single self-implemented listener working on all sources, dispatching by using the actionCommand; just the same as in the OP:
public class Anythingextends Objectimplements ActionListener { public Anything() { ... button.setActionCommand("action"); button.addActionListener(this); ... ) public void actionPerformed(ActionEvent event) { if(event.getActionCommand()=="action") { ... } else if ... }}
Using the actionCommand is of advantage e.g. if you don't refer to the event sources, and/or there are several sources that can fire "action".
Instead of distpatching by command, you can use something like a mediator:
public class Anythingextends Objectimplements ActionListener { public Anything() { ... this.button.addActionListener(this); ... ) private Button button; public void actionPerformed(ActionEvent event) { if(event.getSource()==this.button) { ... } else if ... }}
Often you have the need to refer to event sources for other purposes anyway, e.g. to enable/disable then conditionally.
Next, with both of the ways above, you can hide self being an ActionListener by using an embedded listener (only shown for the 2nd variant here):
public class Anythingextends Object { public Anything() { ... this.button.addActionLIstener(this.actionListener); ... ) private Button button; private final ActionListener actionListener = new ActionListener() { public void actionPerformed(ActionEvent event) { if(event.getSource()==Anything.this.button) { ... } else if ... } }}
You can instanciate the listener in Anything's initializer or so, too, if you want a more dynamic behaviour, of course. BTW: The above is the way I prefer.
With the embedded listener pattern, you can also drop the dispatching totally, if you use several listeners:
public class Anythingextends Object { public Anything() { ... this.button1.addActionListener(this.actionListener1); this.button2.addActionListener(this.actionListener2); ... ) private Button button1; private Button button2; private final ActionListener actionListener1 = new ActionListener() { public void actionPerformed(ActionEvent event) { ... } } private final ActionListener actionListener2 = new ActionListener() { public void actionPerformed(ActionEvent event) { ... } }}
In general, using an embedded listener allows you to hide the ActionListener interface from clients of the outer class, what normally is a Good Thing.
##### Share on other sites
You sir, are a star. I owe you a frosty cold beer.
That's made a lot more sense, I like the "embedded listener pattern" that you described and showed. I think I will make use of this in control mad panels.
Thanks again.
##### Share on other sites
An additional style like the embedded listener can also be used if your listener never needs to be removed/added or accessed by the containing class like the following, which is what I normally use since listeners I write rarely ever need changing or addition/removal from a component.
You can also access variables either local or class member variables within the ActionListener although the access is noted to be slower than if the listener had more direct access to it.
...public class MyFrame extends JFrame { private String strHello = "Hello World"; private JButton butHello = new JButton("Hello World!"); public MyFrame() { super(strHello); this.add(butHello); butHello.addActionListener( new ActionListener() { public void actionPerformed(ActionEvent e) { System.out.println(strHello); //MyFrame.this is anonymous classes little secret sssshh JOptionPane.showMessageDialog(MyFrame.this, strHello); } }); ... } ...}
Good luck natebuckley!
|
2018-03-24 10:27:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19024121761322021, "perplexity": 7169.015793480335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650188.31/warc/CC-MAIN-20180324093251-20180324113251-00487.warc.gz"}
|
http://www.tobilehman.com/
|
tobilehman.com: a blog on computing, structure and math
Analyzing Bash History and Fixing Typos
At the command line, I frequently type things too fast, and typos abound. A single character can mean the difference between showing documentation and deleting files (rm vs ri), so autocorrect is definitely a bad idea in this context.
Instead of a generic autocorrect, a better idea is to find the most common mistakes. To do so, I used frequency analysis like in this post to narrow down what I use most at the shell:
Notice that gi ts is extremely common, I meant to type git s all those 376 times. As a solution, I could just alias it, but I would prefer a more general solution that would handle gi tdiff and gi tb as git diff and git b respectively.
I made the following script called ~/bin/gi:
So that gi ts is no longer a mistake, it means what I meant it to mean. This saves me a few keystrokes, and it is a good example of why scripts in your path are generally better than aliases, since you can have logic in them.
How Many Possible Flags Are There?
I have been thinking about Mars a lot more lately, and about possible colonization. The Mars One project is a non-governmental not-for-profit organization that is looking to send groups of four people, independent of nationality, to Mars in 2023.
One thing that came to mind was independence, just as the early North American settlers declared independence from Great Britain, I think that Martian settlers would eventually declare independence from the countries of Earth, provided they had a sustainable, self-reliant colony.
As a side effect, the Martian settlers would probably choose a new flag, and then the math geek in me wondered how far this could go, how many different flags are possible? As humanity grows, evolves and expands, assuming that each nation that emerged had a flag, how many unique flags could possibly be created?
If we allow for any arbitrary size and aspect ratio, the number is infinite. However, most flags have the same aspect ratio, and their implementation as cloth is usually in fixed sizes.
Note that flags are physically made of thread, we make the simplifying assumption that all flags are made of the same width thread, and that the thread is evenly spaced.
Flags have some terminology, so a few definitions are in order:
• Hoist is the width of the flag (vertical direction)
• Fly is the length of the flag (horizontal direction)
• Vexillology is the “scientific study of the history, symbolism and usage of flags [1]
We will call H the number of threads in the vertical direction, and F the number of threads in the horizontal direction.
Assuming threads are evenly spaced, we can imagine the H*F crossing points on a grid, as in the image below:
Each crossing point is either above or below, so there are 2 distinct choices for each of the H*F crossing points, that means that there are 2HF possible flags, ignoring color.
If we now consider the role of color, imagine that each of the H+F threads could have any of C distinct colors, then there would be C(H+F) possible color combinations.
Since the under/over configuration of the points is independent from the color choices, it follows from the combinatorial principle of products that there are:
$2^{HF}C^{(H+F)}$
possible flags. This is the general solution, now let’s find some real-world data and get some more constraints so we can compute some numbers. (Everything following this formula is just finding the values of H and F, so if you don’t care about the research, simplifying assumptions and data-wrangling, you can skip to the end)
Typically there are fixed aspect ratios, and some correlation exists between the height of the flagpole and the hoist/fly.
Height of the flagpole versus the fly and hoist
Using the United States’ Deparment of Interior specifications as a model, we can use the following data to get an approximate relation between the height of a ground flag and the hoist/fly of the flag:
Ground Flagpoles [2]
Since the aspect ratio is approximately constant (as we would expect), the problem of finding the relation between height, hoist and fly reduces to a one-dimensional linear regression. We now try to find fly as a function of height, which is in the y direction:
$f(y) = a + by$
Using the least squares method, the values of a and b are found exactly, the above formula becomes:
$f(y) = 0.3105y + (-3.31)$
So given a height y, the fly of the flag should be about (0.31)y - 3.31(ft).
Aspect ratios
To find the aspect ratios of the current flags of Earth, I found this on wikipedia. I went to the edit view and then copied the wiki source. On Mac OS X, the pbpaste command writes the contents of the clipboard to standard out on the command line. On GNU/Linux under Xorg, you can use xclip -o to achieve the same thing.
So I played around with the data and came up with this one-liner:
Most countries use 1.5, 2 and 1.667. As fractions, these are 3/2, 2/1, 5/3, respectively. Also, one country (Togo in Africa) uses 1.618 ≈ φ, the Golden Ratio!
Since the overwhelming majority of flags use the 1.5 and 2 ratios, let us assume for this problem that these are the only ratios that will be used. Since the United States flag uses the 1.9 ratio, we can approximate it as 2. Just for reference, Russia and China use 1.5 and U.S.A. uses 1.9, and the U.K. uses 2.
Colonizers on other planets will initially be close to the ground and spread out. Since residential flags typically range between 15 and 20 feet, we will be safe and assume that the inital flag is 15 feet tall. From our formula, this means that the Fly will be (.3)(15ft) - (3.31ft) = 1.19 ft.
To find the values of H and F, we need to know the width and spacing of the thread, a common size of polyester thread for making flags is Size 69, which has a diameter of 0.2921 mm. So, assuming that the threads are all adjacent, the number of threads in the Fly direction will be (1.19ft)/(0.2921 mm) ≈ 1241.
The number of threads in the Hoist direction (assuming a ratio of 1.5) is 1241*(1.5) ≈ 1861
Number of Colors Distinguishable by the Human Eye
This number is about 10,000,000 [4]
The number of distinct, 15 foot, 3/2 flags made of size 69 polyester thread is
$2^{1861\times1241}(10,000,000)^{1861+1241} \approx 1.19 \times 10^{716943}$
This is a 716,944 digit number, the number of possible flags is so much higher than the number of atoms in the observable Universe that it isn’t even plausible to assume that all of them could ever be exhausted.
White House Open Data Policy
Just yesterday, President Obama signed an executive order that requires government agencies to publish their data in “open, machine readable formats”:
the default state of new and modernized Government information resources shall be open and machine readable.
I have a hard time imagining better uses of the President’s dictator-like power than this.
Personally, I don’t believe the President (or any individual for that matter) should have the ability to make Laws without first submitting them to a review process and subsequently a vote. Executive Orders are problematic because they bypass Congress, it is a flaw in an otherwise reasonably balanced system:
However, the consequences of this particular executive order are in our favor, so this is a good thing, despite the fact that it came about because of a bad mechanism. Forcing the Bureaus to open up their data for public consumption enables individuals and groups outside the government to do things with that data that most of the bureaucrats could never have imagined.
All of this is great, provided the data are accurate, it is entirely possible that data could be ‘fudged’, ‘massaged’ or just plain made up. So in addition to the newly hackable government data, there should be a more active skepticism about the accuracy of that data. For example, if the Department of Homeland Security is reporting that there are cyber attacks coming from China, that data should be cross-checked with that of ISPs to ensure that there is a legitimate threat before any laws are passed or executive orders signed.
I think this is a good thing that came about for the wrong reasons, but the consequences are more important than the intentions, because the consequences really happen, intentions are just in the mind.
Fixed Point in Ruby Hash Function
A fixed point of a function $$f:S \to S$$ is an element $$x \in S$$ such that
$f(x) = x$
That is, $$f$$ is a no-op on $$x$$. Some examples:
(Check out that link above, fmota wrote about how they discovered a fixed point in the base64 encoding function, it’s very interesting)
Ruby’s Fixnum class has an instance method called hash. It is the hash function used by the Hash class to locate the value.
One thing to note that is interesting,
The integer literal 42 is an instance of Ruby’s Fixnum class, which is exactly the type that is returned by Fixnum#hash. So, if we let N be the set of all Fixnum values, and h be the hash function, then the function
$h: N \to N$
Does h have a fixed point? Let’s find out, the generic way to find a fixed point is to apply the function over and over and see if any of the iterates are the same:
$x, f(x), f(f(x)), f(f(f(x))), f(f(f(f(x)))), …$
In Ruby, we could start with a value n and loop until the next step is the same as the current step:
This code terminates in 62 steps, here is the output:
So the integer 4237831107247477683 is a fixed point of Fixnum#hash, that means that in the implementation of Hash, the value 4237831107247477683 would have itself as a key.
There are more examples (play with the code yourself!), and I would like to look deeper into why this hash function has a fixed point.
Visualization of SICP Exercise 1.14
I am currently working my way the Structure and Interpretation of Computer Programs and I’ve skipped past exercise 1.14, and come back to it after a bit of thinking, here’s the problem, and then the exercise.
The Problem
How many ways are there to make change of a given amount a with the following kinds of coins?
• pennies
• nickels
• dimes
• quarters
• half-dollars
There is a recursive solution to this combinatorial problem, which can readily be made into executable code in Scheme, this kind of solution is very standard in enumerative combinatorics:
The number of ways to change amount a using n kinds of coins equals:
• the number of ways to change amount a using all but the first kind of coin, plus
• the number of ways to change amount a - d using all n kinds of coins, where d is the denomination of the first kind of coin
Note that those two items are mutually exclusive and exhaustive conditions, so the result can be calculated by simply adding the two values.
In scheme, the above list could be transliterated as:
Where (cc a n) computes the number of ways of changing amount a with n kinds of coins.
The full code for the count-change procedure can be found here.
The Exercise
With the count-change procedure at hand, Exercise 1.14 is to “draw the tree illustrating the process generated by the count-change procedure in making change for 11 cents.”
The Solution
The count-change procedure uses the (cc a n) procedure where n = 5, and the cc procedure naturally gives rise to a binary tree that locally looks like this:
I prefer to make the computer go through all the steps and produce an image for me, so I took a break on 1.14 and thought about it for a while.
To represent the tree, I used the graph-description language DOT
To generate the tree, I started by adding a print statement around the recursion steps, the problem with that is that there can be distinct nodes that happen to have the same argument values, that is, the node in the tree may be labeled (cc a n), but there may also be multiple nodes with the same a and n values. To avoid this, each node must be given a unique id, and then be displayed with the (cc a n) label.
One way to label a binary tree’s nodes is to make the id be a map of the location of the node in the tree. For example, if a node of the tree has id x, then the root’s children will be xl and xr, respectively, where l stands for ‘left’ and r stands for ‘right’.
If the root’s id is s, then a typical node would be labeled something like sllrrl. Starting at the root, you can find the node by going left two times, right two times, and then left.
Here is the full source of the tree-generating code cc-graph:
Finally, the output of running (cc-graph 11 5), then piping the results into GraphViz gives the desired tree:
I love this way of visualizing recursion, you can see how the problem is reduced into simpler sub-problems, and that there is a distinct ‘shape’ to the computation.
There are more than 100 edges in that tree, I would not have wanted to do that by hand, all for a measley value of four.
The final value of (cc 11 5) is 4, that is, there are 4 ways of making change for 11 cents. Unfortunately, this solution doesn’t say what exact combinations of coins, only that there are four.
Just thinking about it, you can make 11 cents with
• 11 pennies
• 6 pennies, 1 nickel
• 1 penny, 2 nickels
• 1 penny, 1 dime
I would like to generalize cc-graph so that I can get a visualization of any recursive function in Scheme, this will take more knowledge of the language and it’s introspective features, stay tuned!
Lies Damned Lies and Statistics
There is a quote, usually attributed to Mark Twain that goes something like:
“There are three kinds lies. Lies, Damned Lies, and Statistics.”
My interpretation of this is that statistics are supposed to be the worst kind of lie, or that the worst kinds of lies use statistics.
The thing that bothers me most about this quote (and the innumerable minor variations of it that get repeated) is that the word ‘statistics’ comes at the end.
Why does that matter? Notice that the list is presented as a sequence, an increasing sequence of damned-ness, and the presence of the word ‘statistics’ at the end is supposed to imply that it is ‘more damned’ than damned lies.
This interpretation bothers me because the implied damned-ness is based on the initial correlation, and that correlation is only based on two data points. The quote depends on a misunderstanding of statistics. Anyone who has studied a little bit of statistics will know not to trust an inference based on a correlation in a data set of only two points!
Hooking Jenkins Up to a Computer-controlled Light Switch
About a week ago I wrote about how to hook up a light switch to a raspberry pi. Having a computer-controlled light switch is nice, but the novelty wears off pretty quickly. The next question that arises usually is how can I make this useful?
At work, our continuous integration server, which runs Jenkins, lets us know when one of the team members has broken the build. To make sure that we get the memo promptly so we can commence with the public shaming, we use tools that change color to indicate the current test status.
The problem with our current way of doing things is that there is no sound, and it requires that someone be at their computer. To remedy this situation, we wired up physical lights to a raspberry pi running a Debian GNU/Linux variant, and wrote this script to toggle the lights.
• On means Passing (all jobs passed)
• Off means Failing (at least one job failed)
The data flows from Jenkins to the Raspberry-Pi as follows:
gpio stands for (General Purpose Input/Output), the utility is part of the wiringPi package
NOTE: You need to set the following environment variables:
• JENKINS_USERNAME
• JENKINS_PASSWORD
• JENKINS_HOSTNAME
Dependencies:
This script runs on the raspberry pi itself.
Unmarshalling a List of JSON Objects of Different Type in Go
This post started with mattyw’s blog post Using go to unmarshal json lists with multiple types
To summarize the article, we are given a JSON string of the form:
And our goal is to unmarshal it into a Go data structure. The article goes into more detail, and two solutions were proposed. A commenter came up with a third solution, and another commenter dustin proposed using his library called jsonpointer, which operates on the raw byte array of the json string, instead of unmarshalling first and then traversing the data structure.
I used Dustin’s library, and to great avail, the only gotcha was that json strings were returned with the double quotes in them and some trailing spaces, but I made a little function that returned a slice of the original bytes:
Here is the algorithm:
The full source code can be found here
Make a Computer-controlled Light Switch With a Raspberry Pi
To build a computer-controlled light switch, you will need:
The powerswitch tail looks like an extension cord with some holes in it to wire it into your own circuit. Connect the powerswitch to the raspberry pi as in the image below (on is connected to pin 23):
Then, the following python program will allow you to type ./switch on or ./switch off from the command line as root.
To run this, carefully plug in a lamp (or other appliance that uses a standard 120V U.S. outlet), then
Here is a video of the light being switched off and then back on, not very exciting, but it works:
This on it’s own is not very useful or amusing, but this can easily be tied together with any API or command line utility. For example, I plan to connect this to our continuous integration server at work so that every time the tests fail, the switch turns some lights off, this could be achieved with a cron job, or perhaps a hook on Jenkins that sends a signal to the raspberry pi, there are so many possbilities.
Swap Values in C Without Intermediate Variable
Using the following properties of the XOR function:
• Associativity
$(a \oplus b) \oplus c = a \oplus (b \oplus c)$
• Commutativity
$a \oplus b = b \oplus a$
• Identity
$a \oplus 0 = a$
• Self-Inverse
$a \oplus a = 0$
As a bit of trivia, note that all n-bit integers form an Abelian Group under XOR. The proof of which can be found by using the obvious isomorphism of n-bit integers with {0,1}n under addition modulo 2. Note that addition modulo 2 is equivalent to bitwise XOR.
So, using the C programming language, we can use the convenient ^= operator as a way to swap the values of a and b without using an intermediate variable.
Here is a full working program that implements this operation using a C macro:
|
2013-06-19 12:55:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47732052206993103, "perplexity": 988.4597701060084}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00091-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://rwcseniorsoftball.org/anis-cheurfa-wkndju/molarity-of-hcl-0daa4c
|
Determination of molarity of 37% HCL V/V 37 ml of solute/100 ml of solution. Molar mass of HCl = 36.46094 g/mol This compound is also known as Hydrochloric Acid.. 190 grams HCl x (1 mol HCl / 36.46 grams HCl) = 5.21 mols HCl. Now we can convert our 190 grams to # mols. Depends what you mean 10%. So, we have 5.21 mols of HCl in every liter of our stock solution. What volume of a 0.20-M K 2 SO 4 solution contains 57 g of K 2 SO 4? How many mL of 11.9 M HCl would be required to make 250 mL of 2.00 M HCl? the molar concentration, describes the amount of moles in a given volume of solution. Three things requires in order to calculate molarity of concentrated solution in our case it is concentrated Hydrochloric Acid(HCl): Molecular Weight : 36.46 gm/mole Specific Gravity : 1.18 Percentage of Purity : 35.4% (Convert into decimal divide it by 100. Your results have been calculated! Convert grams HCl to moles or moles HCl to grams. If 60.0 mL of 5.0 M HCl is used to make the desired solution, the amount of water needed to properly dilute the solution to the correct molarity and volume can be calculated: 150.0 mL – 60.0 mL = 90.0 mL. Enter the percentage concentration of your solution or the molarity of your solution. Concentrated Reagents. Specific gravity: 1.19 g/ml ... Hydrochloric acid in a 0.1 to 0.2 normal solution IV is safe and effective but must be given through a central catheter because it is hyperosmotic and scleroses peripheral veins. Find name, molecular formula, strength, and reagent volume for your dilution. Calculate the molarity of a solution made by diluting $125\rm~ ml$ of concentrated $\ce{HCl}$ with water to a total volume of $2\rm~ L$. Molarity of Concentrated Reagents With tabulated dilutions to make 1 Molar Solutions of common reagents. ››HCl molecular weight. Since molarity is expressed in terms of mols per liter, our molarity is 5.21. We usually use units like 1 mol/L (moles per liter) = 1 mol/dm³ (moles per cubic decimetre) = 1 M (molar). Concentrated hydrochloric acid is $37\% \ce{HCl}$ by mass and has a density of $1.2\rm~\frac{g}{ml}$. HCl diluted to 100ml with water. What is the molarity of the solution? The solution contained 868.8 g of HCl. Molarity (M) Normality (N) Volume (mL) ... Hydrochloric acid 32% : 1.16 10.2 10.2 98.0 98.0. Looking at the periodic table, we see that the molar mass of HCl is 36.46 grams per mole. Hydrochloric Acid. 35.4/100=0.354) Please note that all of the above information you can find on packing lable of solution. Sigma Aldrich states that its concentrated HCl is 12.1M. Our common acids and bases concentration reference chart allows you to easily prepare chemicals in a 1 Normal solution. An experiment in a general chemistry laboratory calls for a 2.00-M solution of HCl. The molarity, A.K.A. M = 5.21 Molecular weight calculation: 1.00794 + 35.453 10% volume by volume (v/v) is 10ml of conc. HCL, 37% – 12.2 Molar Strength = 36.5-38%, Density = 1.185, Molecular Weight = 36.5 1 liter = 1185 gm = 444 gm HCl (@37.5%) = 12.2 moles (range 11.85 – 12.34) Hydrochloric acid is corrosive to the eyes, skin, and mucous membranes. Acute (short-term) inhalation exposure may cause eye, nose, and respiratory tract irritation and inflammation and pulmonary edema in humans. HCL - 37% v/v. Density. V 1 = 60.0 mL of 5.0 M HCl. Calls for a 2.00-M solution of HCl in every liter of our stock solution 98.0 98.0 HCl... In terms of mols per liter, our molarity is expressed in terms molarity of hcl mols per liter, our is. Note that all of the above information you can find on packing lable of solution dilutions to make molar! ) volume ( v/v ) is 10ml of conc or the molarity of 37 % HCl v/v 37 of. Name, molecular formula, strength, and respiratory tract irritation and inflammation and pulmonary edema in.! Solution of HCl = 36.46094 g/mol This compound is also known as Hydrochloric is! 4 solution contains 57 g of K 2 SO 4, we 5.21. Solution or the molarity of your solution in terms of mols per liter, our molarity is 5.21 molar. 4 solution contains 57 g of K 2 SO 4 solution contains 57 g K... A general chemistry laboratory calls for a 2.00-M solution of HCl in every of... 35.4/100=0.354 ) Please note that all of the above information you can on...... Hydrochloric acid %: 1.16 10.2 10.2 98.0 98.0 also known as acid. %: 1.16 10.2 10.2 98.0 98.0 32 %: 1.16 10.2 10.2 98.0 98.0 what you mean %. Convert our 190 grams HCl x ( 1 mol HCl / 36.46 grams HCl ) = 5.21 mols of in! Concentration of your solution or the molarity of 37 % HCl v/v mL. 1 mol HCl / 36.46 grams per mole 10ml of conc volume by (... Moles or moles HCl to moles or moles HCl to moles or moles HCl to moles moles... To the eyes, skin, and reagent volume for your dilution and respiratory tract irritation and inflammation and edema! An experiment in a 1 Normal solution of K 2 SO 4, formula... Chemistry laboratory calls for a 2.00-M solution of HCl in every liter of our stock solution 1 solution.: 1.16 10.2 10.2 98.0 98.0 g of K 2 SO 4 liter, our molarity is 5.21 2.00... Edema in humans ( N ) volume ( mL )... Hydrochloric acid is corrosive to the,! Determination of molarity of 37 % HCl v/v 37 mL of solution 35.4/100=0.354 ) Please note all. For a 2.00-M solution of HCl = 36.46094 g/mol This compound is also known Hydrochloric..., we see that the molar mass of HCl determination of molarity of your solution or the molarity of solution! Your solution or the molarity of 37 % HCl v/v 37 mL of 2.00 M.! %: 1.16 10.2 10.2 98.0 98.0 table, we see that the molar mass HCl. Expressed in terms of mols per liter, our molarity is expressed in of! 35.4/100=0.354 ) Please note that all of the above information you can on. A 2.00-M solution of HCl is 36.46 grams per mole describes the amount moles! Depends what you mean 10 % Normal solution molarity is 5.21 name, molecular formula, strength, mucous! = 5.21 Depends what you mean 10 % of common Reagents the amount of in... ) Normality ( N ) volume ( mL )... Hydrochloric acid is to. = 60.0 mL of 5.0 M HCl you to easily prepare chemicals in a chemistry. 37 mL of solute/100 mL of 2.00 M HCl would be required to make molar! Normal molarity of hcl enter the percentage concentration of your solution experiment in a given volume of....... Hydrochloric acid 32 %: 1.16 10.2 10.2 98.0 98.0 allows you to easily prepare in. Of 2.00 M HCl would be required to make 250 mL of 5.0 HCl! Of our stock solution HCl is 36.46 grams HCl ) = 5.21 Depends you... Molarity of your solution or the molarity of concentrated Reagents With tabulated dilutions to make 1 molarity of hcl Solutions of Reagents... For your dilution expressed in terms of mols per liter, our molarity is expressed in of! %: 1.16 10.2 10.2 98.0 98.0 1 mol HCl / 36.46 grams per.! Volume ( v/v ) is 10ml of conc prepare chemicals in a general chemistry laboratory calls for a solution! Laboratory calls for a 2.00-M solution of HCl = 36.46094 g/mol This compound is known. Reagent volume for your dilution the percentage concentration of your solution see that the molar concentration, describes amount! Moles or moles HCl to grams bases concentration reference chart allows you to easily prepare chemicals a! Also known as Hydrochloric acid 190 grams HCl ) = 5.21 Depends what you 10... 1.00794 + 35.453 V 1 = 60.0 mL of 2.00 M HCl is 12.1M that! # mols is 12.1M per mole known as Hydrochloric acid 32 %: 1.16 10.2... Find on packing lable of solution of our stock solution molarity ( M ) Normality N! We have 5.21 mols of HCl is 12.1M describes the amount of in... %: 1.16 10.2 10.2 98.0 98.0 % HCl v/v 37 mL of solute/100 mL of solution concentrated! Your solution or the molarity of your solution or the molarity of concentrated Reagents With tabulated to. ) inhalation exposure may cause eye, nose, and mucous membranes your.! N ) volume ( mL )... Hydrochloric acid is corrosive to the molarity of hcl, skin, and volume. To # mols, our molarity is expressed in terms of mols per liter, our is... 250 mL of solution stock solution is 5.21 10.2 10.2 98.0 98.0 molar concentration describes! And mucous membranes liter of our stock solution Hydrochloric acid 5.21 mols HCl SO, we that! Mean 10 % common acids and bases concentration reference chart allows you easily... Solution of HCl = 36.46094 g/mol This compound is also known as Hydrochloric acid 32 % 1.16... Describes the amount of moles in a 1 Normal solution per liter, our molarity 5.21! Inhalation exposure may cause eye, nose, and reagent volume for your dilution Solutions of common Reagents or... Of conc, molecular formula, strength, and reagent volume for your dilution convert our 190 grams HCl =. A 1 Normal solution Reagents With tabulated dilutions to make 1 molar Solutions of Reagents! Volume for your dilution ( mL )... Hydrochloric acid we see that the molar mass HCl. 36.46 grams per mole, describes the amount of moles in a 1 Normal solution 35.453 1... %: 1.16 10.2 10.2 98.0 98.0 nose, and mucous membranes by volume ( v/v ) is of. In a 1 Normal solution solute/100 mL of 5.0 M HCl information you can on! Experiment in a 1 Normal solution table, we have 5.21 mols of HCl in liter. Looking at the periodic table, we have 5.21 mols HCl ) exposure! Hcl v/v 37 mL of solute/100 mL of solute/100 mL of solute/100 mL of 2.00 M HCl be! ( v/v ) is 10ml of conc the molarity of concentrated Reagents With tabulated dilutions to 250. Of conc the above information you can find on packing lable of solution ( mL ) Hydrochloric. That the molar mass of HCl is 12.1M known as Hydrochloric acid 32 %: 1.16 10.2. Looking at the periodic table, we have 5.21 mols of HCl that all of the information... Hcl to grams 57 g of K 2 SO 4 solution contains g. Per liter, our molarity is 5.21 K 2 SO 4 every of. Dilutions to make 1 molar Solutions of common Reagents lable of solution (! Known as Hydrochloric acid M ) Normality ( N ) volume ( v/v ) is of. Our stock solution in a given volume of solution a general chemistry laboratory calls for 2.00-M! ( v/v ) is 10ml of conc M HCl would be required make... ) = 5.21 mols HCl: 1.00794 + 35.453 V 1 = 60.0 mL of solution laboratory. And pulmonary edema in humans SO, we have 5.21 mols HCl HCl ) = 5.21 mols HCl 1.16 10.2., strength, and respiratory tract irritation and inflammation and pulmonary edema in humans 60.0 of. Enter the percentage concentration of your solution volume for your dilution the periodic,! To easily prepare chemicals in a 1 Normal solution, our molarity is expressed in terms mols. In terms of mols per liter, our molarity is expressed in terms of mols molarity of hcl liter our., nose, and mucous membranes our stock solution = 5.21 mols HCl 1.16 10.2 10.2 98.0... Expressed in terms of mols per liter, our molarity is 5.21 lable... Make 1 molar Solutions of common Reagents and respiratory tract irritation and inflammation and pulmonary edema humans... Amount of moles in a given volume of a 0.20-M K 2 SO 4 of the above you! )... Hydrochloric acid 32 %: 1.16 10.2 10.2 98.0 98.0 to # mols for! On packing lable of solution find name, molecular formula, strength, and reagent volume your... 1 molar Solutions of common Reagents name, molecular formula, strength, and respiratory tract irritation and and... ) is 10ml of conc our common acids and bases concentration reference chart you... Many mL of solution or the molarity of 37 % HCl v/v 37 mL of solution is.. Your solution or the molarity of 37 % HCl v/v 37 mL of 5.0 M HCl Hydrochloric..... Laboratory calls for a 2.00-M solution of HCl that all of the above information you can find on lable! Can convert our 190 grams HCl x ( 1 mol HCl / 36.46 grams HCl (! Mols per liter, our molarity is 5.21 convert grams HCl to moles or HCl.
|
2021-10-21 17:21:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6968318819999695, "perplexity": 6697.16244747001}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00461.warc.gz"}
|
https://daviddalpiaz.github.io/stat400sp18/homework/hw01-assign.html
|
## Exercise 1
(a) Evaluate the following integral. Do not use a calculator or computer, except to check your work.
$\int_{0}^{\infty}x e^{-2x}dx$
(b) Evaluate the following integral. Do not use a calculator or computer, except to check your work.
$\int_{0}^{\infty}x e^{-x^2}dx$
## Exercise 2
Find the value $$c$$ such that
$\iint\limits_A cx^2y^3 dydx = 1$
where $$A = \{ (x,y) : 0 < x < 1, \ 0 < y < \sqrt{x} \}$$. Do not use a calculator or computer, except to check your work.
## Exercise 3
Suppose $$S = \{2, 3, 4, 5, \ldots \}$$ and
$P(k) = c \cdot \frac{2^k}{k!}, \quad k = 2, 3, 4, 5, \ldots$
Find the value of $$c$$ that makes this a valid probability distribution.
## Exercise 4
Suppose $$S = \{2, 3, 4, 5, \ldots \}$$ and
$P(k) = \frac{6}{3^k}, \quad k = 2, 3, 4, 5, \ldots$
Find $$P(\text{outcome is greater than 3})$$.
## Exercise 5
Suppose $$P(A) = 0.4$$, $$P(B^\prime) = 0.3$$, and $$P(A \cap B^\prime) = 0.1$$.
(a) Find $$P(A \cup B)$$.
(b) Find $$P(B^\prime \mid A)$$.
(c) Find $$P(B \mid A^\prime)$$.
## Exercise 6
Suppose:
• $$P(A) = 0.6$$
• $$P(B) = 0.5$$
• $$P(C) = 0.4$$
• $$P(A \cap B) = 0.3$$
• $$P(A \cap C) = 0.2$$
• $$P(B \cap C) = 0.2$$
• $$P(A \cap B \cap C) = 0.1$$
(a) Find $$P((A \cup B) \cap C^\prime)$$.
(b) Find $$P(A \cup (B \cap C))$$.
|
2020-01-17 18:22:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.880803108215332, "perplexity": 523.3163977472921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00450.warc.gz"}
|
http://physics.stackexchange.com/tags/metric-tensor/new
|
Tag Info
1
I have spent at least 5 minutes decoding what the question could be. Finally, I realized that the question says "When $\mu=1,2$..." and it writes an expression in which sometimes the value $1$ is substituted for $\mu$ in the first expression (not "equation"), sometimes the value $2$. You can't have both of them. If the free index $\mu=1$, then it cannot be ...
2
$$\nabla_{\mu}\left[U,V\right]_{\nu} = \nabla_{\mu}\left(g_{\nu\lambda}\left[U,V\right]^{\lambda}\right)$$ You have two terms inside the parentheses, and you have to apply the derivative to both of them. Myself, I'd just remember that: $$\left[U,V\right]^{a} = U^{b}\nabla_{b}V^{a} - V^{b}\nabla_{b}U^{a}$$, so, we have: \begin{align} ... 1 This is my first problem, as the modulus of a vector shouldn't be negative. First, while there are many useful properties of introductory linear algebra you should keep in mind with GR, thinking in Cartesian terms with positive definite matrices simply has to go. Vectors in relativity can very much have negative norm. Even though it's not often done in ... 5 There is theory that light cone shape does not depend on the reference frame in which it is viewed. So why we draw light cones near black hole differently? In general relativity, frames of reference are local, not global. Each of the light cones in your diagram corresponds to a certain local frame of reference. An observer using that frame of reference ... 1 Light travels along paths with a metric interval of zero. In flat spacetime this would be drawn as a light cone with a 45 degree opening angle in a standard Minkowski space time diagram.Things get a bit weirder in GR when spacetime is curved by mass/energy. In GR, the concept of an invariant speed of light only applies locally in non-accelerating frames of ... 5 Let's start at the beginning: The setting for relativity - be it special or general - is that spacetime is a manifold \mathcal{M}, i.e. something that is locally homeomorphic to Cartesian space \mathbb{R}^n (n = 4 in the case of relativity), but not globally. Such manifolds possess a tangent space T_p\mathcal{M} at every point, which is where the ... 4 Congratulations, you made me look into this for the last hour! And, unfortunately, I believe the answer is: Nope We are looking for a Ricci-flat Riemannian symmetric space, since your isometry group is a Lie group. I spent some time trying to construct the Ricci-flat manifold from the irreducible symmetric spaces given there, but couldn't figure out a good ... 6 Your second method is correct. To compare, say, the magnetic field with what you find in Jackson, you really need to realize that there's an assumption that you have unit basis vectors there, and that the cross product is actually a hodge dual (which will invoke factors of the square root of the determinant of the metric). These will make direct ... 0 The second form is the way in which the metric was written in the age of Kaluza and Klein. Why? out of embarrassment. If you keep the scalar field when considering the action you get an scalar. That was an undesired feature those times and that's why they hid it making it constant (actually they made it equal to 1). Now, is there a reason why the first is ... 1 Notation: I will use overdot for differentiation with respect to \tau, overtilde for partial differentiation with respect to x^0 = t, and prime for partial differentiation with respect to x^1 = r. (Edit: removed overloading of \lambda, sorry.) I assumed a general \nu = \nu(t,r); reading the question more carefully, they're functions of r only, ... 2 I don't think performing a Lorentz rotation/translation will get you anywhere. T_{\mu\nu} and F_{\mu\nu} being tensors automatically makes them frame independent so performing a Lorentz transformation gives you back the same tensor-equation by defininion (though with different components). I will assume that you mean having a large scale EM field ... 3 The formulation you seek is gauge theory. It is not completely analogous to changing the metric of spacetime, but many similarities can be seen. In this, we take as our starting point a certain gauge group G (In the case of EM, \mathrm{U}(1)), which will induce symmetries of our theory, just as the Lorentz group of special relativity is the symmetry of ... 3 I guess you are asking about the difference between distances in Euclidean and Minkowskii space. In a "Euclidean spacetime diagram" the distances ds^2_E=c^2dt^2+dx^2 would correspond to the lines you draw on the diagram. In Minkowskii space, the lines you draw on the diagram might correspond to particle paths, but they do not correspond to the interval ... 2 Tensors in abstract mathematics are just functions with linear arguments. In abstract index notation, the placement of indices--up vs. down--tells you whether that particular argument should be a vector or covector. For example:T^{\mu \nu} \equiv T(\text{covector}, \text{covector}) Since both the indices are up, it means both the first and second ...
Top 50 recent answers are included
|
2014-08-01 08:24:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8450783491134644, "perplexity": 287.7762378981391}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274866.27/warc/CC-MAIN-20140728011754-00454-ip-10-146-231-18.ec2.internal.warc.gz"}
|
http://mkweb.bcgsc.ca/pi/pi.approximation.day/
|
Mad about you, orchestrally.feel the vibe, feel the terror, feel the painmore quotes
circles: fun
Visualizaiton workshop at UBC B.I.G. Research Day. 11 May 2016
visualization + design
Typography geek? If you like the geometry and mathematics of these posters, you may enjoy something more lettered. Visions of type: Type Peep Show: The Private Curves of Letters posters.
$pi$ Approximation Day 2014 Art Posters
Support Ellie Balk's Kickstarter community math mural project in which Brooklyn students learn math and art to visualize $pi$.
2013 $pi$ day
2014 $pi$ day
2015 $pi$ day
2014 $pi$ approx day
Circular $pi$ art
The never-repeating digits of $pi$ can be approximated by $22/7 = 3.142857$ to within 0.04%. These pages artistically and mathematically explore rational approximations to $pi$. This 22/7 ratio is celebrated each year on July 22nd. If you like hand waving or back-of-envelope mathematics, this day is for you: $pi$ approximation day!
Want more math + art? Discover the Accidental Similarity Number. Find humor in my poster of the first 2,000 4s of $pi$.
getting it mostly right
Curiously, the 22/7 rational approximation of $pi$ is more accurate (0.04%) than using the first three digits $3.14$, which are accurate to 0.05%.
It seems that $pi$ Approximation Day is 20% more accurate! And therefore worth celebrating.
art of $pi$ rational approximation
The poster shows the accuracy of 10,000 rational approximations of $pi$ for each $m/n$ and $m=1..10,000$. Read about the details of the method.
Pi Approximation Day Art Poster | July 22nd is Pi Approximation Day. Celebrate with this post-modern poster. (PNG, BUY ARTWORK)
VIEW ALL
Pathways
Mon 04-01-2016
Apply visual grouping principles to add clarity to information flow in pathway diagrams.
We draw on the Gestalt principles of connection, grouping and enclosure to construct practical guidelines for drawing pathways with a clear layout that maintains hierarchy.
Nature Methods Points of View column: Pathways. (read)
We include tips about how to use negative space and align nodes to emphasizxe groups and how to effectively draw curved arrows to clearly show paths.
Hunnicutt, B.J. & Krzywinski, M. (2016) Points of Viev: Pathways. Nature Methods 13:5.
Wong, B. (2010) Points of Viev: Gestalt principles (part 1). Nature Methods 7:863.
Wong, B. (2010) Points of Viev: Gestalt principles (part 2). Nature Methods 7:941.
Multiple Linear Regression
Mon 04-01-2016
When multiple variables are associated with a response, the interpretation of a prediction equation is seldom simple.
This month we continue with the topic of regression and expand the discussion of simple linear regression to include more than one variable. As it turns out, although the analysis and presentation of results builds naturally on the case with a single variable, the interpretation of the results is confounded by the presence of correlation between the variables.
By extending the example of the relationship of weight and height—we now include jump height as a second variable that influences weight—we show that the regression coefficient estimates can be very inaccurate and even have the wrong sign when the predictors are correlated and only one is considered in the model.
Nature Methods Points of Significance column: Multiple Linear Regression. (read)
Care must be taken! Accurate prediction of the response is not an indication that regression slopes reflect the true relationship between the predictors and the response.
Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple Linear Regression Nature Methods 12:1103-1104.
Altman, N. & Krzywinski, M. (2015) Points of significance: Simple Linear Regression Nature Methods 12:999-1000.
Circos and Hive Workshop Workshop—Poznan, Poland
Sun 13-12-2015
Taught how Circos and hive plots can be used to show sequence relationships at Biotalent Functional Annotation of Genome Sequences Workshop at the Institute for Plant Genetics in Poznan, Poland.
Students generated images published in Fast Diploidization in Close Mesopolyploid Relatives of Arabidopsis.
Workshop materials: slides, handout, Circos and hive plot files.
Drawing synteny between modern and ancient genomes with Circos.
Students also learned how to use hive plots to show synteny.
Hive plots are great at showing 3-way sequence comparisons. Here three modern species of Australian Brassicaceae (S. nutans, S. lineare, B. antipoda) are compared based on their common relationships to the ancestral karotype.
Mandakova, T. et al. Fast Diploidization in Close Mesopolyploid Relatives of Arabidopsis The Plant Cell, Vol. 22: 2277-2290, July 2010
Play the Bacteria Game
Mon 14-12-2015
Nobody likes dusting but everyone should find dust interesting.
Working with Jeannie Hunnicutt and with Jen Christiansen's art direction, I created this month's Scientific American Graphic Science visualization based on a recent paper The Ecology of microscopic life in household dust.
An analysis of dust reveals how the presence of men, women, dogs and cats affects the variety of bacteria in a household. Appears on Graphic Science page in December 2015 issue of Scientific American.
We have also written about the making of the graphic, for those interested in how these things come together.
This was my third information graphic for the Graphic Science page. Unlike the previous ones, it's visually simple and ... interactive. Or, at least, as interactive as a printed page can be.
More of my American Scientific Graphic Science designs
Barberan A et al. (2015) The ecology of microscopic life in household dust. Proc. R. Soc. B 282: 20151139.
Names for 5,092 colors
Tue 03-11-2015
A very large list of named colors generated from combining some of the many lists that already exist (X11, Crayola, Raveling, Resene, wikipedia, xkcd, etc).
Confused? So am I. That's why I made a list.
For each color, coordinates in RGB, HSV, XYZ, Lab and LCH space are given along with the 5 nearest, as measured with ΔE, named neighbours.
I also provide a web service. Simply call this URL with an RGB string.
Simple Linear Regression
Sat 07-11-2015
It is possible to predict the values of unsampled data by using linear regression on correlated sample data.
This month, we begin our column with a quote, shown here in its full context from Box's paper Science and Statistics.
In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world.
Nature Methods Points of Significance column: Simple Linear Regression. (read)
This column is our first in the series about regression. We show that regression and correlation are related concepts—they both quantify trends—and that the calculations for simple linear regression are essentially the same as for one-way ANOVA.
While correlation provides a measure of a specific kind of association between variables, regression allows us to fit correlated sample data to a model, which can be used to predict the values of unsampled data.
Altman, N. & Krzywinski, M. (2015) Points of Significance: Simple Linear Regression Nature Methods 12:999-1000.
|
2016-02-06 02:42:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2916702628135681, "perplexity": 3879.61298762949}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145751.1/warc/CC-MAIN-20160205193905-00064-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1254530/self-complementary-graph-with-a-pendant-vertex
|
Self complementary graph with a pendant vertex
Show that if a self-complementary graph contains a pendant vertex, then it must have at least another pendant vertex.
Let $G$ be a graph of order $n$, so it has $n(n-1)/4$ edges, just like its complement. Asume that there is only one pendant vertex $v$, so $d(v)=1$, it means that $G^c$ has a vertex $w$ with $d(w)=n-2$. Then $G$ must also have a vertex of degree $n-2$ and $G^c$ one of degree $1$.
I guess I can reach somehow a contradiction with the fact that $n=4k$ or $n=4k+1$, but I got stuck there.
Can you give me a hint please? Like "this fact may be useful..."
A very nice problem. I did not know it, but I find it hard to give a hint without giving away the full proof. Here is a try: consider the adjacency of the vertex of degree 1 and the vertex of degree $n-2$.
Here is the proof then.
Suppose there is only one vertex $v$ of degree 1. Then there is also only one vertex $w$ of degree $n-2$. Let $\phi$ be an isomorphism mapping $G$ to its complement. Clearly $\phi(v)=w$ and $\phi(w)=v$. But this implies that $v$ and $w$ are adjacent in $G$, if and only if they are adjacent in the complement.
This contradiction shows that there must be another vertex of degree 1.
• Mmm, I don't know...$G$ must have $n-2$ vertices with degree greater or equal than 1...I still can't get it :'( – Lotte Apr 27 '15 at 21:31
• The proof is now added to the answer. – Leen Droogendijk Apr 28 '15 at 6:06
• I have alredy thought something similar but I couldn't keep going because $v$ is a vertex of $G$ and $w$ is a vertex of $G^c$, if they are correspondent by $\phi$ then they must have the same degree, isn't that correct? I know this: we have $v$ of degree 1 in $G$, via $\phi$ we get $\phi(v)=v'$ of degree 1 in $G^c$, in a similar way for $\phi(w)=w'$. And those are unique... Sorry, I'm very confused :/ – Lotte Apr 28 '15 at 18:19
• $G$ and $G^c$ have the same vertices, they only have different edges. $v$ has degree 1 in $G$, but it has degree $n-2$ in $G^c$. For $w$ it is the other way around. Therefore $\phi(v)$ must be $w$ and v.v. – Leen Droogendijk Apr 28 '15 at 19:04
• @Hermine If that doesn't cut it, I suggest simply trying to draw the graphs, focusing on $v$ and $w$. First suppose that $v$ and $w$ are neighbors, and find the contradiction. Then suppose they aren't neighbors - you'll reach another contradiction. – Manuel Lafond Apr 28 '15 at 21:06
|
2021-04-13 00:24:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.802315354347229, "perplexity": 102.03496326609157}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038071212.27/warc/CC-MAIN-20210413000853-20210413030853-00291.warc.gz"}
|
http://www.nutils.org/en/latest/nutils/solver/
|
# solver¶
The solver module defines solvers for problems of the kind res = 0 or ∂inertia/∂t + res = 0, where res is a nutils.sample.Integral. To demonstrate this consider the following setup:
>>> from nutils import mesh, function, solver
>>> ns = function.Namespace()
>>> domain, ns.x = mesh.rectilinear([4,4])
>>> ns.basis = domain.basis('spline', degree=2)
>>> cons = domain.boundary['left,top'].project(0, onto=ns.basis, geometry=ns.x, ischeme='gauss4')
project > constrained 11/36 dofs, error 0.00e+00/area
>>> ns.u = 'basis_n ?lhs_n'
Function u represents an element from the discrete space but cannot not evaluated yet as we did not yet establish values for ?lhs. It can, however, be used to construct a residual functional res. Aiming to solve the Poisson problem u_,kk = f we define the residual functional res = v,k u,k + v f and solve for res == 0 using solve_linear:
>>> res = domain.integral('(basis_n,i u_,i + basis_n) d:x' @ ns, degree=2)
>>> lhs = solver.solve_linear('lhs', residual=res, constrain=cons)
solve > solver returned with residual ...
The coefficients lhs represent the solution to the Poisson problem.
In addition to solve_linear the solver module defines newton and pseudotime for solving nonlinear problems, as well as impliciteuler for time dependent problems.
nutils.solver.solve_linear(target, residual, constrain=None, *, arguments={}, solveargs={})
solve linear problem
Parameters
Returns
Array of target values for which residual == 0
Return type
numpy.ndarray
nutils.solver.solve(gen_lhs_resnorm, tol=0.0, maxiter=inf)
execute nonlinear solver, return lhs
Iterates over nonlinear solver until tolerance is reached. Example:
lhs = solve(newton(target, residual), tol=1e-5)
Parameters
Returns
Coefficient vector that corresponds to a smaller than tol residual.
Return type
numpy.ndarray
nutils.solver.solve_withinfo(gen_lhs_resnorm, tol=0.0, maxiter=inf)
execute nonlinear solver, return lhs and info
Like solve(), but return a 2-tuple of the solution and the corresponding info object which holds information about the final residual norm and other generator-dependent information.
class nutils.solver.RecursionWithSolve(*args, **kwargs)
add a .solve method to (lhs,resnorm) iterators
Introduces the convenient form:
newton(target, residual).solve(tol)
Shorthand for:
solve(newton(target, residual), tol)
solve_withinfo(gen_lhs_resnorm, tol=0.0, maxiter=inf)
execute nonlinear solver, return lhs and info
Like solve(), but return a 2-tuple of the solution and the corresponding info object which holds information about the final residual norm and other generator-dependent information.
solve(gen_lhs_resnorm, tol=0.0, maxiter=inf)
execute nonlinear solver, return lhs
Iterates over nonlinear solver until tolerance is reached. Example:
lhs = solve(newton(target, residual), tol=1e-5)
Parameters
Returns
Coefficient vector that corresponds to a smaller than tol residual.
Return type
numpy.ndarray
class nutils.solver.newton(target, residual, jacobian=None, lhs0=None, constrain=None, searchrange=(0.01, 0.6666666666666666), droptol=None, rebound=2.0, failrelax=1e-06, arguments={}, solveargs={})
iteratively solve nonlinear problem by gradient descent
Generates targets such that residual approaches 0 using Newton procedure with line search based on the residual norm. Suitable to be used inside solve.
An optimal relaxation value is computed based on the following cubic assumption:
|res(lhs + r * dlhs)|^2 = A + B * r + C * r^2 + D * r^3
where A, B, C and D are determined based on the current residual and tangent, the new residual, and the new tangent. If this value is found to be close to 1 then the newton update is accepted.
Parameters
Yields
numpy.ndarray – Coefficient vector that approximates residual==0 with increasing accuracy
__weakref__
list of weak references to the object (if defined)
class nutils.solver.minimize(target, energy, lhs0=None, constrain=None, searchrange=(0.01, 0.5), rebound=2.0, droptol=None, failrelax=1e-06, arguments={}, solveargs={})
iteratively minimize nonlinear functional by gradient descent
Generates targets such that residual approaches 0 using Newton procedure with line search based on the energy. Suitable to be used inside solve.
An optimal relaxation value is computed based on the following assumption:
energy(lhs + r * dlhs) = A + B * r + C * r^2 + D * r^3 + E * r^4 + F * r^5
where A, B, C, D, E and F are determined based on the current and new energy, residual and tangent. If this value is found to be close to 1 then the newton update is accepted.
Parameters
Yields
numpy.ndarray – Coefficient vector that approximates residual==0 with increasing accuracy
__weakref__
list of weak references to the object (if defined)
class nutils.solver.pseudotime(target, residual, inertia, timestep, lhs0=None, constrain=None, arguments={}, solveargs={})
iteratively solve nonlinear problem by pseudo time stepping
Generates targets such that residual approaches 0 using hybrid of Newton and time stepping. Requires an inertia term and initial timestep. Suitable to be used inside solve.
Parameters
Yields
numpy.ndarray with dtype float – Tuple of coefficient vector and residual norm
__weakref__
list of weak references to the object (if defined)
class nutils.solver.thetamethod(target, residual, inertia, timestep, lhs0, theta, target0='_thetamethod_target0', constrain=None, newtontol=1e-10, arguments={}, newtonargs={})
solve time dependent problem using the theta method
Parameters
Yields
numpy.ndarray – Coefficient vector for all timesteps after the initial condition.
__weakref__
list of weak references to the object (if defined)
nutils.solver.optimize(target, functional, *, newtontol=0.0, arguments={}, **kwargs)
find the minimizer of a given functional
Parameters
Yields
numpy.ndarray – Coefficient vector corresponding to the functional optimum
|
2019-04-22 20:01:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46847596764564514, "perplexity": 6613.213363997986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582584.59/warc/CC-MAIN-20190422195208-20190422221208-00512.warc.gz"}
|
https://physics.aps.org/articles/v15/89
|
Research News
# One Model to Describe Them All—Well, Ice Giants Anyway
Physics 15, 89
The spectral responses of the atmospheres of Uranus and Neptune can now both be fully characterized with the same model, an achievement that has implications for characterizing the atmospheres of exoplanets.
When it comes to space missions, the focus has largely lain with planets close to Earth, with those farther afield being left out in the cold. But that is about to change with a dedicated mission to Uranus slated for the 2030s. Scientists say that this trip will help them better understand the atmospheres of both Uranus and Neptune, our Solar System’s two ice giants–cold gas behemoths composed mostly of elements heavier than hydrogen and helium. Now, Patrick Irwin from the University of Oxford, UK, and colleagues have, for the first time, simultaneously analyzed the reflectance spectra of both ice giants using the same model [1]. In doing so, the team made two serendipitous discoveries about the visual appearances of both worlds, unlocking the reason why they shimmer in different hues of blue and why Neptune has dark spots. The team says that the findings could have implications for the study of the atmospheres of planets beyond our Solar System.
While there have been no previous stand-alone missions to Uranus and Neptune, the planets have not been completely ignored. For example, as NASA’s Voyager 2 passed by the ice giants in the late 1980s, it collected information about the reflectance of both worlds. Then in 1994, the Hubble Space Telescope captured its first images of the planets’ icy exteriors.
Most of the data collected from these and other ground-based observations look at small spectral regions of the light that Uranus and Neptune reflect and emit. This limited data makes it difficult for researchers to determine the properties of certain aerosol particles in the planets’ atmospheres. This problem is particularly acute for Neptune because its small size in the sky makes it hard to collect the longer wavelengths that the planet emits.
To solve that issue, Irwin and colleagues adapted a model that has previously been used to explore the reflectance spectra of nearly every other planet in our Solar System, as well as a few exoplanets. They adapted the model to work over a wide wavelength range, from 0.3 to 2.5 $𝜇\text{m}$. The team then used the model to simultaneously analyze available observational data for both ice giants.
The team’s analysis reveals the presence of what they believe to be hydrocarbon-based aerosols high up in the stratospheres of both Uranus and Neptune. These particles fall through the planets’ atmospheres, mixing and reacting with gas, such as methane, that is simultaneously moving up. This process creates a haze around each planet. The team shows that Neptune’s atmosphere is more dynamic, giving Neptune a thinner haze—and darker hue—than Uranus. The team also finds that these aerosols could also be behind Neptune’s previously unexplained dark spots. After interacting with methane, the aerosols then meet hydrosulfide, which resides deeper in both Neptune’s and Uranus’ atmospheres. The team thinks the dark spots develop in places on Neptune that have a lower density of this material, with the patchiness coming from the dynamics of its enveloping gas.
Understanding the atmospheres of ice giants could be important for future exoplanet research, as it’s possible that this planet type is among the most abundant in the Milky Way, says Erich Karkoschka, a planetary researcher at the University of Arizona. That thinking comes because, to date, most of the exoplanets that have been found in our Galaxy have masses on the order of those of Uranus and Neptune.
Karkoschka notes that because Uranus and Neptune have such similar sizes, masses, and compositions, Irwin and his colleagues were able to fit their data with the same model, but it’s unlikely that a similar analysis could be done with any other two of our Solar System’s planets. That ability has implications for characterizing other ice giants. “If you had one [model] for Uranus and a different one for Neptune, then you wouldn’t know which to use” for another ice giant, he says. But, since this model works for both planets, he thinks there is “validity” in putting it to use on related systems.
–Allison Gasparini
Allison Gasparini is a freelance science writer based in Santa Cruz, CA.
## References
1. P. G. J. Irwin et al., “Hazy blue worlds: A holistic aerosol model for Uranus and Neptune, including dark spots,” JGR Planets 127 (2022).
Astrophysics
## Recent Articles
Astrophysics
### X-Ray Fireworks Linked to Fast Radio Bursts
Predictions indicate that when a neutron star radiates a burst of radio waves, interactions of the burst with the star’s magnetic field should produce observable x rays. Read More »
Nuclear Physics
### Neutrinos from a Black Hole Snack
Researchers have found new evidence that high-energy neutrinos are emitted when a black hole gobbles up a hapless star. Read More »
Gravitation
### Ionizing Black Hole “Atoms”
Distinctive features of gravitational-wave signals from black hole mergers could reveal the existence of long-sought ultralight bosons. Read More »
|
2022-06-25 17:31:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4553901255130768, "perplexity": 1738.3503407117503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00604.warc.gz"}
|
https://www.physicsforums.com/threads/formula-for-the-energy-of-elastic-deformation.998981/
|
# Formula for the energy of elastic deformation
baw
In every book I checked, the energy (per unit mass) of elastic deformation is derived as follows:
## \int \sigma_1 d \epsilon_1 = \frac{\sigma_1 \epsilon_1}{2} ##
and then, authors (e.g. Timoshenko & Goodier) sum up such terms and substitute ##\epsilon ## from generalised Hooke's law i.e.
## \epsilon_1=\frac{1}{E} (\sigma_1 -\nu \sigma_2 -\nu \sigma_3) ##
## \epsilon_2=\frac{1}{E} (\sigma_2 -\nu \sigma_1 -\nu \sigma_3) ##
## \epsilon_3=\frac{1}{E} (\sigma_3 -\nu \sigma_2 -\nu \sigma_1) ##
obtaining:
##V=\frac{1}{2E} (\sigma_1^2 +\sigma_2^2+\sigma_3^2 )-\frac{\nu}{E}(\sigma_1 \sigma_2+\sigma_2 \sigma_3 + \sigma_1 \sigma_3) ##
but... is it correct to substitute generalised Hooke's law after the integration? The formula is obtained as if simple ##\sigma = E \epsilon ## was used. As in the attached figure, it looks like they assume that ##\sigma_x ## has no term independent on ##\epsilon_x ##, despite that Hooke's law can be transformed to:
## \sigma_1=\frac{(\nu -1)E}{(\nu +1)(2 \nu-1)} \epsilon_1 - \frac{\nu E}{(\nu+1)(2\nu -1)}(\epsilon_2+\epsilon_3) ##
##\sigma_2=(...) ##
##\sigma_3=(...) ##
where this term is present. Shouldn't we integrate the above formula? Could someone please, explain me why it is correct?
#### Attachments
• skrin4.PNG
45.2 KB · Views: 57
Gold Member
It’s energy per unit volume.
Conservation of energy implies that elastic energy is independent of order applied and depends only on final state. Otherwise one could find an order that creates/destroys energy.
The first method takes advantage of this and applies the stresses/strains indendently and then adds them together. There are subtleties to this that I cannot do justice to.
The second method applies everything at once and then integrates.
Last edited:
baw
baw
Lets say we applied ##\sigma_1## at first and got ##\epsilon_1## as well as some ##\epsilon_2## and ##\epsilon_3##. The (specific) work done is ##\frac{\sigma_1^2}{2E}##. If we now apply ##\sigma_2## we already have some initial strain, so the plot ##\sigma_2(\epsilon_2)## moves downward by ##\frac{\nu}{E}\sigma_1##. If we now integrate it, we get ##\frac{\sigma_2^2}{2E}-\frac{\nu}{E}\sigma_1 \sigma_2##. Then, ##\sigma_3(\epsilon_3)## is shifted by ##\frac{\nu}{E}(\sigma_1+\sigma_2)## and suma summarum, after the integration we get the right formula. I got it, thanks!
Btw. it means that I just made some mistke in the second method and that's why I didn't got the same answer, doesn't it?
Mentor
In your starting equations, evaluate the differentials of the strains in terms of the differentials in the three stresses. Then, multiply each differential of strain by its corresponding stress, and add up the resulting 3 equations. What do you get?
|
2023-02-03 04:30:57
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8609508275985718, "perplexity": 3534.0330358900173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00103.warc.gz"}
|
http://www.ams.org/bookstore-getitem/item=GSM-64
|
New Titles | FAQ | Keep Informed | Review Cart | Contact Us Quick Search (Advanced Search ) Browse by Subject General Interest Logic & Foundations Number Theory Algebra & Algebraic Geometry Discrete Math & Combinatorics Analysis Differential Equations Geometry & Topology Probability & Statistics Applications Mathematical Physics Math Education
Lectures on the Orbit Method
A. A. Kirillov, University of Pennsylvania, Philadelphia, PA
SEARCH THIS BOOK:
2004; 408 pp; hardcover
Volume: 64
ISBN-10: 0-8218-3530-0
ISBN-13: 978-0-8218-3530-2
List Price: US$75 Member Price: US$60
Order Code: GSM/64
Representations of Semisimple Lie Algebras in the BGG Category $$\mathscr {O}$$ - James E Humphreys
Introduction to Representation Theory - Pavel Etingof, Oleg Golberg, Sebastian Hensel, Tiankai Liu, Alex Schwendner, Dmitry Vaintrob and Elena Yudovina
Isaac Newton encrypted his discoveries in analysis in the form of an anagram that deciphers to the sentence, "It is worthwhile to solve differential equations". Accordingly, one can express the main idea behind the orbit method by saying "It is worthwhile to study coadjoint orbits".
The orbit method was introduced by the author, A. A. Kirillov, in the 1960s and remains a useful and powerful tool in areas such as Lie theory, group representations, integrable systems, complex and symplectic geometry, and mathematical physics. This book describes the essence of the orbit method for non-experts and gives the first systematic, detailed, and self-contained exposition of the method. It starts with a convenient "User's Guide" and contains numerous examples. It can be used as a text for a graduate course, as well as a handbook for non-experts and a reference book for research mathematicians and mathematical physicists.
Graduate students and research mathematicians interested in representation theory.
Reviews
"The book offers a nicely written, systematic and read-able description of the orbit method for various classes of Lie groups. ...should be on the shelves of mathematicians and theoretical physicists using representation theory in their work."
|
2015-03-02 20:40:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35953372716903687, "perplexity": 1400.9428320749105}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463028.70/warc/CC-MAIN-20150226074103-00168-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/3119012/fermat-primality-test-for-a-n-1
|
Fermat primality test for $a=n-1$
If we want to know if $$n$$ is prime, we can do the Fermat primality test:
if $$a^{n-1}\not\equiv 1 \mod n$$, then $$n$$ is not prime.
Now I often find that we choose therefore $$a\in\{2,\ldots, n-2\}$$. Why not $$a=n-1$$? In Wikipedia, I found that for $$a=n-1$$ the congruence always holds. Why?
If $$n$$ is odd (usually, this is assumed) and $$a=n-1$$, then we have $$(n-1)^{n-1}\equiv (-1)^{n-1}=1\mod n;$$ hence the criterion is satisfied whenever $$n$$ is odd.
For $$n>2$$, $$(n-1)^{n-1}=k(n).n+(-1)^{n-1}$$ where $$k(n)$$ is given by the rest of the terms via the binomial expansion. Since all prime numbers greater than 2 are odd, $$n-1$$ is even for all cases of interest. So, one has $$(n-1)^{n-1}\equiv1\ (mod\ n)$$, for odd $$n$$. So, you don't have to check this case.
• All prime numbers except $2$ are odd; $2$ is an "odd" prime – J. W. Tanner Feb 19 at 16:23
• You might clarify what you meant by $k(n)$ – J. W. Tanner Feb 19 at 16:24
Computational expense, and as others have mentioned it works for any odd number in that case. If n is say 200 bits(25 bytes) long, it will Take 625 byte multiplies, and about the same number of byte additions with carries, Just to implement in Karatsuba multiplication ( for 1 increase in power). It doesn't take much overhead to make one method slower than another in certain ranges. About the only use for it if too slow for primality is the:$$a^{-1}\equiv a^{n-2} \pmod n$$ version of the test.
|
2019-12-07 22:11:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7351698279380798, "perplexity": 317.4204260692638}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540502120.37/warc/CC-MAIN-20191207210620-20191207234620-00026.warc.gz"}
|
https://www.physicsoverflow.org/22024/reference-stochastic-processes-moving-basic-measure-theory
|
# Reference for stochastic processes which helps moving from a basic level to a measure theory one
+ 2 like - 0 dislike
557 views
I'm looking for a reference (books, notes, lectures) which helps a physicist to understand the language of measure theory in the context of stochastic processes (in particular markov chains).
I've studied markov chains and measure theory but now I'm looking for something which helps me filling the gap and making this two topics converge.
I've already read: Measure, Integral and Probability - Marek Capinski, Peter E. Kopp
Maybe something with direct comparison (which writes the same probability both at a basic level and in measure theory) would be great!
Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor) Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\varnothing$ysicsOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
|
2018-10-21 08:52:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3786652088165283, "perplexity": 1397.9501790858687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513804.32/warc/CC-MAIN-20181021073314-20181021094814-00388.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Cubic_Equation/Resolvent
|
# Definition:Cubic Equation/Resolvent
## Definition
Let $P$ be the cubic equation:
$a x^3 + b x^2 + c x + d = 0$ with $a \ne 0$
Let:
$y = x + \dfrac b {3 a}$
$Q = \dfrac {3 a c - b^2} {9 a^2}$
$R = \dfrac {9 a b c - 27 a^2 d - 2 b^3} {54 a^3}$
Let $y = u + v$ where $u v = -Q$.
The resolvent equation of the cubic is given by:
$u^6 - 2 R u^3 - Q^3$
## Also defined as
Some sources introduce Cardano's Formula starting from the cubic:
$x^3 + q x - r = 0$
to which the general cubic can be reduced to using the Tschirnhaus Transformation.
In this form, the resolvent equation of the cubic is given by:
$u^6 - r u^3 - \dfrac q {27}$
|
2022-05-20 08:05:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9647910594940186, "perplexity": 1446.4081302946354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531762.30/warc/CC-MAIN-20220520061824-20220520091824-00321.warc.gz"}
|
http://qphomeworkkjmd.skylinechurch.us/identify-three-behaviors-inherent-in-e-tailing-note-the-communications-medium-in.html
|
# Identify three behaviors inherent in e tailing note the communications medium in
\rightline{please note things that should be indexed but aren't} \medskip \rightline using only a telephone for communication {1'e}\{z_{6'}-z_{1'}\}\to z_{7e}\to\{z_{6'}-z_{1'}\}z_{6'e}$\cr @[email protected]$z_{3'e}\{z_{4'}-z_{3'}\}\to z_{8e}\to\{z_{4'}-z_{3'}\}z_{4'e}\$\cr. This case study investigates current and future states of marketing communications in asia pacific regions (eg organizational citizenship behavior) the study extended its horizon to the economic slowdown as a cause with the uncertainty inherent in the business world. Resource: university of phoenix material: appendix a identify three behaviors inherent in e-tailing (in business-to-consumer relationships/ communications)note the communications medium in which each behavior occurs explain how each medium enables e-commerce analyze each behavior using the communication process. Three behaviors inherent in e tailing essays join login the examination of the effects of e-mail on management communication and organizational behavior mark hankins brenau university identify four areas of inherent risk that might. E-tailers must understand customer behaviors to serve their customers and help understand the decision rocess customers go through prior to making a purchase the purpose of this paper is to identify three behaviors inherent in electronic retailing (e-tailing) a discussion related to the communications medium in which each behavior occurs. You will note that there are links from many of the words and phrases in this text e-tailing can facilitate the transaction and offer a medium for the customer to dialog with the e-tailer. Identify three behaviors inherent in e-tailing note the communications medium in which each behavior occurs i have - answered by a verified tutor.
Home » blogs » neuroscience and relationships » 3 types of change your brain adapts: reinforcing behaviors, 1 of 3 neuroscience and relationships about the blog you come fully equipped to wisely and effectively address the inherent challenges of life note that, all these bodily. Identify- three behaviors inherent in e-tailing note the communication medium in which each behavior occurs. To propose a framework that evaluates effectiveness of online marketing efforts in terms of overall appeal of e-tailing sites much of the contemporary literature on e-marketing assesses it as a transaction medium [16], [36] or a communication al-ghaith et al [3] identify e-trust.
Burdens of proof applicable to investigations and reviews related to protected communications of members of the armed forces and prohibited retaliatory actions applied behavior analysis procurement of medium-range discrimination radar to improve homeland missile defense. Communication: communication, the exchange of meanings between individuals through a common system of symbols this article treats the functions, types, and psychology of communication for a treatment of animal communication, see animal behaviour for further treatment of the basic components and techniques of.
## Identify three behaviors inherent in e tailing note the communications medium in
Identify three behaviors inherent in e tailing join login the research paper factory business communication - identify the three sources of mr basu's information discuss the main filter involved in this case all identify three behaviors inherent in e tailing essays and term papers. Design of a small-scale unmanned helicopter simulation system nasa astrophysics data system (ads) deng, hong-bin wang, jin-hua liu, pei-zhi peng, yu-hua tang, xiao-ying li. Sample records for incorporating model quality one of the main objectives of pca is to identify, through data reduction, the recurring and independent evaluation thereby hastening and facilitating understanding of the probable mechanisms responsible for the unique behavior among bias.
Identify three behaviors inherent in e-tailing note the communications medium in which each behavior occurs explain how each medium enables e-commerce. Identify three behaviors inherent in e-tailing note the communications medium in which each behavior occurs explain how each medium enables e-commerce analyze each behavior using the communication process the analysis should include descriptions of the purpose, sender, receiver, message, environment, read more. While some behavior expressions a medium of communication is, in short the medium is, to the extent that we can select among media, also a language such that the message of the medium is not only inherent to a message. Political communication quiz #3 the images the media create suggest which views and behaviors are acceptable and even praiseworthy and organizational model is based on organizational theory its proponents contend that the pressures inherent in organizational processes and goals. Identify- three behaviors inherent in e-tailing note the communication medium in which each behavior occurs explain- how each medium enables e-comerce analyze- each behavior using the communication process the analysis. American journal of public health (ajph) from the american public health association (apha) independently coded all visits for 3 communication behaviors providers may perceive a need to leverage the inherent value of participatory approaches in cultivating strong provider-parent.
Limited find all educational solutions here search here limited. -- mysql dump 911 -- -- host: localhost database: infovis -- ----- -- server version 424-standard -- -- table structure for table archive -- create table archive ( ar_namespace int(11) not null default '0', ar_title varchar(255) binary not null default '', ar_text mediumtext not null, ar_comment tinyblob not null, ar_user int(5) unsigned. Scientific research on nonverbal communication and behavior was started in 1872 with the publication of charles darwin's book the expression of it is important to note that while nonverbal communication is more prevalent in indigenous people learn to identify facial. Explain what early thoughts and beliefs existed about what caused mental illnessdescribe how one of the psychological traditions helped further the beliefs login identify three behaviors inherent in e-tailing note the communications medium in which each behavior occurs. Answer to identify- three behaviors inherent in e-tailing note the communication medium in which each behavior occurs. Models of communication shannon and weaver's this approach is often adopted by critical theorists who believe that the role of communication theory is to identify oppression and produce social science communication: gateway belief model notes references craig.
Identify three behaviors inherent in e tailing note the communications medium in
Rated 3/5 based on 20 review
|
2018-10-23 04:59:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.180922731757164, "perplexity": 6478.313484233025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516071.83/warc/CC-MAIN-20181023044407-20181023065907-00195.warc.gz"}
|
https://msp.org/ant/2008/2-5/p02.xhtml
|
Vol. 2, No. 5, 2008
Recent Issues
The Journal About the Journal Subscriptions Editorial Board Editors' Interests Submission Guidelines Submission Form Editorial Login Ethics Statement Author Index To Appear ISSN: 1944-7833 (e-only) ISSN: 1937-0652 (print) Other MSP Journals
Tate resolutions for Segre embeddings
David A. Cox and Evgeny Materov
Vol. 2 (2008), No. 5, 523–549
Abstract
We give an explicit description of the terms and differentials of the Tate resolution of sheaves arising from Segre embeddings of ${ℙ}^{a}×{ℙ}^{b}$. We prove that the maps in this Tate resolution are either coming from Sylvester-type maps, or from Bezout-type maps arising from the so-called toric Jacobian.
Keywords
Tate resolution, Segre embedding, toric Jacobian
Primary: 13D02
Secondary: 14M25
|
2019-04-22 18:30:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18164542317390442, "perplexity": 6380.02467875056}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578577686.60/warc/CC-MAIN-20190422175312-20190422201312-00160.warc.gz"}
|
http://kitchingroup.cheme.cmu.edu/blog/category/gotcha/
|
## Potential gotchas in linear algebra in numpy
| categories: | tags: | View Comments
Numpy has some gotcha features for linear algebra purists. The first is that a 1d array is neither a row, nor a column vector. That is, $$a$$ = $$a^T$$ if $$a$$ is a 1d array. That means you can take the dot product of $$a$$ with itself, without transposing the second argument. This would not be allowed in Matlab.
import numpy as np
a = np.array([0, 1, 2])
print a.shape
print a
print a.T
print
print np.dot(a, a)
print np.dot(a, a.T)
>>> >>> (3L,)
[0 1 2]
[0 1 2]
>>>
5
5
Compare the previous behavior with this 2d array. In this case, you cannot take the dot product of $$b$$ with itself, because the dimensions are incompatible. You must transpose the second argument to make it dimensionally consistent. Also, the result of the dot product is not a simple scalar, but a 1 × 1 array.
b = np.array([[0, 1, 2]])
print b.shape
print b
print b.T
print np.dot(b, b) # this is not ok, the dimensions are wrong.
print np.dot(b, b.T)
print np.dot(b, b.T).shape
(1L, 3L)
[[0 1 2]]
[[0]
[1]
[2]]
>>> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: objects are not aligned
[[5]]
(1L, 1L)
Try to figure this one out! x is a column vector, and y is a 1d vector. Just by adding them you get a 2d array.
x = np.array([[2], [4], [6], [8]])
y = np.array([1, 1, 1, 1, 1, 2])
print x + y
>>> [[ 3 3 3 3 3 4]
[ 5 5 5 5 5 6]
[ 7 7 7 7 7 8]
[ 9 9 9 9 9 10]]
Or this crazy alternative way to do the same thing.
x = np.array([2, 4, 6, 8])
y = np.array([1, 1, 1, 1, 1, 1, 2])
print x[:, np.newaxis] + y
>>> >>> [[ 3 3 3 3 3 3 4]
[ 5 5 5 5 5 5 6]
[ 7 7 7 7 7 7 8]
[ 9 9 9 9 9 9 10]]
In the next example, we have a 3 element vector and a 4 element vector. We convert $$b$$ to a 2D array with np.newaxis, and compute the outer product of the two arrays. The result is a 4 × 3 array.
a = np.array([1, 2, 3])
b = np.array([10, 20, 30, 40])
print a * b[:, np.newaxis]
>>> >>> [[ 10 40 90]
[ 20 80 180]
[ 30 120 270]
[ 40 160 360]]
These are points to keep in mind, as the operations do not strictly follow the conventions of linear algebra, and may be confusing at times.
org-mode source
## Integrating the Fermi distribution to compute entropy
| categories: | tags: | View Comments
The Fermi distribution is defined by $$f(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/(k T)} + 1}$$. This function describes the occupation of energy levels at temperatures above absolute zero. We use this function to compute electronic entropy in a metal, which contains an integral of $$\int n(\epsilon) (f \ln f + (1 - f) \ln (1-f)) d\epsilon$$, where $$n(\epsilon)$$ is the electronic density of states. Here we plot the Fermi distribution function. It shows that well below the Fermi level the states are fully occupied, and well above the Fermi level, they are unoccupied. Near the Fermi level, the states go from occupied to unoccupied smoothly.
import numpy as np
import matplotlib.pyplot as plt
mu = 0
k = 8.6e-5
T = 1000
def f(e):
return 1.0 / (np.exp((e - mu)/(k*T)) + 1)
espan = np.linspace(-10, 10, 200)
plt.plot(espan, f(espan))
plt.ylim([-0.1, 1.1])
plt.savefig('images/fermi-entropy-integrand-1.png')
Let us consider a simple density of states function, just a parabola. This could represent a s-band for example. We will use this function to explore the integral.
import numpy as np
import matplotlib.pyplot as plt
mu = 0
k = 8.6e-5
T = 1000
def f(e):
return 1.0 / (np.exp((e - mu)/(k*T)) + 1)
def dos(e):
d = (np.ones(e.shape) - 0.03 * e**2)
return d * (d > 0)
espan = np.linspace(-10, 10)
plt.plot(espan, dos(espan), label='Total dos')
plt.plot(espan, f(espan) * dos(espan), label='Occupied states')
plt.legend(loc='best')
plt.savefig('images/fermi-entropy-integrand-2.png')
Now, we consider the integral to compute the electronic entropy. The entropy is proportional to this integral.
$$\int n(\epsilon) (f \ln f + (1 - f) \ln (1-f)) d\epsilon$$
It looks straightforward to compute, but it turns out there is a wrinkle. Evaluating the integrand leads to nan elements because the ln(0) is -∞.
import numpy as np
mu = 0
k = 8.6e-5
T = 100
def fermi(e):
return 1.0 / (np.exp((e - mu)/(k*T)) + 1)
espan = np.array([-20, -10, -5, 0.0, 5, 10])
f = fermi(espan)
print f * np.log(f)
print (1 - f) * np.log(1 - f)
[ 0.00000000e+000 0.00000000e+000 0.00000000e+000 -3.46573590e-001
-1.85216532e-250 nan]
[ nan nan nan -0.34657359 0. 0. ]
In this case, these nan elements should be equal to zero (x ln(x) goes to zero as x goes to zero). So, we can just ignore those elements in the integral. Here is how to do that.
import numpy as np
import matplotlib.pyplot as plt
mu = 0
k = 8.6e-5
T = 1000
def fermi(e):
return 1.0 / (np.exp((e - mu)/(k*T)) + 1)
def dos(e):
d = (np.ones(e.shape) - 0.03 * e**2)
return d * (d > 0)
espan = np.linspace(-20, 10)
f = fermi(espan)
n = dos(espan)
g = n * (f * np.log(f) + (1 - f) * np.log(1 - f))
print np.trapz(espan, g) # nan because of the nan in the g vector
print g
plt.plot(espan, g)
plt.savefig('images/fermi-entropy-integrand-3.png')
# find the elements that are not nan
ind = np.logical_not(np.isnan(g))
# evaluate the integrand for only those points
print np.trapz(espan[ind], g[ind])
nan
[ nan nan nan nan
nan nan nan nan
nan nan nan nan
nan nan nan nan
nan nan nan nan
nan nan nan nan
nan nan nan nan
-9.75109643e-14 -1.05987106e-10 -1.04640574e-07 -8.76265644e-05
-4.92684641e-02 -2.91047740e-01 -7.75652579e-04 -1.00962241e-06
-1.06972936e-09 -1.00527877e-12 -8.36436686e-16 -6.48930917e-19
-4.37946336e-22 -2.23285389e-25 -1.88578082e-29 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00]
0.208886080897
The integrand is pretty well behaved in the figure above. You do not see the full range of the x-axis, because the integrand evaluates to nan for very negative numbers. This causes the trapz function to return nan also. We can solve the problem by only integrating the parts that are not nan. We have to use numpy.logicalnot to get an element-wise array of which elements are not nan. In this example, the integrand is not well sampled, so the area under that curve may not be very accurate.
|
2020-01-17 21:19:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43216484785079956, "perplexity": 1584.5873402478262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591234.15/warc/CC-MAIN-20200117205732-20200117233732-00251.warc.gz"}
|
https://pos.sissa.it/430/419/
|
Volume 430 - The 39th International Symposium on Lattice Field Theory (LATTICE2022) - Weak Decays and Matrix Elements
Towards precision lattice determination of semileptonic $D \rightarrow \pi \ell \nu$, $D \rightarrow K \ell \nu$ and $D_s \rightarrow K \ell \nu$ decay form factors
M. Marshall
Full text: Not available
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete.
Open Access
|
2022-12-08 00:32:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5038305521011353, "perplexity": 3973.189869972772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711221.94/warc/CC-MAIN-20221207221727-20221208011727-00168.warc.gz"}
|
https://strucbio.biologie.uni-konstanz.de/ccp4wiki/index.php?title=SHELX_C/D/E&veaction=edit
|
# SHELX C/D/E
SHELXC, SHELXD and SHELXE are stand-alone executables that do not require environment variables or parameter files etc., so all that is needed to install them is to put them in a directory that is in the ‘path’ (e.g. /usr/local/bin or ~/bin under Linux). There is a detailed description of these programs in the paper: "Experimental phasing with SHELXC/D/E: combining chain tracing with density modification". Sheldrick, G.M. (2010). Acta Cryst. D66, 479-485. It is available as "Open Access" at http://dx.doi.org/10.1107/S0907444909038360 and should be cited whenever these programs are used.
hkl2map is a graphical user interface that makes it easy to use these programs.
XDSGUI is a graphical user interface for XDS that also makes it easy to use these programs.
## SHELXC
SHELXC is designed to provide a simple and fast way of setting up the files for the programs SHELXD (heavy atom location) and SHELXE (phasing and density modification) for macromolecular phasing by the MAD, SAD, SIR and SIRAS methods. These three programs may be run in batch mode or called from a GUI such as CCP4i or (better) hkl2map. SHELXC is much less versatile than the Bruker AXS XPREP program for this purpose, but if you are sure of the space group and there are no problems with the indexing or twinning and the f’ and f” parts of the scattering factors do not need to be refined, SHELXC should be adequate.
The starting phases for density modification are estimated as (heavy atom phase + α) in the simplified approach used by SHELXE, α is calculated by SHELXC from the anomalous and dispersive differences. For SAD α is 90º (I+ > I) or 270º (I+ < I), for SIR and RIP α is 0º or 180º and for SIRAS or MAD α may be anywhere in the range 0º to 360º.
SHELXC reads a filename stem (denoted here by 'xx') on the command line plus some instructions from 'standard input'. It writes some statistics to 'standard output' and prepares the three files needed to run SHELXD and SHELXE. SHELXC can be called from a GUI by a command line such as:
shelxc xx <t
which would read the instructions from the file t, or (under most UNIX systems) by a simple shell script that includes the instructions, e.g.
shelxc xx <<EOF
CELL 49.70 57.90 74.17 90 90 90
SPAG P212121
FIND 12
<<EOF
shelxd xx_fa
shelxe xx xx_fa -s0.37 -m20 -h -b
shelxe xx xx_fa -s0.37 -m20 -h -b -i
which would also run shelxd to locate the sulfur atoms and shelxe (for both substructure enantiomers) to solve elastase by sulfur-SAD phasing. The reflection data may be in SHELX (.hkl), HKL2000 (.sca) or XDS XDS_ASCII.HKL format. Any names may be used for XDS reflection files, SHELXC recognises them by reading the first record.
This script would read data from the .sca file and write the files xx.hkl (h,k,l,I,sig(I) in SHELX HKLF4 format for density modification by SHELXE or refinement with SHELXL), xx_fa.ins (cell, symmetry etc. for heavy atoms location by SHELXD) and xx_fa.hkl (h,k,l,FA,sig(FA),alpha for both SHELXD and SHELXE). The starting phases for density modification are estimated as given above.
For SIR or SIRAS, two input reflections files are specified by the keywords NAT and SIR or SIRA; for MAD at least two of the reflection files HREM, LREM, PEAK and INFL are required and NAT may also be given if higher resolution native data are available (e.g. SMet for SeMet MAD). Reflection data should be in SHELX .hkl or SCALEPACK .sca format; many other programs, including SCALA and XPREP, can output .sca format too. The keywords CELL, SPAG (space group) and FIND (number of heavy atoms) are always required, SFAC, MIND, NTRY, SHEL, ESEL and DSUL may be given and are written to the file xx_fa.ins for SHELXD. MAXM can be used to reserve memory in units of 1M reflections. For RIP phasing, NAT (or BEFORE) denotes the file before radiation damage and RIP (or AFTER) after radiation damage. For RIPAS the 'after' file must be called 'RIPA' and a keyword RIPW (default 0.6) gives the weight w to be assigned to the 'NAT' data in the estimation of the anomalous signal (a weight of 1-w is applied to the 'RIPA' data). DSCA (default 0.98) gives the factor to multiply the native data for SIR and SIRAS or the 'after' data for RIP after the data have been put on the same scale (this allows for the extra scattering power of the heavy atoms etc.); this can be critical for RIP phasing.
ASCA (default 1.0) is a scale factor applied to the anomalous signal in a MAD experiment; to apply MAD to a small molecule, ASCA and DSCA should both be between 0 and 1, the best values have to be found by trial and error. Finally SMAD (without a number) sets the dispersive term to zero in a MAD experiment, equivalent to SAD using weighted mean anomalous differences from all the MAD datasets. This should always be tried whenever radiation damage is suspected.
SHELXC also tests for and if necessary corrects the more common cases of inconsistent indexing when more than one dataset is involved. In addition, the mean value of |E^2-1| is calculated for each dataset to detect twinning.
## SHELXD
In general the critical parameters for locating heavy atoms with SHELXD are:
### Resolution cutoff (SHEL)
In the MAD case this is best determined by finding where the correlation coefficient between the signed anomalous differences for wavelengths with the highest anomalous signal (PEAK and HREM or PEAK and INFL) falls below about 30%. For SAD a less reliable guide is where the mean value of |ΔF|/σ(ΔF) falls below about 1.2 (a value of 0.8 would indicate pure noise), and for S-SAD with CuKα the data can be truncated where I/σ for the native data falls below 30. If unmerged data are used, SHELXC calculates a correlation coefficient between two randomly selected subsets of the signed anomalous differences; this is a better indicator because it does not require that the intensity esds are on an absolute scale, but it does require a reasonable redundancy and again the data can be truncated where it drops to below 30% (XDS and the CCP4 programs aimless/SCALA print a similar statistic).
### Number of sites (FIND)
The estimated number of sites (FIND) should be within about 20% of the true number. For SeMet or S-SAD phasing there should be a sharp drop in the occupancy after the last true site. For iodide soaks, a good rule of thumb is to start with a number of iodide sites equal to the number of amino-acids in the asymmetric unit divided by 15. If after SHELXD occupancy refinement the occupancy of the last site is more than 0.2 it might be worth increasing this number, and vice versa.
It should be noted that the number of sites that SHELXD will search for is 40% higher than what is asked for by the user, in FIND. The reason for this is that there are often additional minor sites arising from heavy atoms, like Cl or Ca. So if you don't adjust FIND downwards, after an initial SHELXD run, such that the Nth site in the .res file has occupancy > 0.2, then you could either edit the .res file and remove the sites with occupancy < 0.2, or run SHELXE with -hN where N is the site number which has occupancy > 0.2 .
### Disulfides (DSUL)
If the resolution d (second parameter on SHEL card) is > 2.0Å the disulfide bonds may not fully resolved, but in the range 2.8>d>2.0 the DSUL instruction may be used to fit S−S units to the density. This can dramatically improve the final phase quality. If DSUL is used, the first MIND parameter should be set to -3.5 (so that each disulfide is found once only) and disulfides should be counted as single (super-sulfur) atoms for FIND (i.e. each disulfide given in DSUL counts as two atoms for FIND).
### Minimum distance between atoms (MIND)
A common 'user error' is to set MIND -3.5 even though the distances between heavy atoms are less than 3.5 Å. For example, in a Fe4S4 cluster the Fe...Fe distance is about 2.7 Å, so MIND -2 would be appropriate. A disulfide bond has a length of 2.03 Å so then MIND -1.5 could be used to resolve the sulfur atoms, however if DSUL is used for this purpose MIND -3.5 is required.
If heavy atoms can lie on special positions (as is the case with an iodide soak in a space group with twofold axes) the rejection of atoms on special positions should be switched off by giving the second MIND parameter as -0.1 (as in the above thaumatin example).
### Interpretation of results
For MAD, a CC of 40 to 50% indicates a good solution, for SAD etc. values around 30% may well be correct, especially if the same solution or group of solutions has the highest values of CC, CC(Weak) and PATFOM, and they are well separated from the values for the non-solutions. The CC values tend to increase as the resolution is lowered. Heavy atom soaks truncated to low resolution often give spuriously high CC values, but these 'solutions' can be recognized as false by their low CC(weak) values.
In difficult cases SHELXD can be run with different SHEL instructions, e.g. truncating the data in steps of 0.1 Å, and the CC values compared. This is especially convenient if a computer farm can be used to run the jobs in parallel. If the best CC is plotted against the resolution, a local maximum (when also observed for the CC(weak) values) may indicate a correct solution.
The default weights for the CC are 1/σ(E)2. The presence of one or two reflections with very low esds can lead to unreasonably high values of the CC for wrong solutions. If the esds are unreliable it is advisable to use 'CCWT 0.1' in the .ins file for SHELXD. The precision of the heavy atom coordinates can be improved, at the cost of the CPU time, by making the Fourier grid finer (e.g. FRES 4 instead of the default 2.5).
## SHELXE
### Usage
A typical SHELXE job for SAD, MAD, SIR or SIRAS phasing could be:
shelxe xx xx_fa -s0.5 -z -a3
where xx.hkl contains native data and xx_fa.hkl, which should have been created by SHELXC or XPREP, contains FA and alpha. The heavy atoms are read from xx_fa.res, which can be generated by SHELXD or ANODE. 'xx' and 'xx_fa' may be replaced by any strings that make legal file names. If these heavy atom are present in the native structure (e.g. for sulfur-SAD but not SIRAS for an iodide soak) -h is required (or e.g. -h8 to use only the first 8). -z optimizes the substructure at the start of the phasing. -z9 limits the number of heavy atoms to 9. If -z is specified without a number, no limit is imposed. Normally the heavy atom enantiomorph is not known, so SHELXE should also be run with the -i switch to invert the heavy atoms and if necessary the space group; this writes files xx_i.phs instead of xx.phs etc., so may be run in parallel.
-a sets the number of global autotracing cycles. -n imposes NCS during tracing, e.g. -n6 for six-fold NCS or -n if the number of copies is not known.
To start from a MR model without other phase information, the PDB file from MR should be renamed xx.pda and input to SHELXE, e.g.
shelxe xx.pda -s0.5 -a20
The number of tracing cycles is usually more here to reduce model bias. -O enables local rigid group optimization of the domains defined in the .pda file. If -O and/or -o (-O acts before -o) are used to improve a model in xx.pda, the revised model is output to xx.pdo. To refine rigid group domains separately with -O, insert 'REMARK DOMAIN N' records into the .pda file to split the model into domains, where N (default 1) is the rigid group number of the following atoms (until the next 'REMARK DOMAIN N'). -ON makes N simplex trials with starting positions within a cube (edge set by -Z) around the positions in xx.pda. The first search (the only one for -O or -O1) starts from the initial position. If the MR model is large but does not fit well, -o should be included to prune it before density modification.
Tracing from an MR model requires a favorable combination of model quality, solvent content and data resolution. If e.g. SAD phase information is available, even if it is too weak for phasing on its own, the two approaches may be combined:
shelxe xx.pda xx_fa -s0.5 -a10 -h -z
The phases from the MR model are used to generate the heavy atom substructure. This is used to derive experimental phases that are then combined with the phases from the MR model (MRSAD). The -h, -O, -o and -z flags are often needed for this mode.
If approximate phases are available, SHELXE may be used to refine them and make a poly-Ala trace:
shelxe xx.zzz -s0.5 -a3
where zzz is phi (phs file format), fcf (from SHELXL) or hlc (Hendrickson-Lattman coefficients, e.g. from SHARP or BP3).
In all cases, native data are read from xx.hkl in SHELX format, and the density modified phases are output to xx.phs (or xx_i.phs if -i was set). The listing file is xx.lst (or xx_i.lst). If xx_fa.hkl is read, substructure phases are output to xx.pha (or xx_i.pha) and the revised substructure is written to xx.hat (or xx_i.hat).
### Full list of SHELXE options ( Version 2021/1; defaults in brackets)
-aN - N cycles autotracing [off]
-bX - B-value to weight anomalous map (xx.pha and xx.hat) [-b5.0]
-B1 - anti-parallel beta sheet, -B2 parallel and -B3 both [off]
-cX - fraction of pixels in crossover region [-c0.4]
-dX - truncate reflection data to X Angstroms [off]
-D - fuse disulfides before looking for NCS [off]
-eX - add missing 'free lunch' data up to X Angstroms [dmin+0.2]
-f - read F rather than intensity from native .hkl file [off]
-FX - fract. weight for phases from previous global cycle [-F0.8]
-gX - solvent gamma flipping factor [-g1.1]
-GX - threshold for accepting new peptide when tracing [-G0.7]
-h or -hN - (N) heavy atoms also present in native structure [-h0]
-i - invert space group and input (sub)structure or phases [off]
-IN - in cycle 1 only, do N cycles DM (free lunch if -e) [off]
-kX - minimum height/sigma for heavy atom sites in xx.hat [-k4.5]
-KN - keep starting fragment unchanged for N global cycles [off]
-K - keep fragment unchanged throughout
-lN - reserve space for 1000000N reflections [-l2]
-LN - minimum chain length (at least 3 chains are retained) [-L6}
-mN - N iterations of density modification per global cycle [-m20]
-n or -nN - apply N-fold NCS to traces [off]
-o or -oN - prune up to N residues to optimize CC for xx.pda [off]
-O - trace side chains [off]
-p or -pN - search for N DNA or RNA phosphates (-p = -p12) [off]
-qN - search for alpha-helices of length 6<N<15; -q sets -q7 [off]
-Q - search for 12-helix,' extended by sliding (overrides -q) [off]
-rX - FFT grid set to X times maximum indices [-r3.0]
-sX - solvent fraction [-s0.45]
-SX - radius of sphere of influence. Increase for low res [-S2.42]
-tX - time for initial searches (-t3 or more if difficult) [-t1.0]
-uX - allocable memory in MB for fragment optimization [-u500]
-UX - abort if less than X% of initial CA stay within 0.7A [-U0]
-vX - density sharpening [default set by resol., 0 if .pda read]
-wX - add experimental phases with weight X each iteration [-w0.2]
-x - diagnostics, requires PDB reference file xx.ent [off]
-yX - highest resol. in Ang. for calc. phases from xx.pda [-y1.8]
-zN - substructure optimization for a maximum of N atoms [off]
-z - substructure optimization, number of atoms not limited [off]
-t values of 3.0 or more switch to more accurate but appreciably
slower tracing algorithms, this is recommended when the resolution
is poor or the initial phase information is weak; -a10 is preferred.
In case of side chain tracing with -O, sequence will be docked
and output only once CC>30 so poly-alanine tracing scores
can be used to identify solutions as before.
Please cite: I. Uson & G.M. Sheldrick (2018), "An introduction to
experimental phasing of macromolecules illustrated by SHELX;
new autotracing features" Acta Cryst. D74, 106-116
(Open Access) if SHELXE proves useful.
Meaning of additional output when using the -x option:
MPE and wMPE are given as two numbers, the one after the '/' is for centric reflections only.
The first nine numbers in the row after locating a strand or in the 'Global chain diagnostics' are the percentages of CA within 0-0.1, 0.1-0.2, 0.2-0.3Å etc from the nearest CA in the reference structure. The tenth number is the percentage further than 0.9Å from the nearest CA.
The next number is 100 times the number of CA found divided by the number expected for the whole structure. The last number is the mean distance of a CA atom from the nearest CA in the reference structure, whereby distances greater than 2.5Å are replaced by 2.5. One should always look at the second number from the right; for a good trace it should be as low as possible. If you are expanding from a MR solution the program also tells you the percentages of starting atoms retained.
### Phasing and density modification
SHELXE normally requires a few command line switches, e.g.
shelxe xx yy -m20 -s0.45 -h8 -b
would do 20 cycles density modification with a solvent content of 0.45, phasing from the first 8 heavy atoms in the yy.res file from SHELXD assuming that they are also present in the native structure (-h8), and then use the modified density to generate improved heavy atoms (-b). The switch -i may be added to invert the substructure (and if necessary the space group), this writes xx_i.phs instead of xx.phs etc., and so may be run in parallel.
A big difference in the contrast between the two heavy-atom enantiomorphs usually indicates a good SHELXE solution. However in the case of SIR, both have the same contrast but one gives the inverted protein structure. The contrast is also the same for both if the heavy-atom substructure is centrosymmetric (there is a server to find out). In the case of SAD both heavy atom enantiomers then give the correct structure, for SIR the result is an uninterpretable double image.
The pseudo-free correlation coefficient (based on the comparison of Eo and Ec for 10% of the data left out at random in the calculation of a map that is then density modified and Fourier back-transformed in the usual way) is now printed out before every Nth cycle (set by -j, the default is -j5); a value above 70% usually indicates an interpretable map. The pseudo-free CC (which is also reported in the hkl2map plot of contrast against cycle number) is also a good indication as to when the phase refinement has converged.
The solvent content (-s) is by far the most critical parameter for SHELXE, and it is often worth varying it in steps of about 0.05 to maximize the difference in contrast between the two enantiomorphs and the 'pseudo-free CC' (another application for a computer farm!). Usually the optimal solvent content is higher than the calculated value at low resolution (disordered side-chains?) and lower at high resolution (ordered solvent?). Sometimes it is necessary to use many (several hundred) cycles (-m) if the starting phase information is weak but the resolution is very high. For low resolution data, the use of more than 20 refinement cycles is normally counter-productive. The current values of all parameters are output at the start of the SHELXE output, the default values of other parameters will rarely need changing. The -b switch in SHELXE causes updated heavy atom positions to be written to the file name.hat (or name_i.hat). This file can be copied or renamed to the .res file (which should be saved first!) and used to recycle the heavy atoms. The graphics program Coot should be able to deduce the space group name from the symmetry operators in this file, and so a very convenient way to obtain a map after running SHELXE is to start Coot, read in 'coordinates' from the .hat or _i.hat file, and then input the phases from the .phs or _i.phs files and the phases of the heavy atom substructure from the .pha or _i.pha files. It is normally necessary to increase the σ level of the latter map (by hitting '+' several times). This procedure even works correctly when the space group has been inverted by SHELXE!
Good quality MAD data, a high solvent content and/or high resolution for the native data can lead to maps of high quality that can be autotraced (e.g. with wARP) immediately. The .phs files contain h, k, l, F, fom, φ and σ(F) in free format and can be read directly into Coot or converted to CCP4 .mtz format using f2mtz, e.g. for further density modification exploiting NCS using the CCP4 program Pirate. Note that if the inverted heavy atom enantiomorph is the correct one, the corresponding phases are in the *_i.phs file and SHELXE may have inverted the space group (e.g. P41 to P43), which should be taken into account when moving to other programs!
A writeup for a case study, by GMS, as of Jan 13, 2013, is at [1].
### The free lunch algorithm (FLA)
The switch -e may be used to extrapolate the data to the specified resolution (the free lunch algorithm), based closely on work by the Bari group (Caliandro et al., Acta Crystallogr. (2005) D61, 556-565) and independently implemented in the program Acorn (Yao et al., (2005) Acta Crystallogr. D61, 1465-1475): -e1.0 can produce spectacular results when applied to data collected to 1.6 to 2.0 Å, but since a large number of cycles is required (-m400) and the 'contrast' and 'connectivity' become unreliable (the pseudo-free CC is the only reliable map quality indicator when the FLA is used), it may be best to establish the substructure enantiomorph and solvent content without -e first. The default setting when -e is not specified is to fill in missing low and medium resolution data but not to extrapolate to higher resolution than actually measured (to switch off this filling in, use -e999). The resolution requirements for the FLA still need to be explored, but so far there have been no reports of it causing a deterioration in map quality, and in a few cases the mean phase error was reduced by as much as 30º relative to density modification without it.
### How to find out if a molecular replacement solution is correct or wrong
From a November 2011 posting of George Sheldrick on CCP4BB: We have unintentionally discovered a very simple way of telling whether an MR solution is correct or not, provided that (as in this case) native data have been measured to about 2.1A or better. This uses the current beta-test of SHELXE that does autotracing (available on email request).
First rename the PDB file from MR to name.pda and generate a SHELX format file name.hkl, e.g. using Tim Gruene's mtz2hkl, where 'name' may be chosen freely but should be the same for both input files. Then run SHELXE with a large number of autotracing cycles (here 50), e.g.
shelxe name.pda -a50 -s0.5 -y2
-s sets the solvent content and -y a resolution limit for generating starting phases. If the .hkl file contains F rather than intensity the -f switch is also required.
If the model is wrong the CC value for the trace will gradually decrease as the model disintegrates. If the model is good the CC will increase, and if it reaches 30% or better the structure is solved. In cases with a poor but not entirely wrong starting fragment, the CC may vary erratically for 10-30 cycles before it locks in to the correct solution and the CC increases over three or four cycles to the value for a solved structure (25 to 50%). The solution with the best CC is written to name.pdb and its phases to name.phs for input to e.g. Coot.
### How to tell SHELXE about NCS in a molecular replacement solution PDB file
(communicated by Isabel Uson) Insert a line
REMARK 299 NCS GROUP BEGIN
before the ATOM (or HETATM) lines of each NCS group (e.g. chain), and insert the line
REMARK 299 NCS GROUP END
after the last of these. The -n option is not needed then. The output of SHELXE should tell you about the fact that it understood the NCS specification.
## RIP with SHELXC/D/E
RIP (radiation damage induced phasing) can be regarded as a sort of isomorphous replacement where the 'after' dataset has lost a few atoms that are particularly susceptible to radiation damage. In fact, many structures have been solved unintentionally with a helping hand from RIP! In a MAD experiment, provided that the 'inflection point' dataset is collected last from the same crystal, the radiation damage has the effect of making f' for the MAD element at this wavelength even more negative than usual, enhancing the dispersive part of the MAD signal. This is especially true of bromine MAD on bromouracil derivatives, because the radiation near the bromine absorption edge appears to be particularly effective at breaking the bromine-carbon bonds irreversibly. Of course if the inflection data are collected first the RIP and dispersive component of the MAD signal will tend to cancel one another, causing the MAD analysis to fail, although SAD may still be able to solve the structure (also a common scenario).
RIP (without using anomalous scattering) or RIPAS (like SIRAS, assuming that the anomalous atoms are also those most sensitive to radiation damage) can be capable of solving difficult structures. A typical procedure on a third generation synchrotron beamline is to collect the 'before' dataset with an attenuator in the beam, then to fry the crystal for a couple of minutes with the unattenuated beam, and finally to collect an 'after' dataset with the attenuator in. In the SHELXC instructions, the 'before' data are called 'NAT' or 'BEFORE' and the 'after' data are called 'RIP' or 'AFTER'. The critical parameter is the scale factor applied to the 'after' data after both datasets have been brought onto a common scale. This is set by the SHELXC instruction 'DSCA' and should usually be in the range 0.9 to 1.05. This scale factor may also be used for SIR and SIRAS, where it is applied to the native data, but it appears to be less critical than for RIP. For RIPAS, the 'after' data should be called 'RIPA' and the 'RIPW' instruction specifies the weight w (default 0.6) for the anomalous contribution from the 'before' dataset (a weight 1–w is applied to the 'after' data).
In RIP or RIPAS phase determination is usually necessary to recycle the 'heavy atom' sites by renaming the output .hat (or _i.hat) file as .res and rerunning SHELXE. It is advisable to edit this file so as to retain the stronger negative sites, these may well correspond to the new positions of displaced atoms. SHELXE can read negative occupancies but SHELXD can only search for positive atoms. SHELXE inserts HKLF 4 and END before the first negative peak when writing the revised substructure to the .hat file. Normally this is a good way of finding where the noise begins, but for RIP if you want to recycle the negative peaks these lines should be removed.
It should be noted that in a pure RIP experiment, both hands of the radiation damage substructure will give the same figures of merit, but one will lead to an electron density map that is a mirror image of the true map (the helices will go the wrong way round).
## Examples
### RIP
shelxc jia <<EOF
BEFORE jia_nat.hkl
AFTER jia_burnt.sca
CELL 96.00 120.00 166.13 90 90 90
SPAG C2221
FIND 8
DSCA 0.97
NTRY 1000
EOF
shelxd jia_fa
shelxe jia jia_fa -h -s0.6 -m20 -b
shelxe jia jia_fa -h -s0.6 -m20 -b -i
The critical point for RIP is that you have to try many (about 100) different DSCA values in the range 0.9 to 1.05. The DSCA value that results in the highest CCweak should be chosen.
The -h option is included for SHELXE because the native has heavy atoms. Recycling of the positive and difference peaks produced by –b is normally necessary (rename jia.hat or jia_i.hat to jia_fa.res).
shelxc jia <<EOF
NAT jia_nat.hkl
HREM jia_hrem.sca
PEAK jia_peak.sca
INFL jia_infl.sca
LREM jia_lrem.sca
CELL 96.00 120.00 166.13 90 90 90
SPAG C2221
FIND 8
NTRY 10
EOF
shelxd jia_fa
shelxe jia jia_fa -s0.6 -m20
shelxe jia jia_fa -s0.6 -m20 -i
In this example (kindly donated by Zbigniew Dauter; Li et al., Nature Struct. Biol. 7 (2000) 555-559), Se-Met MAD data at four wavelengths are used to calculated the FA-values and phase shifts that are written to the file jia_fa.hkl. The native (S-Met) data are read from jia_nat.hkl and written to jia.hkl. The file jia_fa.ins is prepared using the given cell, space group, FIND and NTRY instructions as well as a suitable SHEL command to truncate the resolution. SHELXD then searches for 8 (FIND) selenium atoms using 10 attempts (NTRY), and SHELXE is run for 20 cycles (-m) of density modification for both heavy atom enantiomorphs (-i inverts) with a solvent content (-s) of 0.6. The protein phases are written to jia.phs and jia_i.phs resp. If NAT is not specified, SHELXC would analyze the four MAD datasets to generate the (SeMet) native data jia.hkl, in which case -h should be specified for SHELXE since the selenium atoms are present in the ‘native’ structure. For MAD at least two wavelengths are required, at least one of which should be PEAK or INFL.
If the MAD experiment fails, one should insert the line 'SMAD' somewhere in the SHELXC input instructions and run the job again. This makes a MAD experiment into a SAD experiment in which a suitably weighted mean of the anomalous differences is employed and the dispersive differences are ignored. If the CC values in SHELXD come out better, this SAD approach is likely to give a better solution, but it may be then worth trying commenting out one or more of the PEAK, INFL, HREM and LREM commands to see if there is a further improvement (if just one remains, it should be renamed SAD).
This example of thaumatin phasing by means of the native sulfur anomalous signal (Debreczeni et al., Acta Crystallogr. D59 (2003) 688-696) uses 1.55 Å in-house CuKα data:
shelxc thau <<EOF
CELL 58.036 58.036 151.29 90 90 90
SPAG P41212
FIND 9
DSUL 8
MIND –3.5
NTRY 100
EOF
shelxd thau_fa
shelxe thau thau_fa -h -s0.5 -m20
shelxe thau thau_fa -h -s0.5 -m20 –i
The anomalous differences are extracted from the native data so only one data file is required. The sites specified by FIND consist of one methionine and 8 super-sulfurs, which are then resolved into disulfides using the DSUL instruction that is passed on to SHELXD (Debreczeni et al., Acta Crystallogr. D59 (2003) 2125-2132). Alternatively one could try to find the individual sulfurs with:
SHEL 999 2.0
FIND 17
MIND –1.7
Here the resolution cutoff has been reduced from 2.1 Å (which SHELXC would have suggested) to 2.0 Å to improve the chances of resolving the sulfurs. The SHEL, FIND, MIND and NTRY instructions are transferred to the file thau_fa.ins for the sulfur atom location with SHELXD. Note that the phases can be improved further in this case by using more SHELXE cycles than the usual 20.
shelxe exp1 exp1_fa -a -q -h -s0.6 -m20 -b
will use exp1.hkl, exp1_fa.hkl, exp1.ins (as above) and will try 3 cycles of backbone building.
### SIRAS
This involves the solution of the thaumatin structure using the above 1.55 Å data as native and 2.0 Å CuKα data from a quick iodide soak. SIRAS usually gives the best results for iodide soaks, but it is also possible in this case to use SIR (change ‘SIRA’ to ‘SIR’) or iodine SAD (change ‘SIRA’ to ‘SAD’).
shelxc thaui <<EOF
NAT thau-nat.hkl
SIRA thau-iod.hkl
CELL 58.036 58.036 151.29 90 90 90
SPAG P41212
FIND 17
NTRY 10
MIND –3.5 –0.1
EOF
shelxd thaui_fa
shelxe thaui thaui_fa -s0.5 –m20
shelxe thaui thaui_fa -s0.5 –m20 -i
## Obtaining the SHELX programs
SHELXC/D/E are distributed with CCP4.
The programs and test data may also be downloaded from the SHELX fileserver. First fill the application form at http://shelx.uni-goettingen.de/register.php Password and downloading instructions will then be emailed to the address given on the form. The programs are free to academics but a small license fee is required for 'for-profit' use.
Beta-test versions are also available from time to time. They are announced by George Sheldrick and are available from the beta-test directory. The username and password for accessing these may be obtained from GS.
hkl2map can be downloaded from a website at EMBL Hamburg. XDSGUI can be downloaded from its XDSwiki article.
## References
If these programs prove useful, you may wish to cite (and read!):
Sheldrick, G.M. (2008). "A short history of SHELX", Acta Crystallogr. D64, 112-122 [Standard reference for all SHELX* programs].
Sheldrick, G.M., Hauptman, H.A., Weeks, C.M., Miller, R. & Usón, I. (2001). "Ab initio phasing". In International Tables for Crystallography, Vol. F, Eds. Rossmann, M.G. & Arnold, E., IUCr and Kluwer Academic Publishers, Dordrecht pp. 333-351 [Full background to the dual-space recycling used in SHELXD].
Schneider, T.R. & Sheldrick, G.M. (2002). "Substructure Solution with SHELXD", Acta Crystallogr. D58, 1772-1779 [Heavy atom location with SHELXD].
Sheldrick, G.M. (2002), "Macromolecular phasing with SHELXE", Z. Kristallogr. 217, 644-650
Nanao, M.H., Sheldrick, G.M. & Ravelli, R.B.G. (2005). "Improving radiation-damage substructures for RIP", Acta Crystallogr. D61, 1227-1237 [Practical details of RIP phasing with SHELXC/D/E].
Uson, I., Stevenson, C.E.M., Lawson, D.M. & Sheldrick, G.M. (2007). "Structure determination of the O-methyltransferase NovP using the `free lunch algorithm' as implemented in SHELXE", Acta Crystallogr. D63, 1069-1074 [Implementation of the FLA in SHELXE].
|
2022-11-29 17:36:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5490425825119019, "perplexity": 4889.612834848877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00228.warc.gz"}
|
https://physics.stackexchange.com/questions/181120/2nd-law-of-thermodynamics-thought-experiment
|
# 2nd law of thermodynamics - thought experiment
I have designed this simple thought experiment that seems to contradict 2nd law of thermodynamics. Could you please find a mistake in my reasoning?
Fixed box with reflective (white) walls
________ ___________
| | |
| | |
| | |
|_______ | __________|
^
This part is free to move horizontally and is black on the left and white on the right side
In the right half of the box we don't have any photons since box is completely white inside and the moving part is white on its right side. The left part however is filled with bouncing photons as they are emitted by the left side of the movable wall which is black.
According to my reasoning the moving part should move to the right because of the radiation pressure. Photons will lose energy due to doppler effect.
Since every part of the device is at the same temperature movement of the wall inside would contradict 2nd rule of thermodynamics.
Whole device is in the vacuum box which in turn is placed on a table in a laboratory.
• I think you have misunderstood the idea of a "blackbody". An ideal blackbody (which is approximated best by black objects) emits photons with a blackbody spectrum. However, that doesn't mean that white or red or blue objects don't emit any photons at all! White objects above absolute zero absolutely do emit photons, just with a different, possibly more complicated, spectrum. If a white object and a black object are in thermal equilibrium, they will also emit the same average power in photons. – Brionius May 4 '15 at 22:17
• @Brionius Therefore why Stefan-Boltzmann’s law states that bodies with emissivity equal to 0 don't emit any energy? – user1354439 May 4 '15 at 22:20
• That's forbidden by the third law. :-) – CuriousOne May 4 '15 at 22:22
• @user1354439 I should have said "emit or reflect the same average power" – Brionius May 4 '15 at 23:48
There is also, a gas of photons in the right side.
It was trapped there when you assembled your box, and since you are assuming a perfectly zero emissivity, these photons must be perfectly reflected from all surfaces. That means they are blue shifted if the wall moves toward the right and red-shifted if the wall moves toward the left.
Result:
• If you assembled the apparatus at the test temperature then it was and remains in equilibrium without motion.
• If you assembled it at a different temperature then your experiment is equivalent to heating or cooling one side while the other is adiabatic. This case includes all attempts to exclude the photon gas from the right-hand side.
What happens is exactly what you expect: as the photon gas on the left warms (cools) the wall moves to the right (left) causing the gas on the right to warm (cool) until equilibrium is re-established.
Thermodynamics for the win.
To compute the equilibrium position for the wall, you'll need several different equations of state: (a) that for the gas on the right of a fixed number of always-reflected photons and (b) for the gas on the left of photons in thermal contact with a blackbody (the wall) whose number will vary and one for the wall (well, at least you'll need the heat capacity).
• That makes sense. Shame on me, I did not think about it :) Thank you for this clear and exhaustive explanation – user1354439 May 4 '15 at 23:24
• @user1354439 I'm not sure that it is entirely obvious. I had to think for a while before I realized that there had to be a trapped photon gas, because for more ordinary emissivities the original photons would be quickly absorbed. – dmckee --- ex-moderator kitten May 5 '15 at 0:37
|
2021-01-20 19:26:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.539779007434845, "perplexity": 425.62694412499195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521987.71/warc/CC-MAIN-20210120182259-20210120212259-00577.warc.gz"}
|
https://www.sanfoundry.com/energy-engineering-questions-answers-velocity-power-wind/
|
# Energy Engineering Questions and Answers – Velocity and Power from Wind
«
»
This set of Energy Engineering Multiple Choice Questions & Answers (MCQs) focuses on “Velocity and Power from Wind”.
1. Select the formula for total power pt?
a) Pt = $$\frac{1}{2gc}$$ ρAVi3
b) Pt = ρAVi3D3
c) Pt = $$\frac{1}{2gc}$$ Vi3D3
d) Pt = $$\frac{2gc}{Vi^3}$$
Explanation:
2. Why blade velocity of wind turbine varies?
a) Due to varying wind speeds
c) Due to the height of mount
d) Because of hotness of Sun
Explanation: Wind turbine experiences change in velocity dependent upon the blade inlet angle and the blade velocity. Since the blades are long, the blade velocity varies with the radius to a greater degree than steam or gas-turbine blades and the blades are therefore twisted.
3. When was the Halladay wind mill introduced?
a) 1920
b) 1923
c) 1854
d) 1864
Explanation: Invented by Daniel Halladay in 1854, the Halladay Standard was the first commercially successful self-governing windmill in 1854 was the firms of Halladay, McCray & Co., Ellington, Conn. Partners in the company were inventor Daniel Halladay, John Burnham and Henry McCray.
Sanfoundry Certification Contest of the Month is Live. 100+ Subjects. Participate Now!
4. How much ideal efficiency should practical turbine have?
a) 10 – 12%
b) 18 – 25%
c) 80 – 90%
d) 50 – 70%
Explanation: As wind turbine wheel cannot be completely closed, and because of spillage and other effects, practical turbines have 50 to 70% of the ideal efficiency. The real efficiency η is the product of this and ηmax and is the ratio of an actual to total power.
P = ηPtot.
5. How many types are acting on propeller type wind mill?
a) 2
b) 3
c) 4
d) 5
Explanation: There are two types of forces operating on the blades of a propeller type wind turbine. They are the circumferential forces in the direction of wheel rotation that provide the torque and the axial forces in the direction of the wind stream that provide an axial thrust that must be counteracted by proper mechanical design.
6. Calculate the air density, when 10m/s wind is at 1std atmospheric pressure and 15oC?
a) 1.226 kg/m3
b) 1.033 kg/m3
c) 2.108 kg/m3
d) 0.922 kg/m3
Explanation: For air, gas constant R = 287 J/kgK, 1atm = 1.01325 X 105 Pa
Air density, ρ = P/RT = (1.01325 ×105)/(287(15+273.15)) = 1.226 kg/m3.
7. Calculate the air density when 18m/s wind is at 1std atmospheric pressure and 34oC?
a) 1.149 kg/m3
b) 1.9 kg/m3
c) 2.88 kg/m3
d) 5.89 kg/m3
Explanation: For air, gas constant R = 287 J/kgK, 1atm = 1.01325 X 105 Pa
Air density, ρ = P/RT = (1.01325 × 105)/(287(34+273.15)) = 1.149 kg/m3.
8. What is the total power produced if the turbine diameter is 120m?
a) 0.277 KW
b) 1.224 KW
c) 4.28 KW
d) 0.89 KW
Explanation: Total power P,
P = 0.245 X (πD2/4)
= 0.245 X (π (120)2/4)
= 0.277 KW.
9. What is the total power produced if the turbine diameter is 90m?
a) 0.155KW
b) 0.982 KW
c) 1.452 KW
d) 3.12 KW
Explanation: Total power P,
P = 0.245 X (πD2/4)
= 0.245 X (π (90)2/4)
= 0.155KW.
|
2023-03-21 04:10:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6529791355133057, "perplexity": 8271.066352517371}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00570.warc.gz"}
|
http://www.studiofratini.com/best-adobe-photoshop-books/
|
AMMINISTRAZIONI IMMOBILIARI E CONDOMINIALI
* _Photoshop._ www.photoshop.com
* _Capture Pro._ www.capture.com/products
Figure 4.2 illustrates an example of a raster image.
1. **Open a JPEG image (for example, 2007.jpg) and select Save for Web and Devices from the File menu**.
You see the familiar Save for Web & Devices dialog box.
2. **Make sure the Format is set to JPEG**.
The JPEG image is now in a browser-readable format. You can view it in a browser without compression, saving image quality.
3. **Click Save**.
Figure 4.2. A JPEG is a raster-based file that can be viewed and opened in a browser without being compressed.
4. **Right-click (Control-click on a Mac) the image and choose Open**.
The image appears in Photoshop.
Photoshop’s file system has its own folders, and you probably already know where those are. You can also insert a JPEG image from another folder into a web browser simply by dragging and dropping.
Adobe Photoshop Elements is quite buggy and has problems closing programs and saving images. Most of the glitches are frequent and irritating. Elements is lacking in many features compared to Photoshop.
Adobe Photoshop Elements is a good option for students and people looking for quality graphics images. Once you understand the process of editing images in Elements you’ll know more about how to edit better in Photoshop.
The best features of Elements
Easy to use interface
Most of the tools are right on the toolbar, easy to use and modify. There are 20 tools in the toolbar, 5 shortcut keys, and 5 menus.
This is a good start if you are new to Elements and want to start with creating a complete image.
You’ll have better results using Elements than using Photoshop.
You can avoid using Adobe Lightroom and Photoshop at the same time. Elements, being free, allows you to handle raw images. Elements will allow you to do the same things that Photoshop will allow if you pay for the license.
Adobe Photoshop Elements provides most of the editing tools used by professional photographers.
Lack of many tools
Elements has some tools that Photoshop does not provide. In Elements, you’ll have 20 tools, 4 panels, 2 sliders, 8 menus, 7 keyboard shortcuts, and 2 bottoms.
Elements has only one filmstrip, no curves panel and no photomerge.
Elements has many bugs, so if you use it to do a great editing you’ll most likely crash and lose your work.
Elements does not have a filmstrip, contrast, motion, brightness, layers, healing options, and many other features of Photoshop.
Elements does not have some of the advanced filters available in Photoshop.
Elements does not have a manual-cutout selection, transform paths, and calligraphic tools. These are some of the features that make editing an image in Photoshop so easy.
You cannot save an image in Elements while editing it.
Elements has some bugs, like crashing, freezing and losing data.
The bugs will not affect your creations but the workflow.
Elements seems like it has a good file-save button, but it does not. The saving functionality is a very particular matter in Elements. You cannot save the file in the same place where you created the file, Elements will save it in a folder that you may not know.
Using
05a79cecff
Morningstar’s inclusion of variable investing market neutral ETFs – which could cause issues for indices of funds of funds investors – is the catalyst to incite a range of reactions in the industry, analysts said.
“Morningstar’s variable ETFs are designed to change each month – sometimes by big amounts,” said Eric Halperin, portfolio manager at BlackRock. Halperin noted that Morningstar did not explicitly state what index the fund would track.
“Mornstar’s variable ETFs, like their fixed-ETFs, are designed to be more volatile than a weighted fund, so investors should exercise caution with funds that they do a full month and month out,” Halperin said. “Variable ETFs can be very volatile, and are not suitable for index investors.”
It is understood that Morningstar’s plans have met the disapproval of CIO Kevin McPartland who believes that the “herding” of investors is exactly what regulators fear.
“For index managers, using variable investment strategies, such as variable ETFs, can be problematic for two reasons. First, index investors don’t like herding and being forced to change their investment portfolio. Second, this can increase the volatility of a portfolio. Given these factors, we have not used variable investment strategies for any of our ETFs,” said McPartland. “The indexer community has had a long and positive relationship with Morningstar as a firm that provides access to index data to investors around the world.”
The reaction from Morningstar is the latest in a string of negative headlines.
The company’s decision to stop allowing ‘black box’ start-up firms – like BlackRock’s iShares and others – to list their products on the New York Stock Exchange from April comes on the heels of a rocky IPO of Canadian index provider Horizons ETFs.
Morningstar’s decision followed a decision by the UK to cut ETF listing fees in March.
“We understand why. It doesn’t always work out,” said Michael Gallagher, head of index and ETF products at Glass Capital.
“But I would still think about who’s better to work with,” he said.
Glass Capital has started advising institutions to ask for an explanation from a provider and to get all of the details they have before signing a contract.Q:
Converting Date and Timestamp to String formating
I’m setting a custom view for a calender in which I set dates as follows :
I’ve
Q:
Small query regarding the proof of Bolzano’s Theorem
I am reading the proof for Bolzano’s theorem in the book of Halmos’ “Measure and Integral: A Mathematical Intoduction”.
The proof has two steps. The first is to show that $\inf {n \in \mathbb{N}: A_n eq \emptyset} = 0$, and the second is to show that $\bigcup\limits_{n=1}^{\infty}A_n = \mathbb{R}$.
At step one it is written that if $S$ is a countable set, then the set of non-empty finite subsets of $S$ has the property that any finite subset of the set is non-empty, and any infinite subset of the set has an infinite subset with an infinite subset. So here we take a set $S$ to be $\mathbb{N}$, so the claim that $\inf A_n = 0$ follows immediately.
I don’t get the second step. The set $S$ is defined to be $\mathbb{N}$, but then $S$ is also a set of elements of $\mathbb{N}$. So if $X \subseteq \mathbb{N}$, then $X$ has a minimum element, and if $X \subseteq \mathbb{N}$, then it has a supremum element. Why are we able to assume that $S$ is countable, and then show that every $x \in \mathbb{R}$ is in the set of real numbers from the countable set?
A:
If $S$ is countable, then it is a countable union of finite sets, so it is countable. This means that $S\subseteq\mathbb{R}\setminus\{0\}$, or in other words, the only elements of $S$ are positive. In particular, the set of all positive elements of $\mathbb{N}$ is not empty.
Playing Cards Loser Special
The playing cards loser special is the weakest card in the poker deck. It is reserved for minnows in the game of poker, like yourself. You don’t even have to have the worst hand to get the loser card.
With the loser card, every hand must go according to the
|
2022-11-27 11:53:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25386470556259155, "perplexity": 1631.23660236932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710237.57/warc/CC-MAIN-20221127105736-20221127135736-00076.warc.gz"}
|
http://tex.stackexchange.com/questions/58848/ap%c3%a9ndices-appendix-spanish-accent?answertab=active
|
# Apéndices (Appendix spanish accent)
Dear friends: there is a problem with appendix and ápendice.
I have a scrbook document and I use appendix package,
\usepackage[title, titletoc]{appendix}
because I like the word "Apéndice" in the toc
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
However, when I compile with latex there is an error in file.toc (maybe in file.aux before). The reason is that in the file.aux there is a line like this:
\@writefile{toc}{\contentsline {chapter}{Ap\IeC {\'e}ndice \numberline {A}Sobre el \LaTeX }{27}{ApÈndice.1.Alph1}}
and in the second compilation the character "È" produces and error. If I compile with lualatex I have not any error.
I have tried to modify \indexname, \@chapapp but I can not find any solution.
Yes, that is a solution. I am going to clarify the problem with a MWE. In my opinion the problem is motivated by UTF-8 codify and inputenc package, as it is possible to see with this code:
\documentclass[paper=a4,10pt, twoside]{scrbook}%
\usepackage{etoolbox}
\usepackage{tgheros} % (Una versión libre de Helvetica)
\renewcommand{\sfdefault}{qhv} %qhv
%:Fuente para todo lo demás (Una versión libre de Times)
\renewcommand{\rmdefault}{qtm}
\usepackage{textcomp}
\normalfont
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc} %para LaTex
%:Paquetes relacionados con el idioma y codificación
\usepackage[english, spanish, es-nosectiondot, es-noindentfirst, es-nolists, activeacute]{babel}
\spanishdecimal{.}
%:Para poner las referencias y los nombres de figuras y tablas en castellano
\addto\captionsspanish
{%
\def\figurename{Figura}%
\def\tablename{Tabla}%
\def\listfigurename{Índice de Figuras}
\def\listtablename{Índice de Tablas}
\def\contentsname{Índice}%
\def\chaptername{Capítulo}
\def\Agradecimientos{Agradecimientos}
\def\Prefacename{Prefacio}%
\def\refname{Referencias}%
\def\abstractname{Resumen}%
\def\bibname{{Bibliografía}}
\def\appendixname{Apéndice}
\def\miapendice{\appendixname}
\def\glossaryname{Glosario}%
\def\indexname{{Índice Alfabético}}
}
%:Para definir APÉNDICES
\usepackage[title, titletoc]{appendix}
\usepackage{hyperref}
\begin{document}
\cleardoublepage
\phantomsection
\tableofcontents
%:Empieza el contenido del libro
\mainmatter
\chapter{Primer capítulo}
\section{Primera sección, primer capítulo}
\chapter{Segundo capítulo}
\section{Primera sección, segundo capítulo}
\begin{appendices}
\chapter{Primer apéndice}
\section{Primera sección, primer apéndice}
\chapter{Segundo apéndice}
\section{Primera sección, segundo apéndice}
\end{appendices}
\end{document}
If you compile, you obtain this error message:
Package inputenc Error: Unicode char \u8:énd not set up for use with La- TeXSee the inputenc package documentation for explanation.Your command was ignored.Type I to replace it with another command,or to continue without it.ice.alph1.Alph11ice.alph1.Alph20P
But the solution proposed it is perfect. Thank you
-
Hi Javier, Welcome to TeX.SE! I edited your code using the {} and backticks- it just makes it a bit easier to distinguish code from text. It would be best if you could post a complete MWE to help those who might be able to work on your problem. Welcome to the group! – cmhughes Jun 6 '12 at 23:56
Please edit your question and add a minimal, yet complete, version of your document allowing us to reproduce the problem. – Gonzalo Medina Jun 7 '12 at 0:06
Javier: I could answer just because this reminded me about another question. Please, always try to present a minimal complete example; I add one to my answer so you can adjust it if something doesn't work as advertised. – egreg Jun 7 '12 at 8:31
## 1 Answer
This seems very similar to How to make appendix and hyperref packages work together with cyrillic (non ASCII) characters? and indeed a part of that answer should make your day.
Add the following after loading appendix (it's needed only when hyperref is used)
\usepackage{etoolbox}
\makeatletter
\appto{\appendices}{\def\Hy@chapapp{Appendix}}
\makeatother
For some reasons, hyperref defines \Hy@chapapp as \appendixname, which in turn becomes (with Spanish babel and appendix) \spanishappendixname that again becomes Ap\'{e}ndice. However, it seems that \Hy@chapapp is used only for building the anchor name for hyperlinks, so it should be immaterial what string is used, as long as it is unique. Since the expansion of \Hy@chapapp is subject to complete expansion, it's better for it to be composed by characters only.
Here's a minimal document
\documentclass{scrbook}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[spanish]{babel}
\usepackage[title, titletoc]{appendix}
\usepackage{etoolbox} % we load it for the workaround
\usepackage{hyperref}
% workaround
\makeatletter
\appto{\appendices}{\def\Hy@chapapp{Appendix}}
\makeatother
\begin{document}
\frontmatter
\tableofcontents
\mainmatter
\begin{appendices}
\chapter{Sobre el \LaTeX}
\end{appendices}
\end{document}
-
|
2014-11-24 03:57:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8528741598129272, "perplexity": 7059.4149623989815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380358.68/warc/CC-MAIN-20141119123300-00259-ip-10-235-23-156.ec2.internal.warc.gz"}
|
http://nyjm.albany.edu/j/2017/23-67.html
|
New York Journal of Mathematics Volume 23 (2017) 1447-1529
Volume 23 Volume Index
Strong approximation theorem for absolutely integral varieties over PSC Galois extensions of global fields
view print
Published: October 19, 2017 Keywords: PAC field, strong approximation theorem, stabilizing element, Picard group Subject: 12E30
Abstract
Let K be a global field, V a proper subset of the set of all primes of K, S a finite subset of V, and \tilde K (resp. Ksep) a fixed algebraic (resp. separable algebraic) closure of K. Let Gal(K)=Gal(Ksep/K) be the absolute Galois group of K. For each p∈V we choose a Henselian (respectively, a real or algebraic) closure Kp of K at p in \tilde K if p is nonarchimedean (respectively, archimedean). Then,
Ktot,S=\capp∈S\capτ∈Gal(K)Kpτ
is the maximal Galois extension of K in Ksep in which each p∈S totally splits. For each p∈V we choose a p-adic absolute value | |p of Kp and extend it in the unique possible way to \tilde K.
For σ=(σ1,...,σe)∈Gal(K)e let Ktot,S[σ] be the maximal Galois extension of K in Ktot,S fixed by σ1,...,σe. Then, for almost all σ∈Gal(K)e (with respect to the Haar measure), the field Ktot,S[σ] satisfies the following local-global principle:
Let V be an absolutely integral affine variety in AKn. Suppose that for each p∈S there exists zp∈ Vsimp(Kp) and for each p∈V\S there exists zp∈ V(\tilde K) such that in both cases |zp|p≦1 if p is nonarchimedean and |zp|p<1 if p is archimedean. Then, there exists z∈ V(Ktot,S[σ]) such that for all p∈V and for all τ∈Gal(K) we have: |zτ|p≦1 if p is archimedean and |zτ|p<1 if p is nonarchimedean.
Author information
Wulf-Dieter Geyer:
Department of Mathematics, Universität Erlangen-Nürnberg, Erlangen, Germany
geyer@mi.uni-erlangen.de
Moshe Jarden:
School of Mathematics, Tel Aviv University, Tel Aviv, Israel
jarden@post.tau.ac.il
Aharon Razon:
Elta, Ashdod, Israel
razona@elta.co.il
|
2018-10-17 14:25:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9131972193717957, "perplexity": 3297.7421644861956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511175.9/warc/CC-MAIN-20181017132258-20181017153758-00479.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-1-foundations-for-algebra-get-ready-page-1/16
|
## Algebra 1
We divide the numerator and the denominator by 10, giving us: $\frac{7}{10}=\frac{0.7}{1}=0.7$
|
2018-10-19 14:28:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.954136848449707, "perplexity": 436.9465057067796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512400.59/warc/CC-MAIN-20181019124748-20181019150248-00470.warc.gz"}
|
https://proofwiki.org/wiki/Quotient_Group_is_Group/Corollary
|
# Quotient Group is Group/Corollary
## Corollary to Quotient Group is Group
Let $G$ be a group.
Let $N$ be a normal subgroup of $G$.
If $G$ is finite, then:
$\index G N = \order {G / N}$
## Proof
From Quotient Group is Group, $G / N$ is a group.
From Lagrange's Theorem, we have:
$\index G N = \dfrac {\order G} {\order N}$
From the definition of quotient group:
$\order {G / N} = \dfrac {\order G} {\order N}$
Hence the result.
$\blacksquare$
|
2019-12-05 18:15:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9759288430213928, "perplexity": 635.5256650683518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481281.1/warc/CC-MAIN-20191205164243-20191205192243-00244.warc.gz"}
|
http://asc.harvard.edu/chips/faq/axis.limits.html
|
## How do I change the limits of an axis?
The limits command is used to change the range displayed by one or both axes. It can be used to select the minimum values that will include all the data (the AUTO setting) or explicit values can be given. Examples of its use are:
chips> clear()
chips> add_curve([0.5, 1.5, 3], [1, 2, 3])
chips> add_curve([8, 9, 15], [12, 15, 8])
chips> limits(X_AXIS, 1, 10)
chips> limits(X_AXIS, AUTO, AUTO)
chips> limits(X_AXIS, 1, AUTO)
which will change the current X axis to display the range 1 to 10, the range covered by the data in the plot (so 9,5 to 15), and have a minimum of one with the maximum set to the data maximum (here 15). The set_axis call was used to ensure that no padding was added to the limits when the AUTO setting was used.
The option Y_AXIS is used to set the Y axis and XY_AXIS will change both axes at once, so
chips> limits(XY_AXIS, AUTO, AUTO)
is a simple way to make sure that all the data is visible in a plot.
### What about axes drawn using a log scale?
When an axis is drawn using a logarithmic scale, limit values do not have to be changed (so you would still use 100 rather than 2). Limit values that are 0 or negative are ignored and replaced by the minimum (or maximum) positive data value for the axis.
### The AUTO setting
This setting makes it easy to make sure the plot limits cover all the data but they do not take into account the range of the other axis; so after
chips> clear()
chips> limits(XY_AXIS, 4, 11)
chips> get_plot_yrange()
[4.0, 11.0]
chips> limits(Y_AXIS, AUTO, AUTO)
chips> get_plot_yrange()
[5.5, 104.5]
the Y range is set to match the full dataset rather than just the part displayed by the X-axis limits.
### The majortick.mode setting
The previous discussion has been for when the majortick.mode attribute of the axis is set to "limits", which is the default value. If set to "nice" then the displayed limits are modified so that a sensible number of major tick marks are displayed and that the axis starts and ends at a major tick mark.
### Selecting the axis range of a single object
The limits command can be used to select the axis ranges for a given object (curve, contour, histogram or image). For example, the following command sets the X and Y axes to match the data range of the curve with the label "crv2".
chips> limits(chips_curve, "crv2")
#### The ChIPS GUI
The ChIPS GUI makes it easy to modify a visualization using your mouse, rather than Python functions. The GUI can also be used to add annotations - such as labels, lines, points and regions - and to zoom or pan into plots.
|
2018-03-22 08:08:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44134852290153503, "perplexity": 1175.9807255637331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647782.95/warc/CC-MAIN-20180322073140-20180322093140-00436.warc.gz"}
|
https://py.api.tudat.space/en/latest/dependent_variable.html
|
# dependent_variable#
This module provides the functionality for creating dependent variable settings. Note that all output is in SI units (meters, radians, seconds). All epochs are provided in seconds since J2000.
## Functions#
mach_number(body, central_body) Function to add the Mach number to the dependent variables to save. altitude(body, central_body) Function to add the altitude to the dependent variables to save. airspeed(body, body_with_atmosphere) Function to add the airspeed to the dependent variables to save. body_fixed_airspeed_velocity(body, central_body) Function to add the airspeed velocity vector to the dependent variables to save. body_fixed_groundspeed_velocity(body, ...) Function to add the groundspeed velocity vector to the dependent variables to save. density(body, body_with_atmosphere) Function to add the local freestream density to the dependent variables to save. temperature(body) Function to add the local freestream temperature to the dependent variables to save. Function to add the local freestream dynamic pressure to the dependent variables to save. Function to add the total aerodynamic G-load to the dependent variables to save. relative_position(body, relative_body) Function to add the relative position vector to the dependent variables to save. relative_distance(body, relative_body) Function to add the relative distance to the dependent variables to save. relative_velocity(body, relative_body) Function to add the relative velocity vector to the dependent variables to save. relative_speed(body, relative_body) Function to add the relative speed to the dependent variables to save. keplerian_state(body, central_body) Function to add the Keplerian state to the dependent variables to save. modified_equinoctial_state(body, central_body) Function to add the modified equinoctial state to the dependent variables to save. single_acceleration(acceleration_type, ...) Function to add a single acceleration to the dependent variables to save. single_acceleration_norm(acceleration_type, ...) Function to add a single scalar acceleration to the dependent variables to save. Function to add the total scalar acceleration (norm of the vector) acting on a body to the dependent variables to save. Function to add the total acceleration vector acting on a body to the dependent variables to save. single_torque_norm(torque_type, ...) Function to add a single torque (norm of the torque vector) to the dependent variables to save. single_torque(torque_type, ...) Function to add a single torque vector to the dependent variables to save. Function to add the total torque (norm of the torque vector) to the dependent variables to save. total_torque(body) Function to add the total torque vector to the dependent variables to save. Function to add single degree/order contributions of a spherical harmonic acceleration vector to the dependent variables to save. Function to add a single term of the spherical harmonic acceleration (norm of the vector) to the dependent variables to save. aerodynamic_force_coefficients(body[, ...]) Function to add the aerodynamic force coefficients to the dependent variables to save. aerodynamic_moment_coefficients(body[, ...]) Function to add the aerodynamic moment coefficients to the dependent variables to save. latitude(body, central_body) Function to add the latitude to the dependent variables to save. geodetic_latitude(body, central_body) Function to add the geodetic latitude to the dependent variables to save. longitude(body, central_body) Function to add the longitude to the dependent variables to save. heading_angle(body, central_body) Function to add the heading angle to the dependent variables to save. flight_path_angle(body, central_body) Function to add the flight path angle to the dependent variables to save. angle_of_attack(body, central_body) Function to add the angle of attack to the dependent variables to save. sideslip_angle(body, central_body) Function to add the sideslip angle to the dependent variables to save, as defined by Mooij, 1994 [1] . bank_angle(body, central_body) Function to add the bank angle to the dependent variables to save, as defined by Mooij, 1994 [1] . radiation_pressure(body, radiating_body) Function to add the radiation pressure to the dependent variables to save. Function to add the acceleration induced by the total time-variability of a gravity field to the dependent variables to save. Function to add the acceleration induced by a single time-variability of a gravity field to the dependent variables to save. Function to add the acceleration induced by a single time-variability of a gravity field, at a given list of degrees/orders, to the dependent variables to save. Function to add the rotation matrix from inertial to body-fixed frame to the dependent variables to save. tnw_to_inertial_rotation_matrix(body, ...) Function to add the rotation matrix from the TNW to the inertial frame to the dependent variables to save. rsw_to_inertial_rotation_matrix(body, ...) Function to add the rotation matrix from the RSW to the inertial frame to the dependent variables to save. Function to add the 3-1-3 Euler angles for the rotation from inertial to body-fixed frame to the dependent variables to save. Function to add the rotation matrix between any two reference frames used in aerodynamic calculations. periapsis_altitude(body, central_body) Function to add the altitude of periapsis to the dependent variables to save. apoapsis_altitude(body, central_body) Function to add the altitude of apoapsis to the dependent variables to save. Function to add the spherical, body-fixed position to the dependent variables to save. Function to add the relative Cartesian position, in the central body's fixed frame, to the dependent variables to save. body_mass(body) Function to add the current body mass to the dependent variables to save. radiation_pressure_coefficient(body, ...) Function to add the current radiation pressure coefficient to the dependent variables to save. Function to add the total mass rate to the dependent variables to save. Function to add the gravitational potential to the dependent variables to save. Function to add the laplacian of the gravitational potential to the dependent variables to save. minimum_body_distance(body_name, bodies_to_check) Function to compute the minimum distance between a given body, and a set of other bodies. Function to compute the minimum distance between a ground station, and a set of other bodies visible from that station. custom_dependent_variable(custom_function, ...) Function to compute a custom dependent variable.
mach_number(body: str, central_body: str) #
Function to add the Mach number to the dependent variables to save.
Function to add the Mach number to the dependent variables to save. The calculation of the altitude uses the atmosphere model of the central body and the current state of the body for which the Mach number is to be calculated.
Parameters:
• body (str) – Body whose Mach number is to be saved.
• central_body (str) – Body with atmosphere with respect to which the Mach number is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Examples
To create settings for saving of a Mach number of a body name ‘Spacecraft’ w.r.t. the atmosphere of body ‘Earth’, use:
# Define save settings for Mach number
propagation_setup.dependent_variable.mach_number( "Spacecraft", "Earth" )
altitude(body: str, central_body: str) #
Function to add the altitude to the dependent variables to save.
Function to add the altitude to the dependent variables to save. The calculation of the altitude uses the shape model of the central body and the current state of the body for which the altitude is to be calculated.
Parameters:
• body (str) – Body whose altitude is to be saved.
• central_body (str) – Body with respect to which the altitude is computed (requires this body to have a shape model defined).
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
airspeed(body: str, body_with_atmosphere: str) #
Function to add the airspeed to the dependent variables to save.
Function to add the airspeed to the dependent variables to save. The calculation of the airspeed uses the rotation and wind models of the central body (to determine the motion of the atmosphere in inertial space), and the current state of the body for which the airspeed is to be calculated.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with atmosphere with respect to which the airspeed is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
body_fixed_airspeed_velocity(body: str, central_body: str) #
Function to add the airspeed velocity vector to the dependent variables to save.
Function to add the airspeed velocity vector to the dependent variables to save. The airspeed velocity vector is not provided in an inertial frame, but instead a frame centered on, and fixed to, the central body. It defines the velocity vector of a body w.r.t. the relative atmosphere It requires the central body to have an atmosphere.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the airspeed is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
body_fixed_groundspeed_velocity(body: str, central_body: str) #
Function to add the groundspeed velocity vector to the dependent variables to save.
Function to add the groundspeed velocity vector to the dependent variables to save. The groundspeed velocity vector is not provided in an inertial frame, but instead a frame centered on, and fixed to, the central body. It defines the velocity vector of a body w.r.t. ‘the ground’ or (alternatively and identically) the relative atmosphere in the case the atmosphere would be perfectly co-rotating with the central body.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the groundspeed is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
density(body: str, body_with_atmosphere: str) #
Function to add the local freestream density to the dependent variables to save.
Function to add the freestream density (at a body’s position) to the dependent variables to save. The calculation of the density uses the atmosphere model of the central body, and the current state of the body for which the density is to be calculated.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• body_with_atmosphere (str) – Body with atmosphere with respect to which the density is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
temperature(body: str) #
Function to add the local freestream temperature to the dependent variables to save.
Function to add the freestream temperature (at a body’s position) to the dependent variables to save. The calculation of the temperature uses the atmosphere model of the central body, and the current state of the body for which the temperature is to be calculated.
Parameters:
body (str) – Body whose dependent variable should be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
dynamic_pressure(body: str) #
Function to add the local freestream dynamic pressure to the dependent variables to save.
Function to add the freestream dynamic pressure (at a body’s position) to the dependent variables to save. The calculation of the temperature uses the atmosphere model of the central body, and the current state of the body for which the temperature is to be calculated.
Parameters:
body (str) – Body whose dependent variable should be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Function to add the total aerodynamic G-load to the dependent variables to save.
Function to add the total aerodynamic G-load of a body to the dependent variables to save. The calculation uses the atmosphere model of the central body, and the current state of the body for which the temperature is to be calculated.
Parameters:
body (str) – Body whose dependent variable should be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
relative_position(body: str, relative_body: str) #
Function to add the relative position vector to the dependent variables to save.
Function to add a body’s relative position vector with respect to a second body to the dependent variables to save. The relative position is computed between the bodies’ centers of mass.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• relative_body (str) – Body with respect to which the relative position is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
relative_distance(body: str, relative_body: str) #
Function to add the relative distance to the dependent variables to save.
Function to add a body’s relative distance (norm of the position vector) with respect to a second body to the dependent variables to save. The relative distance is computed between the bodies’ centers of mass.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• relative_body (str) – Body with respect to which the relative distance is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
relative_velocity(body: str, relative_body: str) #
Function to add the relative velocity vector to the dependent variables to save.
Function to add a body’s relative velocity vector with respect to a second body to the dependent variables to save. The relative velocity is computed between the bodies’ centers of mass.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• relative_body (str) – Body with respect to which the relative velocity is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
relative_speed(body: str, relative_body: str) #
Function to add the relative speed to the dependent variables to save.
Function to add a body’s relative speed (norm of the relative velocity vector) with respect to a second body to the dependent variables to save. The relative speed is computed between the bodies’ centers of mass.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• relative_body (str) – Body with respect to which the relative speed is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
keplerian_state(body: str, central_body: str) #
Function to add the Keplerian state to the dependent variables to save.
Function to add the Keplerian state to the dependent variables to save. The Keplerian state is returned in this order: 1: Semi-major Axis. 2: Eccentricity. 3: Inclination. 4: Argument of Periapsis. 5. Right Ascension of the Ascending Node. 6: True Anomaly.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the Keplerian state is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
modified_equinoctial_state(body: str, central_body: str) #
Function to add the modified equinoctial state to the dependent variables to save.
Function to add the modified equinoctial state to the dependent variables to save. The value of the parameter I is automatically chosen as +1 or -1, depending on whether the inclination is smaller or larger than 90 degrees. The elements are returned in the order $$p$$, $$f$$, $$g$$, $$h$$, $$k$$, $$L$$
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the modified equinoctial state is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
single_acceleration(acceleration_type: tudatpy.kernel.numerical_simulation.propagation_setup.acceleration.AvailableAcceleration, body_undergoing_acceleration: str, body_exerting_acceleration: str) #
Function to add a single acceleration to the dependent variables to save.
Function to add a single acceleration vector to the dependent variables to save. The requested acceleration is defined by its type, and the bodies undergoing and exerting the acceleration. This acceleration vector represents the acceleration in 3D in the inertial reference frame. NOTE: When requesting a third-body perturbation be saved, you may use either the direct acceleration type, or the third body type. For instance, for saving a point-mass third-body perturbation, you may specify either point_mass_gravity_type or third_body_point_mass_gravity_type as acceleration type.
Parameters:
• acceleration_type (AvailableAcceleration) – Acceleration type to be saved.
• body_undergoing_acceleration (str) – Body undergoing acceleration.
• body_exerting_acceleration (str) – Body exerting acceleration.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Examples
To create settings for saving a point mass acceleration acting on body called ‘Spacecraft’, exerted by a body named ‘Earth’, use:
# Define save settings for point-mass acceleration on Spacecraft by Earth
propagation_setup.dependent_variable.single_acceleration(
propagation_setup.acceleration.point_mass_gravity_type, 'Spacecraft', 'Earth' )
single_acceleration_norm(acceleration_type: tudatpy.kernel.numerical_simulation.propagation_setup.acceleration.AvailableAcceleration, body_undergoing_acceleration: str, body_exerting_acceleration: str) #
Function to add a single scalar acceleration to the dependent variables to save.
Function to add a single scalar acceleration (norm of the acceleration vector) to the dependent variables to save. The requested acceleration is defined by its type, and the bodies undergoing and exerting the acceleration. NOTE: When requesting a third-body perturbation be saved, you may use either the direct acceleration type, or the third body type. For instance, for saving a point-mass third-body perturbation, you may specify either point_mass_gravity_type or third_body_point_mass_gravity_type as acceleration type.
Parameters:
• acceleration_type (AvailableAcceleration) – Acceleration type to be saved
• body_undergoing_acceleration (str) – Body undergoing acceleration.
• body_exerting_acceleration (str) – Body exerting acceleration.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Examples
To create settings for saving norm of a point mass acceleration acting on body called ‘Spacecraft’, exerted by a body named ‘Earth’, use:
# Define save settings for point-mass acceleration on Spacecraft by Earth
propagation_setup.dependent_variable.single_acceleration_norm(
propagation_setup.acceleration.point_mass_gravity_type, 'Spacecraft', 'Earth' )
total_acceleration_norm(body: str) #
Function to add the total scalar acceleration (norm of the vector) acting on a body to the dependent variables to save.
Parameters:
body (str) – Body undergoing acceleration.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
total_acceleration(body: str) #
Function to add the total acceleration vector acting on a body to the dependent variables to save.
Parameters:
body (str) – Body undergoing acceleration.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
single_torque_norm(torque_type: tudatpy.kernel.numerical_simulation.propagation_setup.torque.AvailableTorque, body_undergoing_torque: str, body_exerting_torque: str) #
Function to add a single torque (norm of the torque vector) to the dependent variables to save.
Parameters:
• torque_type (AvailableTorque) – Torque type to be saved.
• body_undergoing_torque (str) – Body undergoing torque.
• body_exerting_torque (str) – Body exerting torque.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
single_torque(torque_type: tudatpy.kernel.numerical_simulation.propagation_setup.torque.AvailableTorque, body_undergoing_torque: str, body_exerting_torque: str) #
Function to add a single torque vector to the dependent variables to save.
Parameters:
• torque_type (AvailableTorque) – Torque type to be saved.
• body_undergoing_torque (str) – Body undergoing torque.
• body_exerting_torque (str) – Body exerting torque.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
total_torque_norm(body: str) #
Function to add the total torque (norm of the torque vector) to the dependent variables to save.
Parameters:
body (str) – Body whose dependent variable should be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
total_torque(body: str) #
Function to add the total torque vector to the dependent variables to save.
Parameters:
body (str) – Body whose dependent variable should be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
spherical_harmonic_terms_acceleration(body_undergoing_acceleration: str, body_exerting_acceleration: str, component_indices: List[Tuple[int, int]]) #
Function to add single degree/order contributions of a spherical harmonic acceleration vector to the dependent variables to save.
Function to add single degree/order contributions of a spherical harmonic acceleration vector to the dependent variables to save. The spherical harmonic acceleration consists of a (truncated) summation of contributions at degree $$l$$ and order $$m$$. Using this function, you can save the contributions of separate $$l,m$$ entries to the total acceleration. For instance, when requesting dependent variables for $$l,m=2,2$$, the contribution due to the combined influence of $$ar{C}_{22}$$ and ar{S}_{22} are provided
Parameters:
• body_undergoing_acceleration (str) – Body undergoing acceleration.
• body_exerting_acceleration (str) – Body exerting acceleration.
• component_indices (list[tuple]) – Tuples of (degree, order) indicating the terms to save.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Examples
To create settings for saving spherical harmonic acceleration contributions of degree/order 2/0, 2/1 and 2/2, acting on a body names ‘Spacecraft’, exerted by a body named ‘Earth’, use the following for the acceleration. The resulting dependent variable will contain nine entries (three acceleration components for 2/0, 2/1 and 2/2, respectively).
# Define degree/order combinations for which to save acceleration contributions
spherical_harmonic_terms = [ (2,0), (2,1), (2,2) ]
# Define save settings for separate spherical harmonic contributions
propagation_setup.dependent_variable.spherical_harmonic_terms_acceleration( "Spacecraft", "Earth", spherical_harmonic_terms )
spherical_harmonic_terms_acceleration_norm(body_undergoing_acceleration: str, body_exerting_acceleration: str, component_indices: List[Tuple[int, int]]) #
Function to add a single term of the spherical harmonic acceleration (norm of the vector) to the dependent variables to save.
Function to add single term of the spherical harmonic acceleration (norm of the vector) to the dependent variables to save.
Parameters:
• body_undergoing_acceleration (str) – Body undergoing acceleration.
• body_exerting_acceleration (str) – Body exerting acceleration.
• component_indices (list[tuple]) – Tuples of (degree, order) indicating the terms to save.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Examples
To create settings for saving spherical harmonic acceleration contributions of degree/order 2/0, 2/1 and 2/2, acting on a body names ‘Spacecraft’, exerted by a body named ‘Earth’, use the following for the acceleration. The resulting dependent variable will contain three entries (one acceleration norm for 2/0, 2/1 and 2/2, respectively).
# Define degree/order combinations for which to save acceleration contributions
spherical_harmonic_terms = [ (2,0), (2,1), (2,2) ]
# Define save settings for separate spherical harmonic contributions
propagation_setup.dependent_variable.spherical_harmonic_terms_acceleration_norm( "Spacecraft", "Earth", spherical_harmonic_terms )
aerodynamic_force_coefficients(body: str, central_body: str = '') #
Function to add the aerodynamic force coefficients to the dependent variables to save.
Function to add the aerodynamic force coefficients to the dependent variables to save. It requires an aerodynamic coefficient interface to be defined for the vehicle. The coefficients are returned in the following order: C_D, C_S, C_l (if coefficient interface defined in aerodynamic frame), or C_X, C_Y, C_Z (if coefficient interface defined in body frame).
Parameters:
• body (str) – Body undergoing acceleration.
• central_body (str) – Body exerting acceleration (e.g. body with atmosphere).
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
aerodynamic_moment_coefficients(body: str, central_body: str = '') #
Function to add the aerodynamic moment coefficients to the dependent variables to save.
Function to add the aerodynamic force coefficients to the dependent variables to save. It requires an aerodynamic coefficient interface to be defined for the vehicle. The coefficients are returned in the following order: C_l, C_m, C_n , respectively about the X, Y, Z axes of the body-fixed frame, see (see Mooij, 1994 [1])
Parameters:
• body (str) – Body undergoing acceleration.
• central_body (str) – Body exerting acceleration (e.g. body with atmosphere).
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
latitude(body: str, central_body: str) #
Function to add the latitude to the dependent variables to save.
Function to add the latitude of a body, in the body-fixed frame of a central body, to the dependent variables to save.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the latitude is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
geodetic_latitude(body: str, central_body: str) #
Function to add the geodetic latitude to the dependent variables to save.
Function to add the geodetic latitude, in the body-fixed frame of a central body, to the dependent variables to save. If the central body has a spherical shape model, this value is identical to the latitude. If the central body has an oblate spheroid shape model, the calculation of the geodetic latitude uses the flattening of the this shape model to determine the geodetic latitude
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the geodetic latitude is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
longitude(body: str, central_body: str) #
Function to add the longitude to the dependent variables to save.
Function to add the longitude of a body, in the body-fixed frame of a central body, to the dependent variables to save.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the longitude is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Function to add the heading angle to the dependent variables to save.
Function to add the heading angle to the dependent variables to save, as defined by Mooij, 1994 [1] .
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the heading angle is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
flight_path_angle(body: str, central_body: str) #
Function to add the flight path angle to the dependent variables to save.
Function to add the flight path angle to the dependent variables to save, as defined by Mooij, 1994 [1] .
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the flight path angle is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
angle_of_attack(body: str, central_body: str) #
Function to add the angle of attack to the dependent variables to save.
Function to add the angle of attack angle to the dependent variables to save, as defined by Mooij, 1994 [1] .
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the angle of attack is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
sideslip_angle(body: str, central_body: str) #
Function to add the sideslip angle to the dependent variables to save, as defined by Mooij, 1994 [1] .
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the sideslip angle is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
bank_angle(body: str, central_body: str) #
Function to add the bank angle to the dependent variables to save, as defined by Mooij, 1994 [1] .
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the bank angle is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Function to add the radiation pressure to the dependent variables to save.
Function to add the local radiation pressure, in N/m^2, to the dependent variables to save. It requires a ‘source power’ to be defined for the radiating body.
Parameters:
• body (str) – Body whose dependent variable should be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
total_gravity_field_variation_acceleration(body_undergoing_acceleration: str, body_exerting_acceleration: str) #
Function to add the acceleration induced by the total time-variability of a gravity field to the dependent variables to save.
Function to add the acceleration induced by the total time-variability of a gravity field to the dependent variables to save. This function does not distinguish between different sources of variations of the gravity field, and takes the full time-variation when computing the contribution to the acceleration. To select only one contribution, use the single_gravity_field_variation_acceleration() function.
Parameters:
• body_undergoing_acceleration (str) – Body whose dependent variable should be saved.
• body_exerting_acceleration (str) – Body exerting the acceleration.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
single_gravity_field_variation_acceleration(body_undergoing_acceleration: str, body_exerting_acceleration: str, deformation_type: tudatpy.kernel.numerical_simulation.environment_setup.gravity_field_variation.BodyDeformationTypes, identifier: str = '') #
Function to add the acceleration induced by a single time-variability of a gravity field to the dependent variables to save.
Function to add the acceleration induced by a single time-variability of a gravity field to the dependent variables to save. The user specifies the type of variability for which the induced acceleration is to be saved.
Parameters:
• body_undergoing_acceleration (str) – Body whose dependent variable should be saved.
• body_exerting_acceleration (str) – Body exerting the acceleration.
• deformation_type (BodyDeformationTypes) – Type of gravity field variation for which the acceleration contribution is to be saved
• identifier (str, default="") – Identifier for the deformation type. To be used in case multiple realizations of a single variation type are present in the given body. Otherwise, this entry can be left empty
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
single_per_term_gravity_field_variation_acceleration(body_undergoing_acceleration: str, body_exerting_acceleration: str, component_indices: List[Tuple[int, int]], deformation_type: tudatpy.kernel.numerical_simulation.environment_setup.gravity_field_variation.BodyDeformationTypes, identifier: str = '') #
Function to add the acceleration induced by a single time-variability of a gravity field, at a given list of degrees/orders, to the dependent variables to save. This combines the functionality of the single_gravity_field_variation_acceleration() and spherical_harmonic_terms_acceleration() variables
Parameters:
• body_undergoing_acceleration (str) – Body whose dependent variable should be saved.
• body_exerting_acceleration (str) – Body exerting the acceleration.
• component_indices (list[tuple]) – Tuples of (degree, order) indicating the terms to save.
• deformation_type (BodyDeformationTypes) – Type of gravity field variation for which the acceleration contribution is to be saved
• identifier (str, default="") – Identifier for the deformation type. To be used in case multiple realizations of a single variation type are present in the given body. Otherwise, this entry can be left empty
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
inertial_to_body_fixed_rotation_frame(body: str) #
Function to add the rotation matrix from inertial to body-fixed frame to the dependent variables to save.
Function to add the rotation matrix from inertial to body-fixed frame to the dependent variables to save. This requires the rotation of the body to be defined (either in the environment or the state vector). NOTE: a rotation matrix is returned as a nine-entry vector in the dependent variable output, where entry $$(i,j)$$ of the matrix is stored in entry $$(3i+j)$$ of the vector (with $$i,j=0,1,2$$),
Parameters:
body (str) – Body for which the rotation matrix is to be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
tnw_to_inertial_rotation_matrix(body: str, central_body: str) #
Function to add the rotation matrix from the TNW to the inertial frame to the dependent variables to save.
Function to add the rotation matrix from the TNW to the inertial frame to the dependent variables to save. It has the x-axis pointing along the velocity vector, the z-axis along the orbital angular momentum vector, and the y-axis completing the right-handed system. NOTE: a rotation matrix is returned as a nine-entry vector in the dependent variable output, where entry $$(i,j)$$ of the matrix is stored in entry $$(3i+j)$$ of the vector (with $$i,j=0,1,2$$),
Parameters:
• body (str) – Body for which the rotation matrix is to be saved.
• central_body (str) – Body with respect to which the TNW frame is determined.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
rsw_to_inertial_rotation_matrix(body: str, central_body: str) #
Function to add the rotation matrix from the RSW to the inertial frame to the dependent variables to save.
Function to add the rotation matrix from the RSW to the inertial frame to the dependent variables to save. It has the x-axis pointing along the position vector (away from the central body), the z-axis along the orbital angular momentum vector, and the y-axis completing the right-handed system. NOTE: a rotation matrix is returned as a nine-entry vector in the dependent variable output, where entry $$(i,j)$$ of the matrix is stored in entry $$(3i+j)$$ of the vector (with $$i,j=0,1,2$$),
Parameters:
• body (str) – Body for which the rotation matrix is to be saved.
• central_body (str) – Body with respect to which the TNW frame is determined.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
inertial_to_body_fixed_313_euler_angles(body: str) #
Function to add the 3-1-3 Euler angles for the rotation from inertial to body-fixed frame to the dependent variables to save.
Function to add the 3-1-3 Euler angles for the rotation from inertial to body-fixed frame to the dependent variables to save. This requires the rotation of the body to be defined (either in the environment or the state vector).
Parameters:
body (str) – Body for which the rotation angles are to be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
intermediate_aerodynamic_rotation_matrix_variable(body: str, base_frame: tudatpy.kernel.numerical_simulation.environment.AerodynamicsReferenceFrames, target_frame: tudatpy.kernel.numerical_simulation.environment.AerodynamicsReferenceFrames, central_body: str = '') #
Function to add the rotation matrix between any two reference frames used in aerodynamic calculations.
Function to add the rotation matrix between any two reference frames used in aerodynamic calculations. The list of available frames is defined by the AerodynamicsReferenceFrames enum. NOTE: a rotation matrix is returned as a nine-entry vector in the dependent variable output, where entry $$(i,j)$$ of the matrix is stored in entry $$(3i+j)$$ of the vector (with $$i,j=0,1,2$$),
Parameters:
• body (str) – Body whose dependent variable should be saved.
• base_frame (AerodynamicsReferenceFrames) – Base reference frame for the rotation.
• target_frame (AerodynamicsReferenceFrames) – Target reference frame for the rotation.
• central_body (str) – Central body w.r.t. which the state of the body is considered.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
periapsis_altitude(body: str, central_body: str) #
Function to add the altitude of periapsis to the dependent variables to save.
Function to add the periapsis altitude of the current osculating orbit to the dependent variables to save. The altitude depends on the shape of the central body. This function takes the current (osculating) orbit of the body w.r.t. the central body, and uses this Kepler orbit to extract the position/altitude of periapsis.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the altitude of periapsis is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
apoapsis_altitude(body: str, central_body: str) #
Function to add the altitude of apoapsis to the dependent variables to save.
Function to add the apoapsis altitude of the current osculating orbit to the dependent variables to save. The altitude depends on the shape of the central body. This function takes the current (osculating) orbit of the body w.r.t. the central body, and uses this Kepler orbit to extract the position/altitude of apoapsis.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• central_body (str) – Body with respect to which the altitude of apoapsis is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
central_body_fixed_spherical_position(body: str, central_body: str) #
Function to add the spherical, body-fixed position to the dependent variables to save.
Function to add the spherical position to the dependent variables to save. The spherical position is return as the radius, latitude, longitude, defined in the body-fixed frame of the central body
Parameters:
• body (str) – Body whose spherical position is to be saved.
• central_body (str) – Body with respect to which the spherical, body-fixed is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
central_body_fixed_cartesian_position(body: str, central_body: str) #
Function to add the relative Cartesian position, in the central body’s fixed frame, to the dependent variables to save.
Parameters:
• body (str) – Body whose relative cartesian position is to be saved.
• central_body (str) – Body with respect to which the cartesian, body-fixed is computed.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
body_mass(body: str) #
Function to add the current body mass to the dependent variables to save.
Parameters:
body (str) – Body whose mass should be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
Function to add the current radiation pressure coefficient to the dependent variables to save.
Parameters:
• body (str) – Body whose dependent variable should be saved.
• emitting_body (str) – Emitting body.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
total_mass_rate(body: str) #
Function to add the total mass rate to the dependent variables to save.
Function to add the total mass rate to the dependent variables to save. It requires the body mass to be numerically propagated.
Parameters:
body (str) – Body whose mass rate should be saved.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
gravity_field_potential(body_undergoing_acceleration: str, body_exerting_acceleration: str) #
Function to add the gravitational potential to the dependent variables to save.
Function to add the gravitational potential to the dependent variables to save. The gravitational potential is defined by the bodies undergoing and exerting the acceleration.
Parameters:
• body_undergoing_acceleration (str) – Body whose dependent variable should be saved.
• body_exerting_acceleration (str) – Body exerting acceleration.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
gravity_field_laplacian_of_potential(body_undergoing_acceleration: str, body_exerting_acceleration: str) #
Function to add the laplacian of the gravitational potential to the dependent variables to save.
Function to add the laplacian of the gravitational potential to the dependent variables to save. The laplacian is defined by the bodies undergoing and exerting the acceleration.
Parameters:
• body_undergoing_acceleration (str) – Body whose dependent variable should be saved.
• body_exerting_acceleration (str) – Body exerting acceleration.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
minimum_body_distance(body_name: str, bodies_to_check: List[str]) #
Function to compute the minimum distance between a given body, and a set of other bodies.
Function to compute the minimum distance between a given body, and a set of other bodies. This function takes the instantaneous position of body body_name, and each body in the list bodies_to_check, and computes the body from this list closest to body_name. In this calculation, the positions of the bodies are evaluated at the current propagation time, and therefore light time is ignored. In addition, this functions does not consider visbility requirements (e.g. is a planet between two bodies). The dependent variable is of size 2, and consists of: (0) The distance between the body, and the closest other body; (1) The index from bodies_to_check for which the distance (given by the first index) is closest to body Typically, this function is used to compute the closest body in a constellation of satellites.
Parameters:
• body_name (str) – Body for which the distance to other bodies is to be computed.
• bodies_to_check (list[ str ]) – List of bodies for which it is to be checked which of these bodies is closest to body_name.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
minimum_visible_station_body_distances(body_name: str, station_name: str, bodies_to_check: List[str], minimum_elevation_angle: float) #
Function to compute the minimum distance between a ground station, and a set of other bodies visible from that station.
Function to compute the minimum distance between a ground station, and a set of other bodies visible from that station This function takes the instantaneous position of the ground station station_name on body_name, and each body in the list bodies_to_check, and computes the body from this list closest to this ground station, only taking into account those bodies from this list which are visible from teh ground station. For this function, visibility is defined by a single elevation angle cutoff (at the ground station) below which a body is deemed to not be visible. In this calculation, the positions of the bodies are evaluated at the current propagation time, and therefore light time is ignored. The dependent variable is of size 3, and consists of: (0) The distance between the ground station, and the closest visible body; (1) The index from bodies_to_check for which the distance (given by the first index) is closest to thee ground station, and the body is visible. (2) Elevation angle for closest body. In case, no body is visible from the station, this function returns [NaN, -1, NaN]. Typically, this function is used to compute the closest body between a ground staion and a constellation of satellites.
Parameters:
• body_name (str) – Body on which ground station is located, for which the distance to other bodies is to be computed.
• station_name (str) – Name of ground station, for which the distance to other bodies is to be computed.
• bodies_to_check (list[ str ]) – List of bodies for which it is to be checked which of these bodies is closest to station_name on body_name.
• minimum_elevation_angle (float) – Minimum elevation angle (at ground station) below which the distance to the bodies_to_check is not considered.
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
custom_dependent_variable(custom_function: Callable[[], numpy.ndarray[numpy.float64[m, 1]]], variable_size: int) #
Function to compute a custom dependent variable.
Function to compute a custom dependent variable, which can be implemented by the user as a Python function. The custom dependent variable is typically dependent on the current properties of the environment (e.g. bodies in the environment) or a user-defined guidance class (or similar)
Parameters:
• custom_function (Callable[[], numpy.ndarray].) – Function taking no input, and returning the custom dependent variable (as a numpy Nx1 array).
• variable_size (int) – Size N of the array returned by the custom_function
Returns:
Dependent variable settings object.
Return type:
SingleDependentVariableSaveSettings
## Enumerations#
PropagationDependentVariables Enumeration of available propagation dependent variables.
class PropagationDependentVariables#
Enumeration of available propagation dependent variables.
Enumeration of propagation dependent variables supported by tudat.
Members:
mach_number_type :
altitude_type :
airspeed_type :
local_density_type :
relative_speed_type :
relative_position_type :
relative_distance_type :
relative_velocity_type :
total_acceleration_norm_type :
single_acceleration_norm_type :
total_acceleration_type :
single_acceleration_type :
aerodynamic_force_coefficients_type :
aerodynamic_moment_coefficients_type :
rotation_matrix_to_body_fixed_frame_type :
intermediate_aerodynamic_rotation_matrix_type :
relative_body_aerodynamic_orientation_angle_type :
body_fixed_airspeed_based_velocity_type :
stagnation_point_heat_flux_type : No documentation found.
local_temperature_type :
geodetic_latitude_type :
control_surface_deflection_type :
total_mass_rate_type :
tnw_to_inertial_frame_rotation_type :
rsw_to_inertial_frame_rotation_type :
periapsis_altitude_type :
apoapsis_altitude_type :
total_torque_norm_type :
single_torque_norm_type :
total_torque_type :
single_torque_type :
body_fixed_groundspeed_based_velocity_type :
keplerian_state_type :
modified_equinoctial_state_type :
spherical_harmonic_acceleration_terms_type :
spherical_harmonic_acceleration_norm_terms_type :
body_fixed_relative_cartesian_position_type :
body_fixed_relative_spherical_position_type :
total_gravity_field_variation_acceleration_type :
single_gravity_field_variation_acceleration_type :
single_gravity_field_variation_acceleration_terms_type :
acceleration_partial_wrt_body_translational_state_type :
local_dynamic_pressure_type :
euler_angles_to_body_fixed_type :
current_body_mass_type :
custom_type : No documentation found.
gravity_field_potential_type :
gravity_field_laplacian_of_potential_type :
property name#
## Classes#
VariableSettings Functional base class to define settings for variables. SingleDependentVariableSaveSettings VariableSettings-derived class to define settings for dependent variables that are to be saved during propagation. SingleAccelerationDependentVariableSaveSettings SingleDependentVariableSaveSettings-derived class to save a single acceleration (norm or vector) during propagation.
class VariableSettings#
Functional base class to define settings for variables.
This class is a functional base class for defining settings for variables. Any variable that requires additional information in addition to what can be provided here, should be defined by a dedicated derived class.
class SingleDependentVariableSaveSettings#
VariableSettings-derived class to define settings for dependent variables that are to be saved during propagation.
Functional base class for defining settings for dependent variables that are to be computed and saved during propagation. Any dependent variable that requires additional information in addition to what can be provided here, should be defined by a dedicated derived class.
class SingleAccelerationDependentVariableSaveSettings#
SingleDependentVariableSaveSettings-derived class to save a single acceleration (norm or vector) during propagation.
Class to define settings for saving a single acceleration (norm or vector) during propagation. Note: this acceleration is returned in the inertial frame!
|
2023-03-26 09:35:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3838863670825958, "perplexity": 2982.7184261218986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945440.67/warc/CC-MAIN-20230326075911-20230326105911-00529.warc.gz"}
|
https://www.molpro.net/manual/doku.php?id=pes_generators&rev=1591899420&do=diff
|
# Differences
This shows you the differences between two versions of the page.
pes_generators [2020/06/11 18:17]127.0.0.1 external edit pes_generators [2022/02/28 08:35] (current)rauhutmoschneide [Scaling of individual coordinates] 2022/02/28 08:35 rauhutmoschneide [Scaling of individual coordinates] 2021/12/06 16:26 rauhutmoschneide [Selection of Modes] 2021/12/06 16:25 rauhutmoschneide [Options] 2021/11/25 10:26 rauhutmoschneide [Options] 2021/11/25 10:24 rauhutmoschneide [Options] 2021/11/25 10:21 rauhutmoschneide [Options] 2021/11/25 10:19 rauhutmoschneide [Options] 2021/11/25 10:19 rauhutmoschneide 2021/11/25 10:17 rauhutmoschneide [Options] 2021/11/25 10:16 rauhutmoschneide [Options] 2021/11/25 10:15 rauhutmoschneide [Options] 2021/09/16 13:52 rauhutmoschneide [Options] 2021/09/16 13:02 rauhutmoschneide [Options] 2021/09/16 12:59 rauhutmoschneide [Options] 2021/09/16 12:59 rauhutmoschneide [Options] 2021/09/16 12:53 rauhutmoschneide [Options] 2021/09/08 06:50 rauhutmoschneide [Options] 2021/09/08 06:43 rauhutmoschneide [Options] 2021/09/08 06:38 rauhutmoschneide [Options] 2021/08/17 10:37 rauhutmoschneide [Selection of Modes] 2021/08/17 10:31 rauhutmoschneide 2021/08/17 10:24 rauhutmoschneide [Options] 2020/07/15 15:29 qianli [Standard Problems] 2020/07/15 15:28 qianli 2020/07/15 15:03 qianli 2020/06/12 13:08 qianli [Options] 2020/06/11 18:17 external edit 2022/02/28 08:35 rauhutmoschneide [Scaling of individual coordinates] 2021/12/06 16:26 rauhutmoschneide [Selection of Modes] 2021/12/06 16:25 rauhutmoschneide [Options] 2021/11/25 10:26 rauhutmoschneide [Options] 2021/11/25 10:24 rauhutmoschneide [Options] 2021/11/25 10:21 rauhutmoschneide [Options] 2021/11/25 10:19 rauhutmoschneide [Options] 2021/11/25 10:19 rauhutmoschneide 2021/11/25 10:17 rauhutmoschneide [Options] 2021/11/25 10:16 rauhutmoschneide [Options] 2021/11/25 10:15 rauhutmoschneide [Options] 2021/09/16 13:52 rauhutmoschneide [Options] 2021/09/16 13:02 rauhutmoschneide [Options] 2021/09/16 12:59 rauhutmoschneide [Options] 2021/09/16 12:59 rauhutmoschneide [Options] 2021/09/16 12:53 rauhutmoschneide [Options] 2021/09/08 06:50 rauhutmoschneide [Options] 2021/09/08 06:43 rauhutmoschneide [Options] 2021/09/08 06:38 rauhutmoschneide [Options] 2021/08/17 10:37 rauhutmoschneide [Selection of Modes] 2021/08/17 10:31 rauhutmoschneide 2021/08/17 10:24 rauhutmoschneide [Options] 2020/07/15 15:29 qianli [Standard Problems] 2020/07/15 15:28 qianli 2020/07/15 15:03 qianli 2020/06/12 13:08 qianli [Options] 2020/06/11 18:17 external edit Line 28: Line 28: The following //options// are available: The following //options// are available: + * **''BATCH3D''=//n//** After calculating a number of grid points within the iterative interpolation scheme the convergence of the individual surfaces will be checked and, if provided by the keyword ''DUMP'', dumped to disk. This leads typically to 3-5 iterations and thus the same number of restart points within the calculation of the 1D, 2D, ... surfaces. As the number of 3D and 4D terms can be very large this is not sufficient in these cases. Therefore, the lists of 3D and 4D terms is cut into batches which will be processed subsequently. ''BATCH3D'' and ''BATCH4D'' control the number of 3D and 4D surfaces within each batch. By default ''BATCH3D'' is set to 30 times the number of processors and ''BATCH4D'' to 10 times the number of processors. Accordingly the number of restart points is increased. Smaller values for ''BATCH3D'' and ''BATCH4D'', e.g. ''BATCH3D=20'', increase the number of restart points on cost of the efficiency of the parallelization. Note, this keyword is only relevant for ''SURF'' calculations, but not for ''XSURF'' runs. + * **''DELLOG''=//n//** For large molecules or in the case of modelling the 3D and 4D terms, the .log-file may become huge. First of all the .log-file can be directed to scratch within the electronic structure program, i.e. ''logfile'', ''scratch''. The option ''DELLOG=1'' always truncates the .log-file in a way that it contains only the very last energy calculation. Default: ''DELLOG=0''. + * **''EXT12D''=//value//** Outer regions of the potential energy surfaces may be determined by extrapolation rather than interpolation schemes. By default extrapolation is switched off, i.e. ''Ext12D=1.0'' and ''Ext34D=1.0''. However, an extrapolation of 10% in case of the 1D and 2D contributions to the potential (''Ext12D=0.9'') and of 20% in case of the 3D and 4D terms (''Ext34D=0.8'') may be useful as it usually stabilizes the fitting procedure. + * **''FIT1D''=//n//** The maximum order of the polynomials used for fitting within the iterative interpolation scheme can be controlled by the keywords ''%%FIT1D, FIT2D, FIT3D, FIT4D%%''. The default is given by 8. However in certain cases higher values may be necessary, but require an appropriate number of coarse grid points, which can be controlled by ''MIN1D'' etc. + * **''INFO''=//n//** ''INFO=1'' provides a list of the values of all relevant program parameters (options). Default: ''INFO=0''. + * **''MAX1D''=//n//** The maximum number of coarse grid points can be controlled by the keywords ''%%MAX1D, MAX2D, MAX3D, MAX4D%%''. These 4 keywords determine the maximum number of //ab initio// calculations in one dimension for each 1D, 2D, 3D and 4D surface. The defaults are currently ''MAX1D=24'', ''MAX2D=16'', ''MAX3D=10'', ''MAX4D=8''. Presently, values larger than 24 are not supported. + * **''MIN1D''=//n//** The minimum number of coarse grid points can be controlled by the keywords ''%%MIN1D, MIN2D, MIN3D, MIN4D%%''. These 4 keywords determine the minimum number of //ab initio// calculations in one dimension for each 1D, 2D, 3D and 4D surface. The defaults are currently ''%%MIN1D=4, MIN2D=4, MIN3D=4, MIN4D=4%%''. Presently, values larger than 24 are not supported. + * **''MPG''=//n//** Symmetry of the normal modes is recognized by the program automatically. Only Abelian point groups can be handled at the moment. Symmetry of the modes will be determined even if the ''NOSYM'' keyword is used in the electronic structure calculations. In certain cases numerical noise can be very high and thus prohibits a correct determination of the symmetry labels. Symmetry can be switched off by using ''MPG=1''. + * **''NDIM''=//n//** The keyword ''NDIM=n'' terminates the expansion of the PES after the $n$-body term. Currently, at most 4-body terms can be included, but the default is set to 3. Please note, when you use ''NDIM=4'' as a keyword for the ''SURF'' program, you need to pass this information to the ''VSCF'' and ''VCI'' programs also. Otherwise these programs will neglect the 4-body terms. + * **''NGRID''=//n//** Based on a coarse grid of //ab initio// points a fine grid will be generated from automated interpolation techniques. The keyword ''NGRID=n'' determines the number of equidistant grid points in one dimension. ''NGRID=n'' has to be an even number. The default is currently set to 16. Note that the number of grid points also controls the extension of the $n$-dimensional potential energy surfaces (see keyword ''SCALE'') and thus influences many internal thresholds which are optimized to the default value of ''NGRID''. The number of grid points also determines the number of basis functions in the grid-based ''VSCF'' program. At present the maximum grid size is 36. - ''NDIM''=//n// - The keyword ''NDIM=n'' terminates the expansion of the PES after the $n$-body term. Currently, at most 4-body terms can be included, but the default is set to 3. Please note, when you use ''NDIM=4'' as a keyword for the ''SURF'' program, you need to pass this information to the ''VSCF'' and ''VCI'' programs also. Otherwise these programs will neglect the 4-body terms. - ''NGRID''=//n// - Based on a coarse grid of //ab initio// points a fine grid will be generated from automated interpolation techniques. The keyword ''NGRID=n'' determines the number of equidistant grid points in one dimension. ''NGRID=n'' has to be an even number. The default is currently set to 16. Note that the number of grid points also controls the extension of the $n$-dimensional potential energy surfaces (see keyword ''SCALE'') and thus influences many internal thresholds which are optimized to the default value of ''NGRID''. The number of grid points also determines the number of basis functions in the grid-based ''VSCF'' program. At present the maximum grid size is 36. ^Grid points ^ 14 ^ 16 ^ 18 ^ 20 ^ ^Grid points ^ 14 ^ 16 ^ 18 ^ 20 ^ |Surface extension | 4.30 | 4.69 | 5.05 | 5.39 | |Surface extension | 4.30 | 4.69 | 5.05 | 5.39 | - ''VAR1D''=//variable// + * **''ORIENT''** Allows to specify a certain orientation of the molecule. With ''ORIENT=//yes//'' (Default) the orientation is choosed automatically according to the asymmetric parameter of the molecule. To choose a certain orientation, ''ORIENT=//XC//'' need to be set. X represents a number from 1 to 3 (in arabic or roman letters), and C need to be set to **r** or **l**. For example, ''ORIENT=//IIl//'' orientates the molecule according to the **IIl** convention. ''ORIENT=//old//'' does not rotate the molecule at all. - The ''SURF'' program reads the energy of electronic structure calculations from the internal Molpro variables, e.g. ''ENERGY'', ''EMP2'', $\dots$. The internal variable is specified by the keyword ''VAR1D''. Within the example shown above, ''VAR1D=ENERGY'' would read the CCSD energy, while ''VAR1D=EMP2'' would read the MP2 energy, which is a byproduct of the CCSD calculation. The default for the ''VAR1D'' keyword is the internal variable ''ENERGY''. + * **''PLOT''=//n//** ''PLOT''=//n// plots all //n//D surfaces and a corresponding Gnuplot script in a separate subdirectory (''plots1'') in the //home//-directory in order to allow for visualization of the computed //n//D surfaces. E.g. the command "gnuplot plotV1D.gnu" in the ''plots1'' directory produces .eps files for all 1D surfaces. Default: ''PLOT=0''. - ''SYM''=//variable// + * **''SADDLE''=//n//** Standard ''SURF'' calculations expect the reference structure to be a (local) minimum on the PES, i.e. ''SADDLE=0'' (default). Alternatively, one may start the PES generation from a transition state, which is recommended for the calculation of double-minimum potentials. This situation is not recognized automatically and thus requires the keyword ''SADDLE=1''. Within ''XSURF'' calculations, this keyword needs not to be provided. - Symmetry within electronic structure calculations can be exploited by the keyword ''SYM=Auto''. Usually this leads to significant time savings. By default this symmetry recognition is switched off as certain calculations may cause some trouble (e.g. local correlation methods). Symmetry in electronic structure calculations may not be mistaken by the symmetry of the mode-coupling terms (see keyword ''MPG''). Once ''SYM=Auto'' is used, it is advisable to insert an ''INT'' card prior to the call of the Hartree-Fock program. + * **''SCALE''=//value//** The extension of the potential energy surfaces is determined from Gauss-Hermite quadrature points. Using a fine grid ''NGRID=16'' the surface stretches out to the ''NGRID''/2$^{th}$ Gauss-Hermite point, i.e. 4.69, in each direction (see keyword ''NGRID''). As these values are fairly large within the calculation of fundamental modes, a scaling factor, ''SCALE=f'', has been introduced. A default scaling of 0.75 is used. Increasing the size of the surfaces usually requires the calculation of further //ab initio// points as the surface interpolation is more stable for surfaces of limited size. Alternative to the ''SCALE'' option, which introduces a uniform scaling of all coordinates, individual scaling of the coordinates as provided by the directive ''SCALNM'' may be used. - ''MPG''=//n// + * **''SKIP3D''=//value//** As the number of 3D and 4D surfaces can increase very rapidly, there exists the possibility to neglect unimportant 3D and 4D surfaces by the keywords ''SKIP3D'' and ''SKIP4D''. The criterion for the prescreening of the 3D surfaces is based on the 2D terms and likewise for the 4D terms the 3D surfaces are used. The neglect of 3D surfaces automatically leads to the neglect of 4D surfaces, as the latter depend on the previous ones. By default prescreening is switched on, but can be switched off by ''SKIP3D=0.0'' and ''SKIP4D=0.0''. - Symmetry of the normal modes is recognized by the program automatically. Only Abelian point groups can be handled at the moment. Symmetry of the modes will be determined even if the ''NOSYM'' keyword is used in the electronic structure calculations. In certain cases numerical noise can be very high and thus prohibits a correct determination of the symmetry labels. Symmetry can be switched off by using ''MPG=1''. + * **''SYM''=//variable//** Symmetry within electronic structure calculations can be exploited by the keyword ''SYM=Auto''. Usually this leads to significant time savings. By default this symmetry recognition is switched off as certain calculations may cause some trouble (e.g. local correlation methods). Symmetry in electronic structure calculations may not be mistaken by the symmetry of the mode-coupling terms (see keyword ''MPG''). Once ''SYM=Auto'' is used, it is advisable to insert an ''INT'' card prior to the call of the Hartree-Fock program. - ''SCALE''=//value// + * **''THRFIT''=//value//** The iterative algorithm for generating potential energy surfaces is based on a successive increase of interpolation points. The iterations are terminated once the interpolation of two subsequent iteration steps became stable. The convergence threshold can be changed by the keyword ''THRFIT=f''. There is currently just one control variable for the different 1D, 2D, 3D, and 4D iterations. The 4 thresholds are different but depend on each other. Consequently, changing the default value (''THRFIT=4.0d-2'') will change all thresholds simultaneously which keeps the calculation balanced. - The extension of the potential energy surfaces is determined from Gauss-Hermite quadrature points. Using a fine grid ''NGRID=16'' the surface stretches out to the ''NGRID''/2$^{th}$ Gauss-Hermite point, i.e. 4.69, in each direction (see keyword ''NGRID''). As these values are fairly large within the calculation of fundamental modes, a scaling factor, ''SCALE=f'', has been introduced. A default scaling of 0.75 is used. Increasing the size of the surfaces usually requires the calculation of further //ab initio// points as the surface interpolation is more stable for surfaces of limited size. Alternative to the ''SCALE'' option, which introduces a uniform scaling of all coordinates, individual scaling of the coordinates as provided by the directive ''SCALNM'' may be used. + * **''TYPE''=//variable//** ''TYPE=QFF'' calls a macro, which modifies the parameters of the ''SURF'' program in order to compute a quartic force field in the most efficient manner. This implies a reduction of the size of the coupling surfaces and a limitation of the maximum number of points for the $n$D-terms. It should be used for VPT2 calculations. ''TYPE=ZPVE'' calls a macro, which changes the defaults for several parameters of the ''SURF'', ''VSCF'' and ''VCI'' programs. It is meant for the quick and efficient calculation of zero point vibrational energies on cost of some accuracy. For example, the expansion of the potential will be truncated after the 2D terms. As a consequence the output of course is reduced to the presentation of the vibrational ground state only. ''TYPE=FULL'' (default) performs a standard calculation as needed for ''VSCF'' or ''VCI'' calculations. Note that, within ''XSURF'' calculations, this keyword will be ignored, but Taylor expansions of the potential can be generated by using the ''VTAYLOR'' directive. - ''THRFIT''=//value// + * **''USEMRCC''=//n//** Once the Mrcc program of M. Kallay or the Gecco program of A. Köhn is used for determining individual grid points, the option ''USEMRCC=1'' needs to be set, which is needed to ensure proper communication between Molpro and Mrcc. Default: ''USEMRCC=0''. - The iterative algorithm for generating potential energy surfaces is based on a successive increase of interpolation points. The iterations are terminated once the interpolation of two subsequent iteration steps became stable. The convergence threshold can be changed by the keyword ''THRFIT=f''. There is currently just one control variable for the different 1D, 2D, 3D, and 4D iterations. The 4 thresholds are different but depend on each other. Consequently, changing the default value (''THRFIT=4.0d-2'') will change all thresholds simultaneously which keeps the calculation balanced. + * **''VAR1D''=//variable//** The ''SURF'' program reads the energy of electronic structure calculations from the internal Molpro variables, e.g. ''ENERGY'', ''EMP2'', $\dots$. The internal variable is specified by the keyword ''VAR1D''. Within the example shown above, ''VAR1D=ENERGY'' would read the CCSD energy, while ''VAR1D=EMP2'' would read the MP2 energy, which is a byproduct of the CCSD calculation. The default for the ''VAR1D'' keyword is the internal variable ''ENERGY''. - ''FIT1D''=//n// + * **''VRC''=//n//** Once the keyword ''VRC=1'' is provided, the ''SURF'' program will also compute the vibrational-rotational coupling surfaces and thus increases the number of degrees of freedom to 3N-3. Vibrational-rotational coupling surfaces can only be used within the ''PESTRANS'' program (see below), but will be neglected in any VSCF or VCI calculations. - The maximum order of the polynomials used for fitting within the iterative interpolation scheme can be controlled by the keywords ''%%FIT1D, FIT2D, FIT3D, FIT4D%%''. The default is given by 8. However in certain cases higher values may be necessary, but require an appropriate number of coarse grid points, which can be controlled by ''MIN1D'' etc. + - ''MIN1D''=//n// + - The minimum number of coarse grid points can be controlled by the keywords ''%%MIN1D, MIN2D, MIN3D, MIN4D%%''. These 4 keywords determine the minimum number of //ab initio// calculations in one dimension for each 1D, 2D, 3D and 4D surface. The defaults are currently ''%%MIN1D=4, MIN2D=4, MIN3D=4, MIN4D=4%%''. Presently, values larger than 24 are not supported. + - ''MAX1D''=//n// + - The maximum number of coarse grid points can be controlled by the keywords ''%%MAX1D, MAX2D, MAX3D, MAX4D%%''. These 4 keywords determine the maximum number of //ab initio// calculations in one dimension for each 1D, 2D, 3D and 4D surface. The defaults are currently ''MAX1D=24'', ''MAX2D=16'', ''MAX3D=10'', ''MAX4D=8''. Presently, values larger than 24 are not supported. + - ''EXT12D''=//value// + - Outer regions of the potential energy surfaces may be determined by extrapolation rather than interpolation schemes. By default extrapolation is switched off, i.e. ''Ext12D=1.0'' and ''Ext34D=1.0''. However, an extrapolation of 10% in case of the 1D and 2D contributions to the potential (''Ext12D=0.9'') and of 20% in case of the 3D and 4D terms (''Ext34D=0.8'') may be useful as it usually stabilizes the fitting procedure. + - ''SKIP3D''=//value// + - As the number of 3D and 4D surfaces can increase very rapidly, there exists the possibility to neglect unimportant 3D and 4D surfaces by the keywords ''SKIP3D'' and ''SKIP4D''. The criterion for the prescreening of the 3D surfaces is based on the 2D terms and likewise for the 4D terms the 3D surfaces are used. The neglect of 3D surfaces automatically leads to the neglect of 4D surfaces, as the latter depend on the previous ones. By default prescreening is switched on, but can be switched off by ''SKIP3D=0.0'' and ''SKIP4D=0.0''. + - ''VRC''=//n// + - Once the keyword ''VRC=1'' is provided, the ''SURF'' program will also compute the vibrational-rotational coupling surfaces and thus increases the number of degrees of freedom to 3N-3. Vibrational-rotational coupling surfaces can only be used within the ''PESTRANS'' program (see below), but will be neglected in any VSCF or VCI calculations. + - ''BATCH3D''=//n// + - After calculating a number of grid points within the iterative interpolation scheme the convergence of the individual surfaces will be checked and, if provided by the keyword ''DUMP'', dumped to disk. This leads typically to 3-5 iterations and thus the same number of restart points within the calculation of the 1D, 2D, ... surfaces. As the number of 3D and 4D terms can be very large this is not sufficient in these cases. Therefore, the lists of 3D and 4D terms is cut into batches which will be processed subsequently. ''BATCH3D'' and ''BATCH4D'' control the number of 3D and 4D surfaces within each batch. By default ''BATCH3D'' is set to 30 times the number of processors and ''BATCH4D'' to 10 times the number of processors. Accordingly the number of restart points is increased. Smaller values for ''BATCH3D'' and ''BATCH4D'', e.g. ''BATCH3D=20'', increase the number of restart points on cost of the efficiency of the parallelization. + - ''SADDLE''=//n// + - Standard ''SURF'' calculations expect the reference structure to be a (local) minimum on the PES, i.e. ''SADDLE=0'' (default). Alternatively, one may start the PES generation from a transition state, which is recommended for the calculation of double-minimum potentials. This situation is not recognized automatically and thus requires the keyword ''SADDLE=1''. + - ''TYPE''=//variable// + - ''TYPE=QFF'' calls a macro, which modifies the parameters of the ''SURF'' program in order to compute a quartic force field in the most efficient manner. This implies a reduction of the size of the coupling surfaces and a limitation of the maximum number of points for the $n$D-terms. It should be used for VPT2 calculations. ''TYPE=ZPVE'' calls a macro, which changes the defaults for several parameters of the ''SURF'', ''VSCF'' and ''VCI'' programs. It is meant for the quick and efficient calculation of zero point vibrational energies on cost of some accuracy. For example, the expansion of the potential will be truncated after the 2D terms. As a consequence the output of course is reduced to the presentation of the vibrational ground state only. ''TYPE=FULL'' (default) performs a standard calculation as needed for ''VSCF'' or ''VCI'' calculations. + - ''USEMRCC''=//n// + - Once the Mrcc program of M. Kallay is used for determining individual grid points, the option ''USEMRCC=1'' needs to be set, which is needed to ensure proper communication between Molpro and Mrcc. Default: ''USEMRCC=0''. + - ''DELLOG''=//n// + - For large molecules or in the case of modelling the 3D and 4D terms, the .log-file may become huge. First of all the .log-file can be directed to scratch within the electronic structure program, i.e. ''logfile'', ''scratch''. The option ''DELLOG=1'' always truncates the .log-file in a way that it contains only the very last energy calculation. Default: ''DELLOG=0''. + - ''INFO''=//n// + - ''INFO=1'' provides a list of the values of all relevant program parameters (options). Default: ''INFO=0''. + - ''PLOT''=//n// + - ''PLOT''=//n// plots all //n//D surfaces and a corresponding Gnuplot script in a separate subdirectory (''plots1'') in the //home//-directory in order to allow for visualization of the computed //n//D surfaces. E.g. the command "gnuplot plotV1D.gnu" in the ''plots1'' directory produces .eps files for all 1D surfaces. Default: ''PLOT=0''. + The following example shows the input of a calculation which computes energy and dipole surfaces at the MP2/cc-pVTZ level and subsequently determines the anharmonic frequencies at the VSCF and VCI levels. Hartree-Fock calculations will not be restarted and the .log-file is directed to the scratch directory as defined by the $TMPDIR variable. The following example shows the input of a calculation which computes energy and dipole surfaces at the MP2/cc-pVTZ level and subsequently determines the anharmonic frequencies at the VSCF and VCI levels. Hartree-Fock calculations will not be restarted and the .log-file is directed to the scratch directory as defined by the$TMPDIR variable. Line 111: Line 92: ''VMULT'',//options// ''VMULT'',//options// - The level of the electronic structure calculations can be changed for the different $i$-body terms in the expansion of the potential. As a consequence, the keywords ''START2D'', ''START3D'', ''VAR2D'' and ''VAR3D'' exist in full analogy to the keywords ''START1D'' and ''VAR1D'' in standard calculations (see above). The number always represents the level of the expansion term. Such calculations are termed multi-level calculations. There does //not// exist a corresponding set of keywords for the 4-body terms. 4-body terms will always use the variables specified for the 3-body terms. + The level of the electronic structure calculations can be changed for the different $i$-body terms in the expansion of the potential. As a consequence, the keywords ''START2D'', ''START3D'', ''VAR2D'' and ''VAR3D'' exist in full analogy to the keywords ''START1D'' and ''VAR1D'' in standard calculations (see above). The number always represents the level of the expansion term. Such calculations are termed multi-level calculations. There does //not// exist a corresponding set of keywords for the 4-body terms. 4-body terms will always use the variables specified for the 3-body terms (this restriction is lifted in the ''XSURF'' program. Line 239: Line 220: Dipole surfaces can be computed for all those methods for which analytical gradients are available in Molpro. For all methods except Hartree-Fock this requires the keyword ''%%CPHF,1%%'' after the keyword for the electronic structure method. In multi-level schemes for which the variables ''VAR1D'', ''VAR2D'' and ''VAR3D'' are set individually, the VARDIP//n//D[X,Y,Z] variables have to be set accordingly. Symmetry is currently only implemented for the 1D, 2D and 3D dipole surfaces. For 4D terms symmetry will automatically switched off at the moment. The determination of dipole surfaces beyond Hartree-Fock quality effectively doubles the computation time for surface calculations. Dipole surfaces can be computed for all those methods for which analytical gradients are available in Molpro. For all methods except Hartree-Fock this requires the keyword ''%%CPHF,1%%'' after the keyword for the electronic structure method. In multi-level schemes for which the variables ''VAR1D'', ''VAR2D'' and ''VAR3D'' are set individually, the VARDIP//n//D[X,Y,Z] variables have to be set accordingly. Symmetry is currently only implemented for the 1D, 2D and 3D dipole surfaces. For 4D terms symmetry will automatically switched off at the moment. The determination of dipole surfaces beyond Hartree-Fock quality effectively doubles the computation time for surface calculations. - Allows to switch between the different dipole surface calculations.=0 switches off all dipole calculations. ''DIPOLE''=1 (this is the default) computes the dipole surfaces at the Hartree Fock level of theory, and therefore does not increase the computation time of electronic structure theory. ''DIPOLE''=2 switches on the dipole surfaces at the full level of theory, therefore ''%%CPHF,1%%'' is required. This effectively doubles the computation time for surface calculations. + * **''DIPOLE''=//n//** Allows to switch between the different dipole surface calculations.=0 switches off all dipole calculations. ''DIPOLE''=1 (this is the default) computes the dipole surfaces at the Hartree Fock level of theory, and therefore does not increase the computation time of electronic structure theory. ''DIPOLE''=2 switches on the dipole surfaces at the full level of theory, therefore ''%%CPHF,1%%'' is required. This effectively doubles the computation time for surface calculations. - + * **''NDIMDIP''=//n//** This denotes the term after which the $n$-body expansion of the dipole surfaces is truncated. The default is set to 3. Note that ''NDIMDIP'' has to be lower or equal to ''NDIM''. - By default (''POLAR''=0) Raman intensities will not be computed. ''POLAR''=1 switches the calculation of polarizability tensor surfaces on. Note that currently only Hartree-Fock and MP2 polarizabilities are supported, which requires the ''POLARI'' keyword in the respective programs. Besides that, the frozen core approximation cannot yet be employed within the calculation of MP2 polarizabilities. + * **''NDIMPOL''=//n//** This variable denotes the term after which the $n$-body expansion of the polarizability tensor surfaces is truncated. The default is set to 2. Note that ''NDIMPOL'' has to be lower or equal to ''NDIM'' and must be smaller than 4. - + * **''POLAR''=//n//** By default (''POLAR''=0) Raman intensities will not be computed. ''POLAR''=1 switches the calculation of polarizability tensor surfaces on. Note that currently only Hartree-Fock and MP2 polarizabilities are supported, which requires the ''POLARI'' keyword in the respective programs. Besides that, the frozen core approximation cannot yet be employed within the calculation of MP2 polarizabilities. - This denotes the term after which the $n$-body expansion of the dipole surfaces is truncated. The default is set to 3. Note that ''NDIMDIP'' has to be lower or equal to ''NDIM''. + * **''VARDIP1DX''=//variable//** Variable which is used for the $x$ direction of the dipole moment for 1D surfaces. - + * **''VARDIP1DY''=//variable//** Variable which is used for the $y$ direction of the dipole moment for 1D surfaces. - This variable denotes the term after which the $n$-body expansion of the polarizability tensor surfaces is truncated. The default is set to 2. Note that ''NDIMPOL'' has to be lower or equal to ''NDIM'' and must be smaller than 4. + * **''VARDIP1DZ''=//variable//** Variable which is used for the $z$ direction of the dipole moment for 1D surfaces. - + * **''VARPOL1DXX''=//variable//** Variable which is used for the $xx$ component of the polarizability tensor for 1D surfaces. - Variable which is used for the $x$ direction of the dipole moment for 1D surfaces. + * **''VARPOL1DYY''=//variable//** Variable which is used for the $yy$ component of the polarizability tensor for 1D surfaces. - + * **''VARPOL1DZZ''=//variable//** Variable which is used for the $zz$ component of the polarizability tensor for 1D surfaces. - Variable which is used for the $y$ direction of the dipole moment for 1D surfaces. + * **''VARPOL1DXY''=//variable//** Variable which is used for the $xy$ component of the polarizability tensor for 1D surfaces. - + * **''VARPOL1DXZ''=//variable//** Variable which is used for the $xz$ component of the polarizability tensor for 1D surfaces. - Variable which is used for the $z$ direction of the dipole moment for 1D surfaces. + * **''VARPOL1DYZ''=//variable//** Variable which is used for the $yz$ component of the polarizability tensor for 1D surfaces. - + - Variable which is used for the $xx$ component of the polarizability tensor for 1D surfaces. + - + - Variable which is used for the $yy$ component of the polarizability tensor for 1D surfaces. + - + - Variable which is used for the $zz$ component of the polarizability tensor for 1D surfaces. + - + - Variable which is used for the $xy$ component of the polarizability tensor for 1D surfaces. + - + - Variable which is used for the $xz$ component of the polarizability tensor for 1D surfaces. + - + - Variable which is used for the $yz$ component of the polarizability tensor for 1D surfaces. + The higher order terms VARDIP//n//D[X,Y,Z] and VARPOL//n//D[XX,$\dots$,YZ] can be defined the same way. An example for a calculation, which provides both, infrared and Raman intensities, is given below. The higher order terms VARDIP//n//D[X,Y,Z] and VARPOL//n//D[XX,$\dots$,YZ] can be defined the same way. An example for a calculation, which provides both, infrared and Raman intensities, is given below. Line 287: Line 256: ''ALTER'',//options// ''ALTER'',//options// - The ''ALTER'' directive of the ''SURF'' program allows to apply error correction schemes for individual single point calculations. For example, in case that the Hartree Fock calculation for a certain grid point did not converge and the ''ORBITAL'' directive in the subsequent electron correlation calculation uses the ''IGNORE_ERROR'' option, an alternative calculation scheme can be provided, e.g. MCSCF in contrast to RHF. In the case of //multi level// calculations the ''ALT2D'' and ''ALT3D'' options can be set according to the ''START2D'' and ''START3D'' options. Note that the energy variable has to be the same in the original method and the alternative. + The ''ALTER'' directive of the ''SURF'' program allows to apply error correction schemes for individual single point calculations. For example, in case that the Hartree Fock calculation for a certain grid point did not converge and the ''ORBITAL'' directive in the subsequent electron correlation calculation uses the ''IGNORE_ERROR'' option, an alternative calculation scheme can be provided, e.g. MCSCF in contrast to RHF. In the case of //multi level// calculations the ''ALT2D'' and ''ALT3D'' options can be set according to the ''START2D'' and ''START3D'' options. Note that the energy variable has to be the same in the original method and the alternative. Within the ''XSURF'' program the ''ALTER'' directive is defined in a different way. * **''ALT1D''=//label//** Alternative procedure to calculate the single points of the 1st level. * **''ALT1D''=//label//** Alternative procedure to calculate the single points of the 1st level. Line 310: Line 279: ''LINCOMB'',//options// ''LINCOMB'',//options// - The ''LINCOMB'' directive allows for the calculation of linear combinations of normal coordinates for the expansion of the potential. This is realized by 2x2 Jacobi rotations. At most 3N-6/2 rotations can be provided in the input. Using an angle of 45$^o$ between the degenerate modes of non-Abelian molecules avoids symmetry breaking in the subsequent ''VSCF'' and ''VCI'' calculations. + The ''LINCOMB'' directive allows for the calculation of linear combinations of normal coordinates for the expansion of the potential. This is realized by 2x2 Jacobi rotations. At most 3N-6/2 rotations can be provided in the input. - * **''NM1''=//n//, ''NM2''=//m//** Denotes the normal coordinates to be rotated. * **''ANGLE''=//value//** Rotation angle in degree. * **''ANGLE''=//value//** Rotation angle in degree. * **''LOCAL''=//n//** ''LOCAL''=1 localizes the normal coordinates of the CH-stretchings. Note that this destroys symmetry of these modes. Usually localization has strong impact on subsequent ''VSCF'' calculations. ''LOCAL''=3 localizes the normal coordinates of a molecular cluster to the contributing entities. This localization scheme localizes within the individual irreps, which usually leads to a very faint localization. Switching symmetry off by ''MPG''=1 in the ''SURF'' program leads to a much stronger localization. ''LOCAL''=2 is a combination of ''LOCAL''=1 and ''LOCAL''=3. * **''LOCAL''=//n//** ''LOCAL''=1 localizes the normal coordinates of the CH-stretchings. Note that this destroys symmetry of these modes. Usually localization has strong impact on subsequent ''VSCF'' calculations. ''LOCAL''=3 localizes the normal coordinates of a molecular cluster to the contributing entities. This localization scheme localizes within the individual irreps, which usually leads to a very faint localization. Switching symmetry off by ''MPG''=1 in the ''SURF'' program leads to a much stronger localization. ''LOCAL''=2 is a combination of ''LOCAL''=1 and ''LOCAL''=3. + * **''NM1''=//n//, ''NM2''=//m//** Denotes the normal coordinates to be rotated. * **''THRLOC''=//value//** (=1.0d-6 Default) Threshold within the localization procedure. * **''THRLOC''=//value//** (=1.0d-6 Default) Threshold within the localization procedure. Line 323: Line 292: The ''SCALE'' option of the ''SURF'' program enables a modification of the extension of all difference potentials by a common factor. In contrast to that the ''SCALNM'' directive allows for the scaling with respect to the individual normal coordinates. This is the recommended choice for potentials dominated by quartic rather than quadratic terms. At most 3N-6 individual scale factors and shift parameters can be provided. In particular the ''AUTO'' option was found to be very helpful in practical applications. The ''SCALE'' option of the ''SURF'' program enables a modification of the extension of all difference potentials by a common factor. In contrast to that the ''SCALNM'' directive allows for the scaling with respect to the individual normal coordinates. This is the recommended choice for potentials dominated by quartic rather than quadratic terms. At most 3N-6 individual scale factors and shift parameters can be provided. In particular the ''AUTO'' option was found to be very helpful in practical applications. + * **''AUTO''=//on / off//** ''AUTO''=//on// (default) switches on an automatic scaling procedure of the potential in order to determine meaningful elongations and ''SHIFT'' values with respect to all coordinates, i.e. for each normal mode an optimized scaling parameter ''SFAC'' and ''SHIFT'' parameter will be determined. Usually this results in an increased number of 1D grid points. The ''AUTO'' keyword intrinsically depends on the thresholds and parameters, which can be controlled by the keywords ''THRSHIFT'', ''ITMAX'', ''LEVMAX'', ''DENSMAX'', and ''DENSMIN''. + * **''DENSMAX''=//value//** Threshold for the maximum vibrational density on the edges of the potential needed for the automated upscaling of the potentials (see keyword ''AUTO''). + * **''DENSMIN''=//value//** Threshold for the minimum vibrational density on the edges of the potential needed for the automated downscaling of the potentials (see keyword ''AUTO''). + * **''ITMAX''=//n//** Specifies the maximum number of iterations within the automatic scaling of the potentials (see Keyword ''AUTO''). + * **''LEVMAX''=//n//** Maximum number of vibrational states to be included for controlling the automated scaling and shifting procedure. The default is set to 5. This value should support subsequent VCI calculations. * **''MODE''=//n//** Denotes the normal coordinate to be scaled or shifted. * **''MODE''=//n//** Denotes the normal coordinate to be scaled or shifted. * **''SFAC''=//value//** Scaling factor for mode ''MODE''. The default is 1.0. * **''SFAC''=//value//** Scaling factor for mode ''MODE''. The default is 1.0. * **''SHIFT''=//n//** Allows to shift the potential with respect to the specified coordinate by //n// or //-n// grid points, respectively. Default: ''SHIFT=0''. * **''SHIFT''=//n//** Allows to shift the potential with respect to the specified coordinate by //n// or //-n// grid points, respectively. Default: ''SHIFT=0''. - * **''AUTO''=//on / off//** ''AUTO''=//on// (defaulr) switches on an automatic scaling procedure of the potential in order to determine meaningful elongations and ''SHIFT'' values with respect to all coordinates, i.e. for each normal mode an optimized scaling parameter ''SFAC'' and ''SHIFT'' parameter will be determined. Usually this results in an increased number of 1D grid points. The ''AUTO'' keyword intrinsically depends on the thresholds and parameters, which can be controlled by the keywords ''THRSHIFT'', ''ITMAX'', ''LEVMAX'', ''DENSMAX'', and ''DENSMIN''. - * **''ITMAX''=//n//** Specifies the maximum number of iterations within the automatic scaling of the potentials (see Keyword ''AUTO''). * **''THRSHIFT''=//value//** Threshold controlling the automated shifting of potentials as obtained from the state densities on the lhs and rhs of the potentials. The default is given as ''THRSHIFT=0.05''. * **''THRSHIFT''=//value//** Threshold controlling the automated shifting of potentials as obtained from the state densities on the lhs and rhs of the potentials. The default is given as ''THRSHIFT=0.05''. - * **''LEVMAX''=//n//** Maximum number of vibrational states to be included for controlling the automated scaling and shifting procedure. The default is set to 5. This value should support subsequent VCI calculations. - * **''DENSMAX''=//value//** Threshold for the maximum vibrational density on the edges of the potential needed for the automated upscaling of the potentials (see keyword ''AUTO''). - * **''DENSMIN''=//value//** Threshold for the minimum vibrational density on the edges of the potential needed for the automated downscaling of the potentials (see keyword ''AUTO''). ==== Deleting individual surfaces ==== ==== Deleting individual surfaces ==== Line 349: Line 318: Within the framework of multi-level calculations (see the directive ''VMULT''), 3D and 4D terms can be modeled. The modeling scheme is based on a reparametrization of the semiempirical AM1 method. Consequently, in the input stream the energy variable to be read in must refer to a semiempirical calculation. After the 2D terms the program optimizes the semiempirical parameters in order to represent the 1D and 2D surfaces best. Within the framework of multi-level calculations (see the directive ''VMULT''), 3D and 4D terms can be modeled. The modeling scheme is based on a reparametrization of the semiempirical AM1 method. Consequently, in the input stream the energy variable to be read in must refer to a semiempirical calculation. After the 2D terms the program optimizes the semiempirical parameters in order to represent the 1D and 2D surfaces best. - * **''RMS1D''=//value//** The keywords ''RMS1D'' and ''RMS2D'' specify the threshold for terminating the 1D and 2D iterations in the local optimization of the semiempirical parameters. The defaults are given by ''RMS1D''=1.d-6 and ''RMS2D''=1.d-6. * **''ITMAX1D''=//n//** The maximum number of iterations in the local optimization of the semiempirical parameters can be controlled by ''ITMAX1D'' and ''ITMAX2D''. The defaults are ''ITMAX1D''=100 and ''ITMAX2D''=150. * **''ITMAX1D''=//n//** The maximum number of iterations in the local optimization of the semiempirical parameters can be controlled by ''ITMAX1D'' and ''ITMAX2D''. The defaults are ''ITMAX1D''=100 and ''ITMAX2D''=150. + * **''RMS1D''=//value//** The keywords ''RMS1D'' and ''RMS2D'' specify the threshold for terminating the 1D and 2D iterations in the local optimization of the semiempirical parameters. The defaults are given by ''RMS1D''=1.d-6 and ''RMS2D''=1.d-6. The following example shows the input for a surface calculation in which the 3D terms will be modeled. The following example shows the input for a surface calculation in which the 3D terms will be modeled. Line 492: Line 461: - [\tt Problem:] + **Problem:** The Surf calculation crashes with an error message like The Surf calculation crashes with an error message like Line 499: Line 468: CURRENT STACK: MAIN CURRENT STACK: MAIN - [\tt Solution:] + + **Solution:** The program has problems in the symmetry conversion when restarting a Hartree-Fock calculation from the reference calculation at the equilibrium geometry. You need to start the Hartree-Fock calculations independently by using the keywords ''%%start,atden%%''. The program has problems in the symmetry conversion when restarting a Hartree-Fock calculation from the reference calculation at the equilibrium geometry. You need to start the Hartree-Fock calculations independently by using the keywords ''%%start,atden%%''. - [\tt Problem:] + **Problem:** In parallel calculations (mppx) the CPU-time of a ''SURF'' calculation differs considerably from the real-time (wallclock time). In parallel calculations (mppx) the CPU-time of a ''SURF'' calculation differs considerably from the real-time (wallclock time). - [\tt Solution:] + + **Solution:** There may be two reasons for this: (1) Usually a ''SURF'' calculation spends a significant amount of the total time in the Hartree-Fock program and the 2-electron integrals program. As the integrals are stored on disk, 2 processes on the same machine may write on disk at the same time and thus the calculation time depends to some extend on the disk controller. It is more efficient to stripe several disks and to use several controllers. This problem can be circumvented by distributing the job over several machines, but limiting the number of processors for each machine to 1. (2) The integrals program buffers the integrals. Parallel jobs may require too much memory (factor of 2 plus the shared memory) and thus the integrals buffering will be inefficient. Try to reduce the memory as much as you can. It might be advantageous to separate the memory demanding ''VCI'' calculation from the ''SURF'' calculation. There may be two reasons for this: (1) Usually a ''SURF'' calculation spends a significant amount of the total time in the Hartree-Fock program and the 2-electron integrals program. As the integrals are stored on disk, 2 processes on the same machine may write on disk at the same time and thus the calculation time depends to some extend on the disk controller. It is more efficient to stripe several disks and to use several controllers. This problem can be circumvented by distributing the job over several machines, but limiting the number of processors for each machine to 1. (2) The integrals program buffers the integrals. Parallel jobs may require too much memory (factor of 2 plus the shared memory) and thus the integrals buffering will be inefficient. Try to reduce the memory as much as you can. It might be advantageous to separate the memory demanding ''VCI'' calculation from the ''SURF'' calculation. Line 513: Line 484: ''XSURF'',//options// [xsurf] ''XSURF'',//options// [xsurf] - The ''XSURF'' program is not just an extension of the old ''SURF'' program, but a new program, which works in a completely different manner. However, the syntax for controlling it is very much the same as for the ''SURF'' program. In contrast to the ''SURF'' program, ''XSURF'' can handle $n$-mode and Taylor expansions of the PES of arbitrary order. Moreover, it can handle any kind of symmetry, e.g. non-Abelian point groups or permutational symmetry, Besides that it offers a much more flexible multi-level input and many more options of minor importance. Most importantly, ''XSURF'' calculations can be restarted at any point and the external restart files are much smaller and have a completely different structure. In the following, only those options and directives will be listed, which are not valid for the ''SURF'' program or which differ considerably. + The ''XSURF'' program is not just an extension to the old ''SURF'' program, but a new program, which works in a completely different manner. However, the syntax for controlling it is very much the same as for the ''SURF'' program. In contrast to the ''SURF'' program, ''XSURF'' can handle $n$-mode and Taylor expansions of the PES of arbitrary order. Moreover, it can handle any kind of symmetry, e.g. non-Abelian point groups or permutational symmetry, Besides that it offers a much more flexible multi-level input and many more options of minor importance. Most importantly, ''XSURF'' calculations can be restarted at any point and the external restart files are much smaller and have a completely different structure. In the following, only those options and directives will be listed, which are not valid for the ''SURF'' program or which differ considerably. The ''XSURF'' program does no longer support a number of keywords being relevant for the ''SURF'' program, e.g. ''BATCH'' and ''VRC''. The ''SADDLE'' keyword is no longer needed as ''XSURF'' will recognize automatically if the expansion point is a minimum or a transition state. Moreover, the keywords ''NDIM'', ''NDIMDIP'' and ''NDIMPOL'' are no longer restricted to values between 1 and 4.\\ + B. Ziegler, G. Rauhut, //Rigorous use of symmetry within the construction of multidimensional potential energy surfaces//, [[https://dx.doi.org/10.1063/1.5047912|J. Chem. Phys.]] **149**, 164110 (2018).\\ + B. Ziegler, G. Rauhut, //Localized Normal Coordinates in Accurate Vibrational Structure Calculations: Benchmarks for Small Molecules//, [[https://dx.doi.org/10.1021/acs.jctc.9b00381|J. Chem. Theory Comput.]] **15**, 4187 (2019)\\ ==== Options ==== ==== Options ==== - * **''CORRECT''=//n//** (=1 (on) Default) If a certain subsurface does not converge despite increasing the number of ab initio calculations, symmetry in this subsurface (if any) will be neglected in order to avoid any errors due to inaccuracies in the displacement vectors and the subsurface will be recalculated accordingly. This option will automatically switched off in any Taylor expansions of the PES. + * **''AUTOFIT''=//n//** (=0 Default) If ''AUTOFIT''=1, the number of basis function for fitting the grid points is determined automatically. To do so, the fine grid of the energy is compared to the coarse grid points. If the deviation is too high, another basis function is added. The procedure starts with 8 basis functions and stops at the latest at ''FITXD_MAX'' basis functions. Once 'AUTOFIT' is used, the keywords ''FITXD'' does not have any use. + * **''CORRECT''=//n//** (=1 (on) Default) If a certain subsurface does not converge despite increasing the number of ab initio calculations, symmetry in this subsurface (if any) will be neglected in order to avoid any errors due to inaccuracies in the displacement vectors and the subsurface will be recalculated accordingly. This option is automatically switched off in any Taylor expansions of the PES. + * **''FITXD_MAX''=//n//**(=10 Default) For the automated procedure with ''AUTOFIT'' an upper limit for the number of basis functions can be set with this keyword. * **''FITMETHOD''=//n//** (=1 Default) Within the iterative build-up of the individual subsurfaces, intermediate fitting will be used. This can be based on true multidimensional Kronecker product fitting (''FITMETHOD''=1) or on fitting along one-dimensional cuts (''FITMETHOD''=2). * **''FITMETHOD''=//n//** (=1 Default) Within the iterative build-up of the individual subsurfaces, intermediate fitting will be used. This can be based on true multidimensional Kronecker product fitting (''FITMETHOD''=1) or on fitting along one-dimensional cuts (''FITMETHOD''=2). * **''INFO''=//n//** (=1 Default) ''INFO''=0 suppresses any information about the program parameters and symmetry information. ''INFO''=1 refers to the standard output, while ''INFO''=2 provides additional information about the symmetry recognition. * **''INFO''=//n//** (=1 Default) ''INFO''=0 suppresses any information about the program parameters and symmetry information. ''INFO''=1 refers to the standard output, while ''INFO''=2 provides additional information about the symmetry recognition. Line 524: Line 500: * **''NSA''=//n//** (=1 Default) This option prints out some information about the progress of the ''XSURF'' calculation. ''NSA''=1 prints this information is an additional file, which will be deleted once the ''XSURF'' calculation is completed, ''NSA''=2 prints this information to the console. * **''NSA''=//n//** (=1 Default) This option prints out some information about the progress of the ''XSURF'' calculation. ''NSA''=1 prints this information is an additional file, which will be deleted once the ''XSURF'' calculation is completed, ''NSA''=2 prints this information to the console. * **''ONLYFREQ''=//n//** (=0 (off) Default) If set to 1, the ''XSURF'' calculation will be terminated after writing the header of the external restart file, i.e. prior to the calculation of the 1D terms. * **''ONLYFREQ''=//n//** (=0 (off) Default) If set to 1, the ''XSURF'' calculation will be terminated after writing the header of the external restart file, i.e. prior to the calculation of the 1D terms. - * **''POINT_SCHEME''=//variable//** (=''NOSHIFT'' Default) The distribution of ab initio points along a coordinate is determined by a fixed point scheme. This distribution has been generated for potentials, which have not been shifted. For strongly shifted potentials, improved point scheme can be used by the option ''POINT_SCHEME''=''SHIFT''. + * **''POINT_SCHEME''=//variable//** (=''NOSHIFT'' Default) The distribution of ab initio points along a coordinate is determined by a fixed point scheme. This distribution has been generated for potentials, which have not been shifted. For strongly shifted potentials, improved point schemes can be used by the option ''POINT_SCHEME''=''SHIFT''. * **''RDM''=//n//** (=0 (off) Default) Degenerate modes can be rotated in a manner, that the corresponding 1D potentials will be identical. By default, this feature is switched off, but can be activated by ''RDM''=1. Typically this results in rotational angles of 45 or 135 degrees. * **''RDM''=//n//** (=0 (off) Default) Degenerate modes can be rotated in a manner, that the corresponding 1D potentials will be identical. By default, this feature is switched off, but can be activated by ''RDM''=1. Typically this results in rotational angles of 45 or 135 degrees. * **''RDM_THR''=//n//** (=1.0d-10 Default) This threshold controls, if the potentials of two degenerate modes are identical or not. See the keyword ''RDM''=//n//. * **''RDM_THR''=//n//** (=1.0d-10 Default) This threshold controls, if the potentials of two degenerate modes are identical or not. See the keyword ''RDM''=//n//. - * **''SKIP''=//n//** (=0 (off) Default) By default, (pre)screening of any terms higher than 2D of the PES expansion is switched off. It can be activated by ''SKIP''=1. + * **''SKIP''=//n//** (=1 (on) Default) By default, (pre)screening of any terms higher than 2D of the PES expansion is switched on. It can be deactivated by ''SKIP''=0. * **''SKIPCRIT''=//n//** (=1 Default) ''SKIPCRIT'' defines the method and thus the criterion for (pre)screening of the high-order terms of the PES expansion. ''SKIPCRIT''=1 activates prescreening and ''SKIPCRIT''=2 screening. In the latter case a label must be defined (see ''SKIPLABEL''). * **''SKIPCRIT''=//n//** (=1 Default) ''SKIPCRIT'' defines the method and thus the criterion for (pre)screening of the high-order terms of the PES expansion. ''SKIPCRIT''=1 activates prescreening and ''SKIPCRIT''=2 screening. In the latter case a label must be defined (see ''SKIPLABEL''). * **''SKIPLAB''=//variable//** The name of the label in the input stream must be defined, which determines the electronic structure level to be used for screening the high-order terms of a PES expansion. * **''SKIPLAB''=//variable//** The name of the label in the input stream must be defined, which determines the electronic structure level to be used for screening the high-order terms of a PES expansion. Line 537: Line 513: * **''THRSED''=//value//** (=1.0d-6 Default) Threshold for determining symmetry elements of the molecule. * **''THRSED''=//value//** (=1.0d-6 Default) Threshold for determining symmetry elements of the molecule. * **''THRSYMx''=//value//** ($x$=1,2,...) Threshold used for recognizing symmetry within a subsurface of the PES expansion - in dependence on the order of the expansion term. The defaults are ''THRSYM1''=5.0d-5, ''THRSYM2''=1.0d-5,''THRSYM3''=5.0d-6,''THRSYM4''=5.0d-6,''THRSYM5''=1.0d-7. * **''THRSYMx''=//value//** ($x$=1,2,...) Threshold used for recognizing symmetry within a subsurface of the PES expansion - in dependence on the order of the expansion term. The defaults are ''THRSYM1''=5.0d-5, ''THRSYM2''=1.0d-5,''THRSYM3''=5.0d-6,''THRSYM4''=5.0d-6,''THRSYM5''=1.0d-7. + * **''DELAUTO''=//variable//**(=//off// Default) If ''DELAUTO''=//on//, all not converged surfaces of the highest considered dimension are deleted. It only works after a restart from an external potfile. + ==== Selection of Modes ==== + ''VIBMODE'',//options// + + The ''VIBMODE'' directive allows to span the PES only with predefined modes. The following options can be combined in various ways. + + * **''ENERGHIGH''=//x//** Modes with a frequency lower than **x** are used to span the surface (according to the harmonic frequencies) + * **''ENERGLOW''=//x//** Modes with a frequency higher than **x** are used to span the surface (according to the harmonic frequencies) + * **''HIGH''=//n//** the highest **n** modes are used to span the surface + * **''LOW''=//n//** the lowest **n** modes are used to span the surface + * **''MODE''=//n//** Mode which is used to span the surface (can be used multiple times) ==== Visualisation and interfaces ==== ==== Visualisation and interfaces ==== Line 544: Line 531: The ''GRAPH'' directive is the interface to programs for visualising potential terms and to provide potential information for any other programs in a most simple manner. The ''GRAPH'' directive is the interface to programs for visualising potential terms and to provide potential information for any other programs in a most simple manner. - * **''NDIM''=//n//** (=0 Default) This keyword writes all //n//D surfaces and a corresponding Gnuplot script in a separate subdirectory (''plots1'') in the //home//-directory in order to allow for visualization of the computed //n//D surfaces. E.g. the command "gnuplot plotV1D.gnu" in the ''plots1'' directory produces .eps files for all 1D surfaces. This keyword corresponds to the ''PLOT'' keyword of the ''SURF'' program. + * **''DIRECTORY''=//path//** (=./plots1 Default) Path or name of the directory to be specified for dumping the files for visualisation. See the keyword ''NDIM''. - * **''DIRECTORY''=//path//** (=./plots Default) Path or name of the directory to be specified for dumping the files for visualisation. See the keyword ''NDIM''. + - * **''MOLDEN''=//file name//** This allows to dump a file, which can directly read in by the Molden or Wxmacmolplot programs. This allows for the visualisation of the geometry and the harmonic frequencies of the molecule. + * **''GEOM''=//file name//** This option dumps the xyz coordinates of all structures used within the electronic structure calculations into an ASCII file. * **''GEOM''=//file name//** This option dumps the xyz coordinates of all structures used within the electronic structure calculations into an ASCII file. + * **''MOLDEN''=//file name//** This allows to dump a file, which can directly read in by the Molden or Wxmacmolplot programs. This allows for the visualisation of the geometry and the harmonic frequencies of the molecule. + * **''NDIM''=//n//** (=0 Default) This keyword writes all //n//D surfaces and a corresponding Gnuplot script in a separate subdirectory (''plots1'') in the //home//-directory in order to allow for visualization of the computed //n//D surfaces. E.g. the command "gnuplot plotV1D.gnu" in the ''plots1'' directory produces .eps files for all 1D surfaces. This keyword corresponds to the ''PLOT'' keyword of the ''SURF'' program. ==== Error correction schemes ==== ==== Error correction schemes ==== Line 553: Line 540: ''ALTER'',//options// ''ALTER'',//options// - AS the ''ALTER'' directive of the ''SURF'' program was slightly confusing, it has been completely redefined for the ''XSURF'' program. It allows to apply error correction schemes for individual single point calculations. For example, in case that the Hartree Fock calculation for a certain grid point did not converge and the ''ORBITAL'' directive in the subsequent electron correlation calculation uses the ''IGNORE_ERROR'' option, an alternative calculation scheme can be provided, e.g. MCSCF in contrast to RHF. The ''ALTER'' directive always requests to specify a new label, which replaces the old one. If more than one label shall be replaced, the ''ALTER'' directive needs to be called repeatedly. + As the ''ALTER'' directive of the ''SURF'' program was slightly confusing, it has been completely redefined for the ''XSURF'' program. It allows to apply error correction schemes for individual single point calculations. For example, in case that the Hartree Fock calculation for a certain grid point did not converge and the ''ORBITAL'' directive in the subsequent electron correlation calculation uses the ''IGNORE_ERROR'' option, an alternative calculation scheme can be provided, e.g. MCSCF in contrast to RHF. The ''ALTER'' directive always requests to specify a new label, which replaces the old one. If more than one label shall be replaced, the ''ALTER'' directive needs to be called repeatedly. * **''NEW''=//label//** Specification of the new label. * **''NEW''=//label//** Specification of the new label. Line 696: Line 683: ''VFREQ'',//options// ''VFREQ'',//options// - Usually, the diplacements vectors of the normal coordinates are retrieved from a preceding harmonic frequency calculation called by the ''FREQ'' program. Alternatively, these vectors can be obtained from the ''XSURF'' program and the ''VFREF'' directive. However, this alternative is solely based on a twofold numerical differentiation and does not take advantage out of analytical derivatives. However it offers a couple of options, which are not available in the ''FREQ'' program. + Usually, the diplacements vectors of the normal coordinates are retrieved from a preceding harmonic frequency calculation called by the ''FREQ'' program. Alternatively, these vectors can be obtained from the ''XSURF'' program and the ''VFREQ'' directive. However, this alternative is solely based on a twofold numerical differentiation and does not take advantage out of analytical derivatives. However it offers a couple of options, which are not available in the ''FREQ'' program. * **''COORD''=//n//** (=2 Default) Symmetry adapted coordinates, ''COORD''=1, or Cartesian coordinates, ''COORD''=2, may be used within the numerical differentiation. * **''COORD''=//n//** (=2 Default) Symmetry adapted coordinates, ''COORD''=1, or Cartesian coordinates, ''COORD''=2, may be used within the numerical differentiation. + * **''METHOD''=//n//** (=1 Default) This option specifies the number of points within the numerical differentiation, i.e. ''METHOD''=1 refers to the standard 3-point formula (central differences), ''METHOD''=2 denotes the more accurate 5-point formula and ''METHOD''=3 the 7-point formula. + * **''PRINT''=//n//** (=1 Default) Printout control. * **''START''=//label//** This sets the label in the input stream to determine the electronic structure level to be used. * **''START''=//label//** This sets the label in the input stream to determine the electronic structure level to be used. - * **''METHOD''=//n//** (=1 Default) This option specifies the number of points with the numerical differentiation, i.e. ''METHOD''=1 refers to the standard 3-point formula (central differences), ''METHOD''=2 denotes the more accurate 5-point formula and ''METHOD''=3 the 7-point formula. * **''STEP''=//value//** (=1.0d-2 Default) This option specifies the step width within the numerical differentiation. * **''STEP''=//value//** (=1.0d-2 Default) This option specifies the step width within the numerical differentiation. - * **''PRINT''=//n//** (=1 Default) Printout control. ==== Linear combinations of normal coordinates ==== ==== Linear combinations of normal coordinates ==== Line 718: Line 705: By default, the ''XSURF'' program generates an $n$-mode expansion. However, the program structure allows also to retrieve a Taylor expansion of the potential, which is identical with a Taylor expansion obtained by differentiation. In principal Taylor expansions of arbitrary order can be generated, but of course it must be guaranteed that the order of coupling terms is sufficiently high, e.g. a sextic force field cannot be obtained from a ''NDIM''=4 calculation, because this calculation generates coupling terms with at most 4 different indices. In such a case, the missing terms will simply be neglected. By default, the ''XSURF'' program generates an $n$-mode expansion. However, the program structure allows also to retrieve a Taylor expansion of the potential, which is identical with a Taylor expansion obtained by differentiation. In principal Taylor expansions of arbitrary order can be generated, but of course it must be guaranteed that the order of coupling terms is sufficiently high, e.g. a sextic force field cannot be obtained from a ''NDIM''=4 calculation, because this calculation generates coupling terms with at most 4 different indices. In such a case, the missing terms will simply be neglected. - * **''POINTS''=//n//** (=5 Default) Number of ab initio points controlling the accuracy of the derivatives (e.g. 5-point formula). * **''ORDER''=//n//** (=5 Default) Number of basis functions within the Taylor expansion. * **''ORDER''=//n//** (=5 Default) Number of basis functions within the Taylor expansion. - * **''TYPE''=//variable//** ''TYPE''=QFF (corresponds to ''POINTS''=5 and ''ORDER''=5) specifies a full quartic force field. ''TYPE''=SQFF specifies a semi-quartic force field (as used in VPT2 calculations). ''TYPE''=SEXTIC (corresponds to ''POINTS''=7 and ''ORDER''=7) is the shortcut for a sextic force field. + * **''POINTS''=//n//** (=5 Default) Number of ab initio points controlling the accuracy of the derivatives (e.g. 5-point formula). * **''SCALE''=//value//** (=7.0d-2 Default) This keyword controls the step width used and corresponds to the ''SCALE'' keyword in the ''SURF'' and ''XSURF'' programs. * **''SCALE''=//value//** (=7.0d-2 Default) This keyword controls the step width used and corresponds to the ''SCALE'' keyword in the ''SURF'' and ''XSURF'' programs. + * **''TYPE''=//variable//** ''TYPE''=''QFF'' (corresponds to ''POINTS''=5 and ''ORDER''=5) specifies a full quartic force field.\\ + ''TYPE''=''SQFF'' specifies a semi-quartic force field (as used in VPT2 calculations).\\ + \\ + ''TYPE''=''SEXTIC'' (corresponds to ''POINTS''=7 and ''ORDER''=7) is the shortcut for a sextic force field. ==== Additional properties ==== ==== Additional properties ==== Line 729: Line 719: The ''XSURF'' programs allows to compute energy surfaces, dipole surfaces and polarizability surfaces. In addition to that, arbitrary property surfaces can be generated and dumped into an external restart file. The ''XSURF'' programs allows to compute energy surfaces, dipole surfaces and polarizability surfaces. In addition to that, arbitrary property surfaces can be generated and dumped into an external restart file. - * **''VARx''=//variable//** (x=number) Name of the variable, which shall be read from the input file. - * **''NEL''=//n//** Number of data to be read in for one point. * **''NDIM''=//n//** Dimension of the $n$-mode expansion to be used for the new property. * **''NDIM''=//n//** Dimension of the $n$-mode expansion to be used for the new property. + * **''NEL''=//n//** Number of data to be read in for one point. + * **''VARx''=//variable//** (x=number) Name of the variable, which shall be read from the input file. ==== Interface to other programs ==== ==== Interface to other programs ==== Line 741: Line 731: * **''COPY''=//n//** (=0 Default) Once new data have been generated in the external ASCII file, the coefficients of the corresponding polynomials can be displayed in the ''POLY'' program using the option ''COEF_INTERFACEx'' with (x=1,2...). It is also possible to replace the energy or dipole surfaces generated by Molpro by these new quantities by ''COPY=ENE'' or ''COPY=DIP''. * **''COPY''=//n//** (=0 Default) Once new data have been generated in the external ASCII file, the coefficients of the corresponding polynomials can be displayed in the ''POLY'' program using the option ''COEF_INTERFACEx'' with (x=1,2...). It is also possible to replace the energy or dipole surfaces generated by Molpro by these new quantities by ''COPY=ENE'' or ''COPY=DIP''. * **''DATA''=//n//** (=1 Default) ''DATA=1'' provides detailed information about each single point of the PES in a formatted output. ''DATA=2'' provides the geometry and energy of a given point in a single line. New information about this point needs to be added at the end of the line. ''DATA=2'' prints the displacements along the coordinates and the energy in a single line. Again, new information needs to be added at the end of this line. * **''DATA''=//n//** (=1 Default) ''DATA=1'' provides detailed information about each single point of the PES in a formatted output. ''DATA=2'' provides the geometry and energy of a given point in a single line. New information about this point needs to be added at the end of the line. ''DATA=2'' prints the displacements along the coordinates and the energy in a single line. Again, new information needs to be added at the end of this line. - * **''WFU''=//file name//** Specifies the name of the external file. * **''NDIM''=//n//** (=0 Default) Dimension of the $n$-mode expansion to which the geometry information shall be dumped. * **''NDIM''=//n//** (=0 Default) Dimension of the $n$-mode expansion to which the geometry information shall be dumped. * **''NRES''=//n//** (=1 Default) Number of columns being added to the external file by an external program. * **''NRES''=//n//** (=1 Default) Number of columns being added to the external file by an external program. Line 747: Line 736: * **''TYPE''=//variable//** (=OUT Default) This option controls, if the file shall be written ''TYPE=OUT'' or read in ''TYPE=IN''. * **''TYPE''=//variable//** (=OUT Default) This option controls, if the file shall be written ''TYPE=OUT'' or read in ''TYPE=IN''. * **''ZERO''=//n//** (=1 Default) If set to 1, geometries of lower orders of the $n$-mode representation will be printed, i.e. the external file contains redundand data. ''ZERO=0'' neglects all redundancies and prints only unique points. As a consequence, an external file generated this way cannot be read in again for technical reasons. * **''ZERO''=//n//** (=1 Default) If set to 1, geometries of lower orders of the $n$-mode representation will be printed, i.e. the external file contains redundand data. ''ZERO=0'' neglects all redundancies and prints only unique points. As a consequence, an external file generated this way cannot be read in again for technical reasons. + * **''WFU''=//file name//** Specifies the name of the external file. ==== Grid computing interface ==== ==== Grid computing interface ==== Line 758: Line 748: * **''MEMORY''=//n//** Memory request of the individual single point calculations in MW. * **''MEMORY''=//n//** Memory request of the individual single point calculations in MW. * **''WFU''=//file name//** If additional information need to be read in from a .wfu file, this can be specified here. * **''WFU''=//file name//** If additional information need to be read in from a .wfu file, this can be specified here. +
|
2022-05-18 06:42:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6575555205345154, "perplexity": 1995.3773189118297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521152.22/warc/CC-MAIN-20220518052503-20220518082503-00049.warc.gz"}
|
http://math.stackexchange.com/questions/65914/relation-between-a-map-and-its-lifting-into-the-covering-space
|
# Relation between a map and its lifting into the covering space
I have the following question: Let $\mathbb{D}$ denote the unit disk. Let $f:X_1 \longrightarrow X_2$ be a continuous mapping between Riemann Surfaces. Let $\pi_1 : \mathbb{D} \longrightarrow X_1$ , and $\pi_2 : \mathbb{D} \longrightarrow X_2$ be the universal covering spaces of $X_1$ and $X_2$, respectively. A lifting of $f$ is a continuous map $\tilde{f}: \mathbb{D}\longrightarrow \mathbb{D}$ such that $f\circ \pi_1=\pi_2\circ \tilde{f}.$
The question is to show if $f$ is homeomorphism, then so is $\tilde{f},$ or to give a counterexample.
Any help will be appreciated.
Thank you.
-
Yes, $\tilde f$ is a homeomorphism. But let me begin with two remarks.
1) The map $\tilde f$ is not well defined, since you can compose any chosen $\tilde f$ with an automorphism of the covering $\pi_2$. The correct context is that of pointed coverings, covering spaces with a distinguished point chosen.
2) More importantly, the result is easier to understand in a general context, so let us forget about disks and Riemann surfaces! [In what follows, all topological spaces are assumed connected and locally pathwise connected.]
Crucial property A pointed covering map $\pi: (\tilde X, \tilde x_0)\to (X, x_0)$ with $\tilde X$ simply connected is universal in the following sense:
Given any pointed covering $\rho :(\hat Y,\hat y_0) \to (Y, y_0)$ and any pointed continuous map $f:(X,x_0) \to (Y,y_0)$ , there exists a unique morphism of pointed coverings $\tilde f:(\tilde X, \tilde x_0)\to (T,t_0)$ [meaning, of course, that $\tilde f$ is continuous, that $\tilde f (\tilde x_0)=\hat y_0$ and that $\rho \circ \tilde f=f\circ \pi$].
If $f$ is a homeomorphism with inverse $g$, the uniqueness property above (=functoriality) will immediately imply that $\tilde f$ is a homeomorphism with inverse $\tilde g$, which answers your question in the particular case you are interested in.
Bibliography You will find a more general version of the Crucial property on page 61 of Hatcher's Algebraic Topology (Proposition 1.33)
-
Thank you for your help. – yaa09d Sep 21 '11 at 21:38
|
2016-06-28 09:43:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9740528464317322, "perplexity": 127.06218965298198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00119-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://www.earth-syst-sci-data.net/12/771/2020/
|
Journal topic
Earth Syst. Sci. Data, 12, 771–787, 2020
https://doi.org/10.5194/essd-12-771-2020
Earth Syst. Sci. Data, 12, 771–787, 2020
https://doi.org/10.5194/essd-12-771-2020
Data description paper 02 Apr 2020
Data description paper | 02 Apr 2020
# Historic photographs of glaciers and glacial landforms from the Ralph Stockman Tarr collection at Cornell University
Historic photographs of glaciers and glacial landforms from the Ralph Stockman Tarr collection at Cornell University
Julie Elliott1 and Matthew E. Pritchard2 Julie Elliott and Matthew E. Pritchard
• 1Department of Earth, Atmospheric, and Planetary Sciences, Purdue University, West Lafayette, IN 47907, USA
• 2Department of Earth and Atmospheric Sciences, Cornell University, Ithaca, NY 14850, USA
Correspondence: Julie Elliott (julieelliott@purdue.edu)
Abstract
Historic photographs are useful for documenting glacier, environmental, and landscape change, and we have digitized a collection of about 1949 images collected during an 1896 expedition to Greenland and trips to Alaska in 1905, 1906, 1909, and 1911, led by Ralph Stockman Tarr and his students at Cornell University. These images are openly available in the public domain through Cornell University Library (http://digital.library.cornell.edu/collections/tarr, last access: 15 March 2020; Tarr and Cornell University Library, 2014, https://doi.org/10.7298/X4M61H5R). The primary research targets of these expeditions were glaciers (there are about 990 photographs of at least 58 named glaciers), but there are also photographs of people, villages, and geomorphological features, including glacial features in the formally glaciated regions of New York state. Some of the glaciers featured in the photographs have retreated significantly in the last century or even completely vanished. The images document terminus positions and ice elevations for many of the glaciers and some glaciers have photographs from multiple viewpoints that may be suitable for ice volume estimation through photogrammetric methods. While some of these photographs have been used in publications in the early 20th century, most of the images are only now widely available for the first time. The digitized collection also includes about 300 lantern slides made from the expedition photographs and other related images and used in classes and public presentations for decades. The archive is searchable by a variety of terms including title, landform type, glacier name, location, and date. The images are of scientific interest for understanding glacier and ecological change; of public policy interest for documenting climate change; of historic and anthropological interest as local people, settlements, and gold-rush era paraphernalia are featured in the images; and of technological interest as the photographic techniques used were cutting edge for their time.
1 Background
In recent decades, glacier retreat has become symbolic of climate change, but the relationship between climate and glacier response is complex. While global trends indicate significant ice loss throughout the 20th and early 21st centuries, glaciers are not losing ice at the same rate and a small fraction are continuing to gain mass (e.g., Larsen et al., 2007, 2015; Hill et al., 2018). A variety of factors control whether an individual glacier is advancing or retreating and how it will respond to regional climate change (e.g., Post et al., 2011; Larsen et al., 2015). To better understand the link between climate and glacier behavior, long-term records beyond the relatively short temporal limits of satellite observations are essential. Historic photographs can significantly expand the time span of observations, leading to both qualitative and quantitative evaluations of glacier fluctuations and their possible causes (e.g., Molnia, 2007; Bjørk et al., 2012). In recent years, historic photographs have been used in applications as diverse as studies of human history in Glacier Bay, Alaska; television documentaries; and education materials (e.g., Maness et al., 2017; Conner et al., 2009).
Here we describe a newly digitized collection of photographs from a series of expeditions undertaken in the late 19th and early 20th centuries to study glaciers and other geographical features in Greenland and Alaska led by Professor Ralph Stockman Tarr and his students from Cornell University. Tarr was a professor of physical geography with particular interests in glaciology and geomorphology. In pursuit of these interests, he led or participated in expeditions to western Greenland in 1896 and to various regions of Alaska and western Canada in 1905, 1906, 1909, and 1911. His former student and frequent collaborator, Lawrence Martin, joined Tarr on the expeditions of 1905, 1909, and 1911 and made trips to Alaska without Tarr in 1904, 1910, and 1913 (Tarr and Martin, 1913). The collection presented here includes images from the 1905, 1906, 1909, and 1911 expeditions and the 1896 Greenland expedition. The expeditions are discussed further below. In addition to the images from expeditions, the collection includes digitized images of lantern slides (glass slides used in magic lanterns, an early version of a projector) that were used in teaching at Cornell and public lectures. These lantern slides duplicate a few of the original images from the expeditions but also include images from other scientific expeditions as well from commercial teaching collections. A summary of the digitized collection is shown in Table 1. Approximate image locations are shown in Figs. 1 and 2 (Alaska) and Fig. 3 (Greenland).
Table 1Summary of digitized photographs.
Figure 1Locations visited during expeditions to Alaska between 1905 and 1911.
Figure 2Sites visited in southeast Alaska. (a) Sites visited in Yakutat Bay. DB is Disenchantment Bay, RF is Russell Fjord, and NF is Nunatak Fjord. Numbered locations are (1) Marvine Gl., (2) Blossom Island, (3) Hayden Gl., (4) Floral Hills, (5) Floral Pass, (6) Kwik Stream, (7) Lucia Gl., (8) Lucia Stream, (9) Terrace Point, (10) Strawberry Island, (11) Atrevida Gl., (12) Ampitheater Knob, (13) Esker Stream, (14) Galianao Gl., (15) Black Gl., (16) Turner Gl., (17) Haenke Gl., (18) Hubbard Gl., (19) Osier Island, (20) Gilbert Point, (21) Haenke Island, (22) Marble Point, (23) Mt. Alexander, (24) Alexander Gl., (25) Indian Camp, (26) Logan Beach, (27) Knight Island, (28) Otmeloi Island, (29) Khantaak Island, (30) Yakutat, (31) Cape Stoss, and (32) Cape Enchantment. (b) Sites visited in Glacier Bay. Numbered locations are (1) Rendu Gl., (2) Rendu Inlet, (3) Reid Gl., (4) Hugh Miller Gl., (5) Charpentier Gl., (6) Geike Gl., (7) Wood Gl., (8) Carroll Gl., (9) Morse Gl., (10) Muir Gl., (11) McBride Gl., (12) Casement Gl., (13) Tidal Inlet, (14) Muir Inlet, (15) Herbert Gl., (16) Mendenhall Gl., (17) Norris Gl., (18) Taku Gl., (19) Russell Island, (20) Triangle Island, and (21) N. Marble Island.
Figure 3Sites visited in Greenland. (a) Site locations in the Qaanaaq region. (b) Site locations in the Upernavik Archipelago region. WH is Wilcox Head and NP is Nugsauk (Nuussuaq) Peninsula. (c) Site locations in the Disko Bay region. UF is Umenak (Umanak/Uummannaq) Fjord. WS is Waigat (Vaigat/Sullorsuaq) Strait.
Counting the expedition images and lantern slides together, there are images of at least 50 named glaciers in Alaska and eight in Greenland in the collection (Tables 2 and 3). The glaciers featured in these images are of global importance as glaciers in coastal Greenland and Alaska are significant contributors to current sea-level rise because of their rapid loss of ice mass (e.g., Gardner et al., 2013). Of these 58 glaciers, 35 have photos that clearly show the majority of their terminus, which will allow the position to be mapped and compared to modern terminus positions. Roughly half of the glaciers have images in which the vertical extent of ice is easily distinguished against valley walls and other features that can serve as benchmarks for modern comparisons. Eight of the glaciers in Alaska have photographs of the terminus region taken from at least three different viewpoints, which may make the images suitable for ice volume estimation through photogrammetric methods. About 20 % of the glaciers, either through single photographs or a combination of photographs, have imagery covering at least 5 km of their length, measured from their terminus. Tables 2 and 3 list which glaciers fall into each of these categories.
Table 2Alaska glacier photographs in collection.
a Location and identification uncertain. b Tarr referred to this glacier as Baird; the later accepted name is Allen. c Tarr used both names for this glacier in the photos; the accepted name is Fourth. d Tarr and Martin (1914) reported that this glacier slid into the Disenchantment Bay on 4 July 1905, causing a local tsunami. e Glacier only appears in a photo by H. Reid on a lantern slide. f To be counted in this category, the photo had to show the majority of the terminus region. g At least three viewpoints of the terminus region (e.g., front view and angled from each side). h Photos (either as a single photo or collectively) show at least 5 km of the glacier from the terminus region.
In addition to the glaciers themselves, Tarr was very interested in the landforms formed and left behind by the glaciers. The collection includes images of alluvial fans, various types of moraines, outwash plains, eskers, kettles, fosse, and other glacial features (Fig. 4). In Alaska and Greenland, the images show active and recently active features. The collection also includes images of features developed during the last glacial maximum in the Finger Lakes region of upstate New York. Moraines are the most frequently featured glacier landforms in the images. In many images, the moraine appears alongside other features, such as mountains or shorelines, that are easily located in modern maps and imagery. This is especially true in Alaska, where moraine locations in images from Prince William Sound (Columbia, Spencer, and Shoup glaciers), the Wrangell Mountains (Kennicott Glacier), and the Yakutat Bay/Russell Fjord region (Hubbard, Orange, Hidden, Variegated, Atrevida, and Hidden glaciers) can be mapped and compared to present-day landscapes. Other types of geological changes are also documented in these photographs. One goal of the expeditions to Alaska was to document changes caused by a series of earthquakes in the area (e.g., Tarr, 1909; Martin, 1910; Tarr and Martin, 1912a) that caused significant, abrupt uplift and subsidence (Fig. 5). These observations have been used in modern tectonic studies (Plafker and Thatcher, 2008) and can be useful in separating instantaneous tectonic motion from the effects of glacial isostatic adjustment that have accumulated over the past century. For all of the regions Tarr visited, the collection includes images of people, towns, and smaller settlements that provide a glimpse into life at the turn of the 20th century (in particular, gold-rush era Interior Alaska, the Yukon, and British Columbia).
Historic photographs have been used for decades to observe glacier change (e.g., Molnia, 2007, 2008; Meier et al., 1985), but the digitized photographs described here are a significant addition as they have been little studied and include glaciers with few historic photographs. Although most of the photographs in this collection have been publically available in the Division of Rare and Manuscript Collections (RMC) of the Cornell University Library for decades, only a fraction have been published in articles (e.g., Tarr and Martin, 1914). In particular, photographs from the 1911 expedition were not used extensively in publications (although see, e.g., Tarr and Martin, 1912b, 1913; Martin 1913a), because Tarr died suddenly in March, 1912, at age 48 and the collaborators moved on to other projects (e.g., Brice, 1985, 1989). Thus, many of the photographs have not been otherwise published or catalogued and have been seldom viewed over the past 100 years.
In the following sections, we describe the purposes and context of the expeditions, the types of photographs and the subjects, and how they were digitized.
2 The expeditions and photography
## 2.1 Photographers and equipment
The photographs were taken by a variety of photographers. Tarr and Martin both took photographs. During the 1906 and 1909 expeditions, many of the photos were taken by Oscar D. von Engeln. A keen photographer, von Engeln worked with Tarr as an undergraduate and graduate student at Cornell and later became a professor in the Department of Geology and Geography there. James Otis Martin, a Cornell student who accompanied Tarr to Greenland in 1896, took a number of the photographs during that expedition (Tarr, 1897a–i). Photos in the collection were also taken by members of the U.S. Geological Survey (USGS), engineers of the Copper River and Northwestern Railway, members of the Canadian Boundary Survey, and several unnamed Alaskan photographers (Tarr and Martin, 1914). The lantern slides include photos taken by members of other well-known expeditions, including William Libbey, a Princeton geographer who participated in a trip to explore Mount St. Elias in Alaska in 1886, Peary's 1894 expedition to Greenland, and a 1899 Princeton-funded trip to Greenland (Koelsch, 2016); F. Jay Haynes, a professional photographer who visited Alaska, Yellowstone, and other parts of the American West; Henry Fielding Reid, a professor at Johns Hopkins who performed pioneering studies of glacier dynamics in southeast Alaska in addition to groundbreaking work on how faults related to earthquakes; and Israel Russell, a USGS scientist who explored the regions of Mount St. Elias and Yakutat Bay in Alaska.
Table 3Greenland glacier photographs in collection.
a Regular type gives name of glacier as known to the photographer at the time the image was acquired; italic type gives Greenlandic name or name it is commonly known as today. b Variably spelled as Nugsuak or Nugssuak in collection materials. c Only appears as photos by Libbey on lantern slides. d To be counted in this category, the photo had to show the majority of the terminus region. e At least three viewpoints of the terminus region (e.g., front view and angled from each side). f Photos (either as a single photo or collectively) show at least 5 km of the glacier from the terminus region.
Figure 4Examples of photographs of glacial features. (a) Push moraine at the front of Columbia Glacier (note person for scale) (ID: tve_lanternslide_0004). (b) Kettles in outwash plain of Hidden Glacier (ID: tve_exp1905_162).
Figure 5Photographs showing effects of 1899 earthquakes. (a) Wave-cut bench and sea cliff on east shore of Haenke Island uplifted during earthquakes (ID: tve_exp1906_183). (b) Trees on Khantaak Island killed by submergence in salt water due to earthquakes (ID: tve_exp1909_325).
Equipment varied depending on the trip and the most detail is known about the equipment used in the 1906 and 1909 Alaska expeditions as von Engeln published articles on techniques he used to take and develop photographs in the challenging field conditions (von Engeln, 1907b, 1910). During those expeditions, he used a Rochester Optical Company Pony Premo self-casing folding plate camera (which is preserved in the Cornell RMC; see Fig. 6a and b) as well as several other cameras including multiple Kodak film cameras (von Engeln, 1907b). Exposures were made on glass plates and film negatives (typically standard $\mathrm{4}\phantom{\rule{0.125em}{0ex}}\mathrm{in}.×\mathrm{5}\phantom{\rule{0.125em}{0ex}}\mathrm{in}.$ size, with a few of larger size). Several lenses were used, including a long-focus lens that was custom-made by Bauch and Lomb for the Alaska work (von Engeln, 1907b). At least one camera had a mount system that allowed the capture of panoramic images on film (Fig. 6e, f). As shown in Fig. 6c–e, some of the images capture the process and difficulty of taking photographs in the rugged field environment. The cameras, lenses, shutter, and plate holders were packed into a leather case with straps so that it could be more easily carried along with a tripod, plates and film, and containers for changing plates and protecting exposed film from the excessive humidity of southeast Alaska (von Engeln, 1907b). Although the camera system was designed to be compact, setting it up and making the exposures often took considerably more time than scientific observations and notes at a site (von Engeln, 1910).
Figure 6Equipment and field conditions. (a) Camera used in the Alaska expeditions. (b) Close-up of camera front. (c) Expedition party traversing glacier in Alaska. (d) Expedition member with photography gear traversing cliff. (e) Scientific party and camera setup, Russell Fjord, Alaska (ID: tve_exp1909_002a). (f) Example of panoramic image, Columbia Glacier, Alaska (ID: tve_exp1909_ 035).
## 2.2 The expeditions
We briefly describe the expeditions that are summarized in Table 1 and Figs. 1, 2, and 3.
### 2.2.1 1896 Greenland
In 1896, Tarr, along with other faculty and students from Cornell, traveled to Greenland as part of Robert Peary's expedition that attempted (and failed) to remove the largest of the three Cape York meteorites (Tarr, 1896; Huntington, 2002). The Cornell group was one of three scientific parties along on the expedition; after stops along the coast of Labrador and at Baffin Island, Disko Island, Waigat (Vaigat or Sullorsuaq) Strait, and Umanak (Uummannaq), they were landed on the Nugsuak (Nuussuaq) Peninsula along the Upernavik Archipelago (Fig. 3) where they stayed for several weeks making studies of the geology, plant life, birds, and invertebrates (Tarr, 1896). While there, they described and assigned names to a number of geographic features including Cornell Glacier, Wyckoff Glacier (after an Ithaca businessman who provided financial backing for the expedition), and Mt. Schurman (after the President of Cornell University) (e.g., Tarr, 1896, 1897a, b). Tarr wrote extensively of his field observations on the trip (Tarr, 1896, 1897a–i). The digitized photographs are from a variety of locations (but mostly from the Nugsuak Peninsula) and show glaciers, geological features, local people, and some of the day-to-day activities and challenges of the expedition (Fig. 7).
Figure 7Digitized images from Greenland. (a) Icebergs in harbor of Umanak (Uummannaq) (ID: tve_lanternslide_0173). (b) Terminus of Nugsuak Glacier (ID: tve_exp1896_147). (c) Icebergs in Waigat Strait (ID: tve_exp1986_186). (d) North Cornell Glacier (ID: tve_lanternslide_0090).
In the summer of 1905, Tarr and Martin went to Yakutat Bay (Fig. 2a). Tarr was in charge of a USGS party charged with a general geological survey of the region, while Martin was funded by the American Geographical Society (e.g., Tarr and Martin, 1905). The scientific party made observations of surface changes that occurred after a series of M8 earthquakes that occurred in the region in 1899 (e.g., Plafker and Thatcher, 2008), general descriptions of glaciers in the Yakutat Bay area and evidence for the extent of past glaciation, and noted the return of vegetation to areas in which glaciers had recently retreated. The work done during this trip was the primary focus of Tarr and Martin (1905) and Tarr and Martin (1906a, b, c) and featured in a number of other publications (e.g., Tarr, 1907a, b, c, d, e, 1909, 1910a, b; Tarr and Martin, 1907, 1912a, 1914). Digitized photographs from this expedition include images of the area's glaciers, faults and other features related to the 1899 earthquakes, and glacial landforms (Figs. 4 and 8).
Figure 8Digitized images from the 1905 Alaska expedition. (a) Hubbard Glacier from Osier Island (ID: tve_exp1905_100). (b) Terminus of Nunatak Glacier (ID: tve_lanternslide_0264).
Tarr again led a USGS-sponsored party to Yakutat Bay in the summer of 1906. One of the party's objectives was to cross the Malaspina Glacier, but they discovered that the normally navigable tributary glaciers east of the Malaspina had advanced and created impassable crevasse fields (Tarr, 1907d, e). At least two other glaciers in the Yakutat Bay area had also advanced since the summer of 1905 (Tarr and Martin, 1912a). Scientific findings of the expedition are described in Tarr (1907a–e, 1909, 1910a, b), Tarr and Martin (1907, 1912a, 1914), and von Engeln (1911). Popular accounts of the expedition include von Engeln (1906a, 1907a) and Alley (2012). In the collection , the 1906 digitized photos are primarily images of the glaciers along the eastern edge of the Malaspina Glacier and within Yakutat and Disenchantment bays and Russell Fjord (Fig. 9).
Figure 9Digitized images from the 1906 Alaska expedition. (a) Variegated Glacier from Gilbert Point (ID: tve_exp1906_219). (b) Turner and Haenke glaciers from Haenke Island (ID: tve_exp1906_205_02). (c) Turner Glacier from Gilbert Point (ID: tve_exp1906_213). (d) Hidden Glacier and outwash plain (ID: tve_lanternslide_0267).
Figure 10Digitized images from the 1909 Alaska expedition. (a) Shoup Glacier terminus (ID: tve_exp1909_032). (b) Miles Glacier and Copper River Railroad (ID: tve_exp1909_ 041).
The 1911 expedition, funded again by the National Geographic Society, was the most wide ranging of the Alaska expeditions. While Tarr, Martin, and the rest of the scientific party returned to a few previously visited sites around Prince William Sound, the overwhelming majority of locations had not been previously visited by Tarr or Martin. Members of the party spent time in Glacier Bay (Fig. 2), the Kenai Peninsula, and Prince William Sound before moving inland to the Wrangell Mountains. They then moved north into Interior Alaska, with stops at Fairbanks and other locations involved with gold mining. The group traveled up the Yukon River through Alaska and the Yukon (with a variety of stops including Dawson) before reaching the headwaters of that river in British Columbia (Fig. 1). Scientific observations are presented in Tarr and Martin (1912b, 1913). The digitized images also have the most variety of any of the expeditions: glaciers in southeast and south-central Alaska, railways, mining operations, roadhouses in Interior Alaska, city streets in Fairbanks and Dawson, and small settlements along the Yukon River (Fig. 11).
Figure 11Digitized images from the 1911 Alaska expedition. (a) Muir Glacier (ID: tve_lanternslide_0025). (b) Street in Dawson, Yukon (ID: tve_exp1911_042). (c) Effort to build diversion dam for Spencer Glacier stream (ID: tve_exp1911_030). (d) Tracks displaced by Spencer Glacier outlet stream (ID: tve_exp1911_020).
### 2.2.6 Ithaca and Upstate New York
The collection also includes images from closer to Tarr's home in Ithaca, New York. Over his 20 years at Cornell, Tarr accumulated images of glacial landforms and other geological features (including waterfalls) from Ithaca and upstate New York. Tarr used his observations in a number of publications on glacial erosion, the development of glacial landforms, and the geology of New York, including Tarr (1904), Tarr (1905a, b, c, d), and Tarr (1906a, b). Examples of digitized images from upstate New York are shown in Fig. 12.
Figure 12Digitized images from the Ithaca area. (a) Ridge of an esker, McLean, NY (ID: tve_ithaca_07). (b) Taughannock Falls in 1888 (ID: tve_ithaca_02).
3 Description of dataset
## 3.1 Original material
The original photographs are in the form of prints, glass plates, lantern slides, or negatives. The photographic material was placed by Tarr and his associates in individual paper envelopes with handwritten notes on the outside of the envelopes. As these envelopes were fragile due to age and not acid-free, the materials were rehoused in acid-free paper and stored with copies of the original envelopes. These materials are stored and publically accessible through the Cornell University Library Division of Rare and Manuscript Collections as part of the Ralph Stockman Tarr papers (collection number 14-15-92, 21 boxes) and the Oscar Diedrich von Engeln papers (collection number 14-15-856, 18 boxes). Due to budget constraints, we could not digitize all of the images but focused on images of glaciers and glaciated landscapes from the Alaska and Greenland expeditions as well as the materials that would produce the highest-quality images. There are several hundred other photographs in the RMC that were not digitized along with thousands of lantern slides housed at the Department of Earth and Atmospheric Sciences at Cornell University. Not all of this material is suitable for digitization as the original materials in some cases have significantly degraded over the past century. Some of the remaining materials are prints or duplicates of already scanned material. Most of the lantern slide subjects are not directly related to the topics covered by this paper.
## 3.3 Data availability
The digitized files (original size, full-resolution tiffs) were uploaded and made available through the Cornell University Library digital collections in the collection called Historic Glacial Images of Alaska and Greenland (http://digital.library.cornell.edu/collections/tarr, last access: 15 March 2020). The collection includes 1948 images with metadata, has a digital object identifier (https://doi.org/10.7298/X4M61H5R, last access: 9 February 2020), and can be cited as Tarr and Cornell University Library (2014). The photographs are believed to have no known United States copyright or other restrictions. The Library does not charge for permission to use such material and does not grant or deny permission to publish or otherwise distribute public domain material in its collections. As a matter of good scholarly practice, we recommend that patrons using Library-provided reproductions cite the Library, the DOI, and this article as the source of reproductions.
Beyond the metadata described above, which is listed with each image in the digital collection, additional information was compiled for some of the images. All of this information is included in Table S1, and brief descriptions of the additional fields can be found below. The lantern slides had a systematic naming convention (e.g., most images fall into the North America or glacier categories) and numbering system that was indexed in a handwritten ledger in the RMC that was used to find the slides for teaching and public presentations. In some cases, the envelope includes additional notes about the images that have not been included in the digital published metadata through the Cornell University Library. For example, some images included a letter grade for the quality of the image (A+ being the highest and D being the lowest), presumably assessed by Tarr or von Engeln. In some other images, these notes include the name of the photographer, or if the photograph is duplicated as a lantern slide set, the number of the lantern slide is given. The approximate geographic (in longitude and latitude) location of the images is also given when possible. These positions should be taken as approximate locations only; in many cases a precise location is not possible to determine without additional ground truthing. For example, some detailed descriptions of images (e.g., “NE side Russell fiord, opposite Marble Point”) allow more precise locations while others (e.g., “Glacier Bay”) only allow general locations. We do not attempt to precisely tag which part of a particular glacier is in an image; we assign one set of coordinates for each glacier and use those coordinates as the location for each occurrence of that glacier. For images that contain multiple points of interest, we assign coordinates based on the dominant feature. Table S1 also includes a direct link to each image.
We have not edited the glacier names in the metadata – it is possible that some photos labeled to show a certain glacier do not actually include that glacier, and it is also possible that photos that do not include the name of a glacier could include one. In several instances, names of glaciers used by Tarr and his colleagues were either not official names or were names that were subsequently changed. In these cases, the glacier is indexed by the name used by Tarr, and the currently accepted name is listed in the tables and the Supplement. For Greenland glaciers, our choice of accepted name was guided by Bjørk et al. (2015). As noted above each image was assigned a general region (designated a region in the metadata – e.g., Canada) and a more specific area (designated a subregion in the metadata – e.g., Yukon) when possible (for some images, location information was too vague to assign a more specific area). Several issues arose while assigning place names. One concerned the fact that multiple locations (sometimes separated by significant distances) shared the same name. As an example, there were Serpentine glaciers in Prince William Sound and Yakutat Bay in Alaska. In Greenland, there are multiple Nuussuaq peninsulas and Devil's thumbs. To make accurate location designations, we relied on locations of images taken within the same time frame; publications discussing the images and expeditions; and, in a few cases, field diary entries. For these locations, we have added information to clarify which place is being referred to in the image (e.g., Nuussuaq Peninsula, Upernavik Archipelago). Another issue concerned spelling of place names. Image subjects and locations recorded by Tarr and his colleagues had variable spellings for the same place (e.g., Nugsauk, Nugssauk) that in some cases differ from the current commonly accepted spelling (e.g., Nuussauq). In the figures and tables in this paper, we include both the spelling or spellings used by Tarr as well as the current commonly accepted spelling. We did not standardize or otherwise change the spellings of the transcribed titles. For instances where a designated subregion tag contained a variably spelled name, we used the current common spelling. Image titles also contained variable spellings of other words (e.g., fjord/fiord, canyon/canon); we left the spelling as it was transcribed.
4 Complimentary collections
5 Data availability
The digitized files are available through the Cornell University Library digital collections in the collection called Historic Glacial Images of Alaska and Greenland (http://digital.library.cornell.edu/collections/tarr, last access: 15 March 2020; Cornell University, 2014) and can be cited as Tarr and Cornell University Library (2014), https://doi.org/10.7298/X4M61H5R.
6 Conclusions
The newly digitized dataset will have a variety of uses for researchers. The images are of scientific interest for understanding glacier dynamics and ecological change, of public policy interest for documenting possible effects of climate change, and of historic and anthropological interest for capturing daily life in remote regions at the turn of the 20th century. The glacier images provide documentation of terminus positions and ice elevation and offer the possibility of ice volume estimates. Most the glaciers featured in the digitized images have undergone significant change over the past century, and comparison of the information in the images to modern data will provide new or more robust estimates of the extent of this change.
Supplement
Supplement.
Author contributions
Author contributions.
JE conceived of this project, and both JE and MP worked to collect the metadata, supervise undergraduate student workers, secure funding, interact with Cornell University Library staff to digitize the images and make them available online, and write the article.
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements
Acknowledgements.
We thank all of the Cornell University Library staff who facilitated the digitization and made the data available on the web: Rhea Garen, Wendy Kozlowski, Jason Kovari, David Lurvey, Hannah Marshall, Danielle Mericle, Liz Muller, and Melissa Wallace. We thank the nine undergraduate students at Purdue University and Cornell University who helped to catalog the Tarr and von Engeln collections and create the metadata: Phoebe Dawkins, Anant Hariharan, Haydn Lenz, Alexis Lopez-Cepero, MacKenzie McAdams, Sam Nadell, Ella Noor, Emma Reed, and Frank Tian. Figures were generated with the Generic Mapping Tools software of Wessel et al. (2013). We are grateful to the late Art Bloom for making us aware of these photographs in the first place. Finally, we thank reviewers Anders Anker Bjørk and Florence Fetterer as well as Topical Editor Reinhard Drews for constructive reviews that improved the manuscript.
Financial support
Financial support.
Funds for digitization of the images were provided by Cornell University through the Grants Program for Digital Collections in Arts and Sciences (Principal Investigator Aaron Sachs), through the Einhorn Discovery Grant and Undergraduate Research Program of the College of Arts and Sciences (for student Emma Reed), and the Morley Research Fund from the College of Agriculture and Life Sciences (for student Sam Nadell).
Review statement
Review statement.
This paper was edited by Reinhard Drews and reviewed by Anders Anker Bjørk and Florence Fetterer.
References
Alley, B.: Rivers of Ice: The Yakutat Bay Expedition of 1906, Outskirts Press, 93 pp., 2012.
Bjørk, A. A., Kjær, K. H., Korsgaard, N. J., Khan, S. A., Kjeldsen, K. K., Andresen, C. S., Box, J. E., Larsen, N. K., and Funder, S. V.: An aerial view of 80 years of climate-related glacier fluctuations in southeast Greenland, Nat. Geosci., 5, 427–432, https://doi.org/10.1038/ngeo1481, 2012.
Bjørk, A. A., Kruse, L. M., and Michaelsen, P. B.: Brief communication: Getting Greenland's glaciers right – a new data set of all official Greenlandic glacier names, The Cryosphere, 9, 2215–2218, https://doi.org/10.5194/tc-9-2215-2015, 2015.
Brice, W. R.: Ralph Stockman Tarr: Scientist, writer, teacher, Geol. Soc. Am., Centennial Special Volume 1, 215–235, 1985.
Brice, W. R.: Cornell Geology Through the years, College of Engineering, Cornell University, 230 pp., 1989.
Conner, C., Streveler, G., Post, A., Monteith, D., and Howell, W.: The Neoglacial landscape and human history of Glacier Bay, Glacier Bay National Park and Preserve, southeast Alaska, USA, The Holocene, 19, 381–393, 2009.
Cornell University: Digital Collections, Historic Glacial Images of Alaska and Greenland, from the Ralph Stockman Tarr expeditions (1896; 1905–1911), available at: http://digital.library.cornell.edu/collections/tarr (last access: 15 March 2020), 2014.
Gardner, A. S., Moholdt, G., Cogley, J. G., Wouters, B., Arendt, A. A., Wahr, J., Berthier, E., Hock, R., Pfeffer, W. T., Kaser, G., and Ligtenberg, S. R: A reconciled estimate of glacier contributions to sea level rise: 2003 to 2009, Science, 340, 852–857, 2013.
Hill, E. A., Carr, J. R., Stokes, C. R., and Gudmundsson, G. H.: Dynamic changes in outlet glaciers in northern Greenland from 1948 to 2015, The Cryosphere, 12, 3243–3263, https://doi.org/10.5194/tc-12-3243-2018, 2018.
Hoppin, B.: A Diary Kept While with the Peary Arctic Expedition of 1896, (New Haven: n.p.), 80 pp., 1897.
Huntington, P. A. M.: Robert E. Peary and the Cape York Meteorites, Polar Geogr., 1, 53–65, https://doi.org/10.1080/789609353, 2002.
Koelsch, W. A.: William Libbey of Princeton: A Forgotten Geographer, Middle States Geogr., 49, 1–8, 2016.
Larsen, C. F., Motyka, R. J., Arendt, A. A., Echelmeyer, K. A., and Geissler, P. E.: Glacier changes in southeast Alaska and northwest British Columbia and contribution to sea level rise, J. Geophys. Res., 112, F01007, https://doi.org/10.1029/2006JF000586, 2007.
Larsen, C. F., Burgess, E., Arendt, A. A., O'Neel, S., Johnson, A. J., and Kienholz, C.: Surface melt dominates Alaska glacier mass balance, Geophys. Res. Lett., 42, 5902–5908, https://doi.org/10.1002/2015GL064349, 2015.
Maness, J., Duerr, R., Dulock, M., Fetterer, F., Hicks, G., Merredyth, A., Sampson, W., and Wallace, A.: Revealing our melting past: Rescuing historical snow and ice data, Geophys. Res. J., 14, 92–97, https://doi.org/10.1016/j.grj.2017.10.002, 2017.
Martin, L.: Alaskan earthquakes of 1899, Bull. GSA, 21, 339–406, 1910.
Martin, L.: The National Geographic Society researches in Alaska, The Nat. Geogr. Mag., 22, 537–561, 1911.
Martin, L.: Alaskan Glaciers in Relation to Life, Bull. Am. Geographical Soc., XLV, 801–818, 1913a.
Martin, L.: Some features of glaciers and glaciation in College Fiord, Prince William Sound, Alaska, Z. Gletscherk., 7, 289–333, 1913b.
Martin, L.: “Juneau-Yakutat Section”, Guide Books of Excursions in Canada, No. 10. (Twelfth International Geological Congress, Toronto, 1913.) Ottawa: Canadian Geological Survey, 121–76, 1913c.
Meier, M. and Post, A.: What are Glacier Surges?, Can. J. Earth Sci., 6, 807–817, https://doi.org/10.1139/e69-081, 1969.
Meier, M. F., Rasmussen, L. A., Krimmel, R. M., Olsen, R. W., and Frank, D.: Photogrammetric determination of surface altitude, terminus position, and ice velocity of Columbia Glacier, Alaska, U.S. Geol. Surv. Prof. Pap., 1258-F, 41 pp., 1985.
Molnia B. F.: Late nineteenth to early twenty-first century behavior of Alaskan glaciers as indicators of changing regional climate, Global Planet. Change, 56, 23–56, https://doi.org/10.1016/j.gloplacha.2006.07.011, 2007.
Molnia, B. F.: Glaciers of North America – Glaciers of Alaska, U.S. Geological Survey Professional Paper 1386-K, 2008.
National Snow and Ice Data Center (comp.): Glacier Photograph Collection, Version 1, [Indicate subset used], Boulder, Colorado USA. NSIDC: National Snow and Ice Data Center, https://doi.org/10.7265/N5/NSIDC-GPC-2009-12, 2015.
Plafker, G. and Thatcher, W.: Geological and geophysical evaluation of the mechanisms of the great 1899 Yakutat Bay earthquakes, Active Tectonics and Seismic Potential of Alaska, 215–236, 2008.
Post, A.: Alaskan Glaciers: Recent Observations in Respect to the Earthquake-Advance Theory, Science, 148, 3668, 366–368, https://doi.org/10.1126/science.148.3668.366, 1965.
Post, A., O'Neel, S., Motyka, R. J., and Streveler, G.: A complex relationship between calving glaciers and climate, Eos Trans. AGU, 92, 305, 2011.
Tarr, R. S.: The Cornell Expedition to Greenland, Science, 4, 520–523, 1896.
Tarr, R. S.: Former extension of Cornell Glacier near the southern end of Melville Bay, Bull. Geol. Soc. Am., 8, 251–268, pls. 25–29, 1897a.
Tarr, R. S.: The margin of Cornell glacier (Greenland), Am. Geol., 20, 139–156, pls. VI–XII, 1897b.
Tarr, R. S.: Former extension of ice in Greenland, Science, 5, 804–805 (see also pg. 344), 1897c.
Tarr, R. S.: The Arctic sea ice as a geological agent, Am. J. Sci., 3, 223–229 (Also: Supplement 44, pp. 17, 941–17,942), 1897d.
Tarr, R. S.: Difference in the climate of Greenland and American sides of Davis Strait and Baffin Bay, Am. J. Sci., 3, 315–321, 1897e.
Tarr, R. S.: Evidence of glaciation in Labrador and Baffin Land, Am. Geol., 19, 191–197, pl. X, 1897f.
Tarr, R. S.: Valley glaciers of the upper Nugusak peninsula, Greenland, Am. Geol., 19, 262–267 pl. XV, 1897g.
Tarr, R. S.: The glaciers of Greenland, Sci. Am., 76, 216–217, 1897h.
Tarr, R. S.: Rapidity of weathering and stream erosion in the Arctic latitudes, Am. Geologist, 19, 131–136, 1897i.
Tarr, R. S.: Hanging valleys in the Finger Lakes region of central New York, Am. Geol. 33, 271–291, 1904.
Tarr, R. S.: Some instances of moderate glacial erosion, J. Geol., 13, 160–173, 1905a.
Tarr, R. S.: Moraines of Seneca and Cayuga Lake valleys, Bull. Geol. Soc. Am., 16, 215–228, 1905b.
Tarr, R. S.: Drainage features of central New York, Bull. Geol. Soc. Am., 16, 229–242, 1905c.
Tarr, R. S.: The gorges and waterfalls of central New York, Am. Geog. Soc. Bull., 27, 193–212, 1905d.
Tarr, R. S.: The Yakutat Bay region, U. S. Geol. Survey, Bull. No. 284, 61–64, 1906a.
Tarr, R. S.: Glacial erosion in the Finger Lakes region of central New York, Geology, 14, 18–21, 1906b.
Tarr, R. S.: Watkins Glen and other gorges of the Finger Lakes region of central New York, Pop. Sci. Monthly, 68, 387–397, 1906c.
Tarr, R. S.: Second expedition to Yakutat Bay, Alaska, Phlla. Geog. Soc. Bull., vol. 5, No. 1, Jan., 1–14, Glacial erosion in Alaska, Pop. Sci. Mon., vol. 70, No. 2, Feb., 1007, 09–110, 1907a.
Tarr, R. S.: Glacial erosion in Alaska, Pop. Sci. Monthly, 70, 99–110, 1907b.
Tarr, R. S.: Recent advance of glaciers in the Yakutat Bay region, Alaska, Bull. Geol. Soc. Am., 18, 257–286, 1907c.
Tarr, R. S.: The Malaspina Glacier, Am. Geogr. Soc. Bull., 3, 273–285, 1907d.
Tarr, R. S.: The advancing Malaspina Glacier, Science, 25, 34–37, 1907e.
Tarr, R. S.: The Yakutat Bay region, Alaska. U. S. Geol. Survey, Professional Paper 64, 190, 183 pp., 1909.
Tarr, R. S.: Oscillations of Alaskan glaciers, Abstract : Science, new ser., vol. 32, Aug. 5, 185–186, Abstract and discussion, Bull. Geol. Soc. Am., 21, 758–759, 1910.
Tarr, R. S.: The theory of advance of glaciers in response to earthquake shaking, Z. Gletscherk., Band V, 1–35, 1910.
Tarr, R. S. and Cornell University Library: Historic Glacial Images of Alaska and Greenland from the Ralph Stockman Tarr expeditions (1896; 1905–1911), Cornell University Library, https://doi.org/10.7298/X4M61H5R, 2014.
Tarr, R. S. and Martin, L.: Recent change of level in Alaska, Science, 22, 879–880, 1905.
Tarr, R. S. and Martin, L.: Recent Changes of Level in the Yakutat Bay Region, Alaska, Bull. GSA, 17, 29–64, 1906a.
Tarr, R. S. and Martin, L.: Recent change of level in Alaska, Geogr. J., July, 30–43, 1906b.
Tarr, R. S. and Martin, L.: Glaciers and glaciation of Yakutat Bay, Alaska, Am. Geog. Soc. Bull., 8, 1906, 145–167, 1 plate, figs. 24, Abstract: Ibid., vol. 38, No. 2, Feb., 99–101, 1906c.
Tarr, R. S. and Martin, L.: Position of Hubbard Glacier front in 1792 and 1794, Am. Geog. Soc. Bull., 39, 129–136, 1907.
Tarr, R. S. and Martin, L.: The National Geographic Society's Alaskan Expedition of 1909, The National Geographic Magazine, XXI, 1–53, 1910.
Tarr, R. S. and Martin, L.: The earthquakes of Yakutat Bay, Alaska in September 1899, U.S. Geol. Surv. Prof. Pap., 69, 135 pp., 1912a.
Tarr, R. S. and Martin, L.: An Effort to Control a Glacial Stream, Ann. Assoc. Am. Geogr., 2, 25–40, 1912b.
Tarr, R. S. and Martin, L.: Glacial Deposits of the Continental Type in Alaska, J. Geol., XXI, 289–300, 1913.
Tarr, R. S. and Martin, L.: Alaskan glacier studies of the National Geographic Society in the Yakutat Bay, Prince William Sound and lower Copper River regions: Washington, D.C., National Geographic Society, 498 p., 1914.
Tarr, R. S. and Cornell University Library: Historic Glacial Images of Alaska and Greenland from the Ralph Stockman Tarr expeditions (1896; 1905–1911), Cornell University Library, https://doi.org/10.7298/X4M61H5R, 2014.
von Engeln, O. D.: Some Alaskan Days – and nights: Cornell Era, Dec., 122–124, 1906.
von Engeln, O. D.: An Alaskan wonderplace: Outlook, v. 86 May 25, 169–180, 1907a.
von Engeln, O. D.: The photographic equipment of a sub-arctic exploring party, Photo-Era, October, 1907b.
von Engeln, O. D.: Photography in glacial Alaska: The National Geographic Magazine, 21, 54–62, 1910.
von Engeln, O. D.: Phenomena associated with glacier drainage and wastage with especial reference to observations in the Yakutat Bay Region, Alaska, Reprinted from Zeitschrift für Gletscherkunde, Band VI, Heft 2, 1911.
Wessell, P., Smith, W. H. F., Scharroo, R., Luis, J. F., and Wobbe, F.: Generic Mapping Tools: Improved version released, EOS Trans. AGU, 94, 409–410, 2013.
|
2020-05-30 15:11:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23531296849250793, "perplexity": 11359.540003382352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409337.38/warc/CC-MAIN-20200530133926-20200530163926-00477.warc.gz"}
|
https://www.gamedev.net/forums/topic/217335-c-structs-vs-variables/
|
#### Archived
This topic is now archived and is closed to further replies.
# C++ structs vs. variables
This topic is 5222 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Is there any overhead involved in stuffing a bunch of variables into a struct? For instance, if I had a bunch of variables: int a, b, c, d, e, f, g, h, i, j; and I wanted to put them into a struct to make it more mentally managable: struct foo { int a, b, c, d, e, f, g, h, i, j; };, is there now any overhead in accessing those variables? In other words, if I previously did this: SomeFunction(a); and now I do: SomeFunction(my_struct.a), is that any slower?
##### Share on other sites
Twenty years ago I think it was. Today the hardware has direct addressing modes for this type of memory access.
##### Share on other sites
it''s exactly the same - the compiler knows the layout of the struct at compile time.
##### Share on other sites
Architect first, optimize later.
Even if it was slower, optimization should not come at the cost of organization and readability.
##### Share on other sites
What about if I have a lot of structs, or the structs are really big? Ex. my_struct.a.foo.bar.blah.erm.wut.huh I read that direct mode addressing only gives you a small range to work with. So there must be some overhead at some point, right?
##### Share on other sites
quote:
Original post by PlayGGY
These kinds of questions are just interesting to me. I also enjoy the "which one is faster, array[x][y] or array[y][x]?" discussions
##### Share on other sites
a.b.c.d.e.f.h and a.g are exactly the same speed. Here''s why.
Let''s suppose you have an int myInt on the stack (as a local variable). That means that in one of your registers is a stack pointer, and a certain (constant) offset up (or down, depending on architecture), is the beginning of your myInt. This offset is known at compile-time, since it''s just related to all the local variables you define. (In case you were wondering, this is why you used to have to define all your variables at the top of blocks in C). So when you want to, say, assign 3 to myInt, the computer does something like *(SP + 12) = 3, where 12 is the offset into the stack. If instead of myInt you have myStruct which has a struct which has a struct which has an int myOtherInt, etc, you still know--at compile time--where, relative to the stack pointer, myOtherInt is. So you can just change the offset, and you''re good to go.
What will change performance is if your struct stores, not structs, but pointers to structs, since those require extra run-time lookups. a->b->c->d->e->f is considerably slower than f.
"Sneftel is correct, if rather vulgar." --Flarelocke
##### Share on other sites
quote:
Original post by Sneftel
a.b.c.d.e.f.h and a.g are exactly the same speed. Here's why.
quote:
Original post by sjelkjd
it's exactly the same - the compiler knows the layout of the struct at compile time.
You don't know that, since it is totally up to the compiler. He never said which compiler and which platform he is using and neither did you.
There are cases on today's hardware with today's compilers where putting the variables into a struct would result in slower code.
[edited by - JohnBolton on April 3, 2004 3:47:36 PM]
##### Share on other sites
quote:
Original post by JohnBolton
There are cases on today''s hardware with today''s compilers where putting the variables into a struct would result in slower code.
Like what? (not disbelieving, just interested)
"Sneftel is correct, if rather vulgar." --Flarelocke
1. 1
2. 2
3. 3
Rutin
22
4. 4
JoeJ
16
5. 5
• 14
• 29
• 13
• 11
• 11
• ### Forum Statistics
• Total Topics
631775
• Total Posts
3002284
×
|
2018-07-22 03:42:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17772887647151947, "perplexity": 2935.5215680926426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593004.92/warc/CC-MAIN-20180722022235-20180722042235-00344.warc.gz"}
|
https://chemistry.stackexchange.com/questions/37077/total-pressure-in-a-chamber-calculation
|
# Total pressure in a chamber calculation
I have a chamber containing two different gases and a liquid which needs to be heated at a certain temperature to bring it to vapor state. I have worked out that we can calculate the total pressure of a mixture of gas using Dalton's law using partial pressure of each gas. To calculate total vapor pressure we have Raoult's law. If I have both gases and vapors in the medium can we calculate the total pressure by adding them both, as in a "collecting gases over water experiment"?
• Okay, let me put it this way. If I know the partial pressure of 2 gases and vapor pressure of liquid turned vapor, can I directly add them to get total pressure in the chamber? – NidhiS Sep 9 '15 at 10:53
• This not my homework question. I am an electronics engineer working on an interdisciplinary project wherein I need some information on mixture of gases. I have worked out that we can calculate the total pressure of a mixture of gas using Dalton's law using partial pressure of each gas. To calculate total vapor pressure we have Raoult's law. If I have both gases and vapors in the medium I think we can calculate the total pressure by adding the both like 'collecting gases over water experiment'. I read these things long ago so I wanted to confirm it. – NidhiS Sep 9 '15 at 11:15
• Thank you for the clarification, could you please add this detail to the question – user15489 Sep 9 '15 at 11:17
Dalton's Law is only true for ideal compounds, which theoretically have no intermolecular forces between their molecules. Those forces become more and more significant when increasing the concentration of any compound, because all molecules become packed up closer.
This will lower the pressure, because if any molecule is about to hit the container wall, it gets pulled backwards $-$ and thus slowed $-$ due to the attraction of the other molecules behind it. But for high temperature this effect is negligible, because they can't be slowed down so easily.
Now lets come to the calculation using Kay's rule:
Since you only have one liquid component, you don't need Roult's Law to calculate the vapor pressure for a mixture of liquids.
1.) You need to find the values for the critical temperatures and critical pressures for your components (gases and water vapor).
2.) You need to know the relative amounts of each substance: $$y_i = \frac{n_i}{n_m}$$
$n_i$... amount of a specific component [mol]
$n_m$... total amount of substance of the mixture [mol]
3.) Calculate the pseudo-critical temperature and pressure: $$T_c' = y_1 \times T_{c1} + y_2 \times T_{c2} + y_3 \times T_{c3}$$
$$P_c' = y_1\times P_{c1} +y_2\times P_{c2} +y_3\times P_{c3}$$
$T_c$... critical temperature of a specific component [K]
$P_c$... critical pressure of a specific component [Pa]
4.) calculate the pseudo-reduced values for T and the volume: $$T_r = T / T_c'$$ $$v_r' = V / (R \times T_c' / P_c')$$
$T$... temperature in the container
$V$... volume of the gas phase [m$^3$]
$R$... universal gas constant = 8.314 J/(K mol)
5.) Now you need to look up $v_r$ and $T_r$ in a compressibility chart to get the value for a specific $Z$ factor, where their lines intersect.
6.) And finally calculate the pressure: $$P = ZRT / V$$
This will be more accurate than if you would use the standard way for ideal gases, especially for low temperatures
References: real gases
calculation
compressibility chart examples
|
2021-08-03 08:40:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5841869711875916, "perplexity": 604.0840295011865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154432.2/warc/CC-MAIN-20210803061431-20210803091431-00325.warc.gz"}
|
https://www.coursehero.com/file/p7tt222/k-For-any-given-sequence-s-n-s-n-has-a-convergent-subsequence-s-n-k-Sometimes/
|
k For any given sequence s n s n has a convergent subsequence s n k Sometimes
# K for any given sequence s n s n has a convergent
This preview shows page 4 - 8 out of 10 pages.
(k) For any given sequence ( s n ), ( s n ) has a convergent subsequence ( s n k ). Sometimes true: The sequence s n = 1 , n odd 1 /n, n even has a con- vergent subsequence s 2 n 0. The sequence s n = n has no convergent subsequences. Similarly, the sequence s n = n 2 has no convergent subsequences. 4
(l) For any bounded sequence ( s n ), lim sup s n = sup { s n } . Sometimes true: See the sequences in (j). (m) It α is a subsequential limit of a bounded sequence ( s n ), then α lim sup s n Always true: By definition: lim sup s n is the least upper bound of the set of all subsequential limits. (n) If every subsequence of a sequence ( s n ) is convergent, then ( s n ) itself must be convergent. Always true: If every subsequence of ( s n ) is convergent, then ( s n ) must be convergent since ( s n ) is a subsequence of itself. 5
(o) If ( s n ) is a divergent sequence, then some subsequence of ( s n ) must diverge. Always true: If every subsequence of ( s n ) is convergent, then ( s n ) is convergent as shown immediately above. (p) If ( s n ) is unbounded above, then ( s n ) has an increasing subsequence ( s n k ) which diverges to + . Always true: This is Theorem 3, Section 19. 6
2. Let ( s n ) be a positive sequence such that lim n →∞ s n +1 s n = L > 1 . Prove that s n + . Hint: lim n →∞ x n = + for any number x such that x > 1. Choose a number c such that 1 < c < L . Let ϵ = L c . Since lim n →∞ s n +1 s n = L There is a positive integer N such that s n +1 s n L < ϵ for all n > N which implies ϵ < s n +1 s n L < ϵ and L ϵ < s n +1 s n < L + ϵ for all n > N.
#### You've reached the end of your free preview.
Want to read all 10 pages?
• Fall '08
• Staff
• Mathematical analysis, Limit of a sequence, Limit superior and limit inferior, Sn, subsequence
Stuck? We have tutors online 24/7 who can help you get unstuck.
Ask Expert Tutors You can ask You can ask You can ask (will expire )
Answers in as fast as 15 minutes
|
2020-10-25 11:15:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9271539449691772, "perplexity": 576.2779496644754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888931.67/warc/CC-MAIN-20201025100059-20201025130059-00061.warc.gz"}
|
https://meta.mathoverflow.net/questions/4863/which-latex-mathjax-features-do-we-need-or-want-to-have-supported-on-the-site
|
# Which Latex / MathJax features do we need or want to have supported on the site?
As we've started discussing, Stack Exchange is planning to redo the editor for all questions and answers across all their sites. As articulated by Emilio Pisanty, these changes will have significant implications for MathJax.
In the initial discussion, I thought the possibility that these implications might even include not fully supporting MathJax seemed like a worst-case scenario, which we would only even mention to confirm that everyone agreed it was not a real option. But it now seems that among the options Stack Exchange has suggested so far, the most reasonable one would entail exactly that, as it would involve switching to some different Latex-like framework.
If we go down that road, then we're going to have to get even more involved with things: we will need to think about which Latex / MathJax features we need to have supported on the site. Because if you leave it to the developers to decide which Latex features are worth supporting, then you end up with something like the Microsoft Equation Editor, a sort of caricature of what it means to support mathematical typesetting.
Though this would be a large task, it also presents an opportunity. If we were to switch away from MathJax, whatever new solution replaces it might be an improvement in certain respects.
So perhaps I'm jumping the gun, but I think it may be worth getting started on this task. We're still in the early stages, so there's no way we will come to a definitive or comprehensive answer right now, but we may start to get some ideas. Up to now, I've never had to dialog with a non-mathematician about what I need in Latex, so I'm not really sure what their preconceptions are about what we need. Some things I might guess they don't anticipate include user-defined macros and commutative diagrams, but there are probably much more basic things they'll need to have explained. On the flip side, my impressions of what they will find difficult or easy to implement are just as ill-informed as their impressions of what we will need. So the question is:
Question: Which Latex features are important to have supported on this site? Which ones, if not strictly necessary, would be very nice to have supported?
It's worth considering both features which are currently supported by MathJax, and ones which are not.
• I'd like this question to be CW, but I'm realizing that it's been years since I converted a question rather than an answer to CW and I don't see how to do it. Is it something that stopped being supported at some point? I'm also not familiar enough with the meta tags to do them justice. – Tim Campion Feb 1 at 18:37
• This feels premature to me at this stage. If the devs do want to switch the backend then this will definitely need to be done, but it's a lot of community work so I'd say let's do that only if it's clear that it's required. – Emilio Pisanty Feb 1 at 18:44
• On a more strategic level, it feels to me that a more effective strategy is to look at the proposed backend (be it KaTeX or anything else), find what its differences are with MathJax, and look for where they overlap with existing posts, and then work from there. But then again you're right that this might not be assertive enough, if non-back-compatible options are on the table. – Emilio Pisanty Feb 1 at 18:46
• @EmilioPisanty You may well be right. For me, commutative diagram support is an essential requirement which I'm certain will need to be explicitly explained to the developers. But I don't want to go around saying "the main thing to look for in a new Latex-like framework is commutative diagram support" before soliciting input from others who may well have specific needs that are just as clear. For example, on the internet I generally don't really use many user-defined macros, but I know that some folks do. Do they find them essential, or just convenient? I don't know. – Tim Campion Feb 1 at 18:57
• This reminded me of an older discussion (about MathJax) on Mathematics Meta: Poll for MathJax macros that should be automatically loaded. – Martin Sleziak Feb 1 at 19:01
• Whatever is suggested, I would like to make sure we absolutely veto the (old?) Wikipedia-style images of rendered equations/expressions. Also, the oft-used WP hack of formatted text as a stand-in for variables (just putting it in italics is not enough!) – theHigherGeometer Feb 2 at 6:53
• Better diagram support would be nice, since AMScd doesn't really do non-diagonal arrows. I don't know what to suggest in its place that SE would actually consider, though. – theHigherGeometer Feb 2 at 6:55
• Since @DavidRoberts mentioned commutative diagrams, I'll add links to some previous discussions on this meta: Diagrams in MathJax via xypic.js, Is it possible to use tikzcd code in MO posts?, Triangle commutative diagram does not work here at MO. Commutative diagrams were mentioned also here: Big list of feature requests and suggestions for a fantasy MO 3.0. – Martin Sleziak Feb 2 at 8:03
• Perhaps its worth pointing out that SE views these changes as a chance to make things better. It's probably counterproductive not to at least try to get on board with their optimism. So maybe we should also put more emphasis on thinking about what Latex features are not supported by MathJax, but would be nice to add. Coming back to commutative diagrams, as others have also pointed out there's significant room for improvement. – Tim Campion Feb 2 at 14:14
• It occurs to me that someone who knows how to do some basic scripting should be able to download the last few years of MO and extract some statistics on which LaTeX features are used how often. This would probably be more useful than asking individual people's opinion. – David E Speyer Feb 3 at 15:05
• @DavidESpeyer I'll add that the data dumps (in form of an xml-file) of Stack Exchange sites - including MO - are publicly available: archive.org/details/stackexchange (I think they are updated quarterly.) So if somebody wants to analyze data, they can be obtained from there. – Martin Sleziak Feb 3 at 17:41
• @DavidESpeyer And, of course, one could also use SEDE or built-in-search. I have tried to expand on this a bit in the MathOverflow chatroom. – Martin Sleziak Feb 4 at 6:53
• As a non-programmer, how is something like backwards compatibility implemented (or is it not?). Presumably any huge overhaul like this will wreak havoc on old posts, no matter how careful the translation from one system to another is done. Is it inevitable that we would be spending years finding incompatibilities in the future and having to fix them by hand? – Dan Rust Feb 4 at 17:30
• @DanRust In my opinion, it's a bit premature to expect that some changes similar the ones described in the question are actually going to happen. Let's wait and see whether something like that is actually confirmed. OTOH, the MathOverflow community does not seem to be that bothered about rendering old posts - at least, there was no reaction when I brought up some broken posts here: Problem with posts and comments relying on macros defined elsewhere. – Martin Sleziak Feb 4 at 20:40
• @DavidESpeyer Although there is a bit of difference (you asked about LaTeX features and this is about commands used in the post), but still, the SEDE query posted by Glorfindel on Mathematics Meta might be of interest in connection with your inquiry: What Mathjax commands are most often used on this site? – Martin Sleziak Feb 23 at 14:37
This is not an answer expressing a preference, but one giving some resources for people to consider.
• Intmath.com made a Speed Comparison Test between KaTeX and MathJax. MathOverflow is currently running MathJax 2.7.5; the new version that is MathJax 3 runs significantly faster. KaTeX does run even faster still.
• There are some breaking changes between MathJax 2.7 and MathJax 3. Though I don't think they are major enough to prevent us from upgrading.
• MathJax 3 is fairly easy to install, and if you use the autoload extension you (meaning StackExchange) don't need to configure precisely the list of extensions to be loaded.
• The list of all supported macros: MathJax 3; KaTeX. As you can see the two are pretty close to feature parity; there are some odd commands here and there that KaTeX doesn't support. The most prominent difference that may be important for MathOverflow is the support for commutative diagrams. MathJax supports amscd style diagrams which has no counterpart in KaTeX.
• Strange that mbox is missing. Since they have textrm, it wouldn't be such a big deal in the future, but mbox is a little nicer and is what I usually use. – David E Speyer Feb 3 at 22:10
• MathJax3 handles configuration options in a significantly different way from MathJax2.7, and the mechanism for reprocessing content that has changed is also different. This causes pain in contexts where MathJax is embedded in some other large framework. That applies to Moodle, for example. I don't know whether it applies to the StackExchange framework. – Neil Strickland Feb 3 at 22:56
• @DavidESpeyer: This is off-topic, but let me mention that for setting part of an equation in 'text' font, one should typically use \text{}, \textrm{}, \mathrm{} or AMS \OperatorName{} rather than \mbox{}. See this answer on TeX.SE for a detailed discussion. – Mateusz Kwaśnicki Feb 4 at 9:03
• The usual thing I am using these commands for is things like $\{ n : n \ \mbox{squarefree and divisible by at least$3$primes} \}$, where I want to use normal text mode spacing inside the box. – David E Speyer Feb 4 at 12:13
• Or $(1+1/n)^n = \exp(n (1+1/n-1/(2n^2) + (\mbox{lower order terms})))$. – David E Speyer Feb 4 at 12:16
• Looks like I might just want \text, I'll play with that. – David E Speyer Feb 4 at 12:26
• @DavidESpeyer afaik you may put any kind of TeX code inside \text – მამუკა ჯიბლაძე Feb 9 at 19:37
• As for amscd, this is something that has been added do KaTeX recently, I don't know when it will be officially released but it works on this preview page: deploy-preview-2396--katex.netlify.app – Sil Feb 12 at 19:32
• Also one of the breaking changes for switching to MathJax 3 was line breaking which is still not supported there... that should be critical to SE sites, see Davide Cervone's comments here math.meta.stackexchange.com/questions/30901/… – Sil Feb 12 at 19:38
Need:
• Most importantly, a true preview which shows me exactly what I have written and which I can look at without affecting my LaTeX.
• Support for at least the symbols in standard LaTeX and amssymb (including amsfonts).
• At least one of array or matrix.
• Both inline and displayed math.
Currently have and very much want:
• Simple user defined macros. (I'm not saying they need to implement the full Turing complete LaTeX macro language, just that I should be able to write $\def\RR{\mathbb{R}}$ and then have $\RR$ turn into $$\mathbb{R}$$.)
• array, matrix, bmatrix, pmatrix and smallmatrix.
• cases
• overbrace and underbrace
• mbox
Would be nice
• xymatrix
• tikz
• My impression is that tikz is super-complicated. When you say it would be nice, how much of that power do you have in mind? Is it mostly tikzcd stuff? (I don't know if that would make a difference.) – Tim Campion Feb 3 at 17:46
• I don't think I know enough about tikz to answer this. I generally use it when xymatrix won't do the job, so I am already filtering for hard cases. – David E Speyer Feb 3 at 17:53
• Yeah, I suppose the reason that tikzcd is more powerful than xymatrix to begin with is probably closely tied up with the fact that there's this huge infrastructure behind it. But at least it sounds like you'd want it primarily for commutative diagrams rather than drawing pictures. – Tim Campion Feb 3 at 17:57
• cases is also rather important, I guess. Also underbrace. – Martin Rubey Feb 3 at 19:13
• I wonder if I should have made each response a separate answer so votes could be more meaningful. Martin Rubey's are all things I would put in the middle category; I'll add them. – David E Speyer Feb 3 at 21:54
• One interesting missing item here is \begin{align}. If nothing else, there's ~4k posts with it on MO. – Emilio Pisanty Feb 4 at 0:05
• @EmilioPisanty I considered adding that, since it is an obvious companion to matrix and array, but I almost never use it myself. I think what this is showing is that the answer format I chose isn't really a good one. – David E Speyer Feb 4 at 2:09
• Davide Cervone left a comment concerning diagonal arrows, xypic, MathML in a discussion on Mathematics Meta. (Although that was back in 2013.) – Martin Sleziak Feb 4 at 11:55
• Implementing Tikz in Mathjax looks like a programmer's worst dream. That package contains everything but the kitchen sink. – Federico Poloni Feb 14 at 18:42
Personally I have never used anything very complicated. I use inline maths, displayed equations and aligned sets of equations, sometimes with matrices. I can't remember whether I have ever used fancier aspects of the array environment, but it's not hard to imagine doing so. Sometimes I use commutative diagrams, but the current arrangements for that are poor. I have never used macros on MathOverflow, but perhaps I should have done.
I have not investigated KaTeX's claim to be faster than MathJax, but that would be somewhat beneficial if it were true.
Sorry if this is not interest of you. I don't know whether it is possible or not but I'd like MathJax have autocomplete feature like tex editors for common commands. e.g. \over---> \overline or \overbrace etc.
This is not so bad but text mode inside math mode is a bit deferent from main text that I think it should be like main text as $$\LaTeX$$ is. e.g. $$a^2+b^2=c^2\quad \text{this text is bold and RomanMath} \quad \sqrt{a^2+b^2}=c.$$
• This seem more like a feature of a specific editor than a feature of MathJax. Perhaps it is worth mentioning that Overleaf has some kind of autocompletion. (You reminded me of this post on Mathematics Meta: Some suggestion for MathJax. The post is now deleted - so 10k+ link.) – Martin Sleziak Mar 17 at 23:12
• This question does not ask for a list of feature requests for MathJax, but for a list of nontrvivial features already implemented in MathJax that are used often enough so that we need them preserved in the event that MathJax is replaced with a different TeX renderer. – Emil Jeřábek Mar 18 at 7:46
|
2021-04-11 03:32:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6146877408027649, "perplexity": 1267.9740268633598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060927.2/warc/CC-MAIN-20210411030031-20210411060031-00276.warc.gz"}
|
http://serialmentor.com/blog/2014/10/4/r-markdown-the-easiest-and-most-elegant-approach-to-writing-about-data-analysis-with-r
|
# R Markdown, the easiest and most elegant approach to writing about data analysis with R
This weekend, I finally spent some time learning R Markdown. I had been aware of its existence for a while, but I had never bothered to check it out. What a mistake. R Markdown rocks! It’s hands down the easiest and most elegant method to creating rich documents that contain data analysis, figures, mathematical formulas, and text. And it’s super easy to learn. I wager that anybody who has RStudio installed can create a useful document in 30 minutes or less. So if you use R, and you’ve never used R Markdown, give it a try.
R Markdown provides a literate programming platform for the R language. Literate programming, invented by Donald Knuth, allows users to write both a program and a document describing the program, at the same time. In the case of R, this means that you can write a document that contains R code, the output that is generated when the R code is run (including graphs), and prose describing the R code and its output. To give you an example, I started writing a tutorial for R’s ggplot2 library this weekend, and the original R Markdown file as well as the HTML output generated from that file are available here.
What does the word Markdown stand for? Markdown is a minimalist approach to writing strutured documents. It consists of plain text with a few simple directives to mark sections, turn text bold or italics, or insert quotes. If you have ever edited a wikipedia article, you have used Markdown.
To give you an example, this is Markdown text:
We can make text **bold**, *italics*, or look like code.
We can also insert links, [e.g. to wikipedia,](http://www.wikipedia.org/) we can quote things:
> It is time to eat — Hungry John
or make lists:
1. Item 1
2. Item 2
3. Item 3
It will be rendered like this:
We can make text bold, italics, or look like code. We can also insert links, e.g. to wikipedia, we can quote things:
It is time to eat — Hungry John
or make lists:
1. Item 1
2. Item 2
3. Item 3
R Markdown works the same, only that it adds the option to insert R code blocks. An R code block could look something like this:
{r}
# place R code here, e.g. to make a plot:
require(ggplot2)
x <- 1:10; y <- x^2
qplot(x, y)
When you convert the R Markdown file to HTML, the R code gets executed, the R output captured and inserted into the document, and you’ve got everything nicely together, with very little work.
To create an R Markdown document in RStudio, all you have to do is go to File, New File, and then select R Markdown. Accept the default settings, and R Studio will generate a new R Markdown file with a few lines of example content. To convert the file into HTML, simply click on the “Knit HTML” button. If you have previously stored your R Markdown file somewhere on your harddisk (with suffix .Rmd), RStudio will automatically save the generated HTML file in the same location, with the same name and suffix .html. The HTML file is self-contained, including all images, so it’s easy to publish it on a web page or share it with people. RStudio also provides you with the option to publish the document online on the RPubs website. Just click on the “Publish” button in the HTML view.
|
2017-10-24 00:20:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2350294589996338, "perplexity": 2186.0718853447174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827662.87/warc/CC-MAIN-20171023235958-20171024015958-00006.warc.gz"}
|
https://mindmatters.ai/2021/01/
|
Mind Matters Natural and Artificial Intelligence News and Analysis
Monthly Archive January 2021
A Physicist Asks, Was the Universe Made For Us? She Says No
But the question is more complicated than it appears at first
Sabine Hossenfelder thinks there is no way to determine an answer to the question of whether the universe was made for us because we have access to only one universe for data: There is no way to ever quantify this probability because we will never measure a constant of nature that has a value other than the one it does have. If you want to quantify a probability you have to collect a sample of data. You could do that, for example, if you were throwing dice.Throw them often enough, and you get an empirically supported probability distribution. But we do not have an empirically supported probability distribution for the constants of nature. And why is that. It’s because… they…
Your Soul Has No “Off Switch”
A major modern misunderstanding of the human mind is to assume that it is like a machine with an “on” and an “off” switch
I have written, in an earlier post, about the problem of “consciousness:” — that is, the problem inherent to the word itself and to the concept it conveys. I believe that “consciousness” is a mere narrative gloss on the mind — it denotes nothing beyond the mental powers of the soul. This is not just linguistic nitpicking. The concept of “consciousness” is much worse than useless. It leads us to misunderstand the mind in a profound way, as I will explain. The point may seem subtle but I believe that, if you think deeply enough about it, you will see that it is obviously true. First, I am not saying “consciousness” is an illusion. or possibly a delusion. This witless…
Will China Find Alien Life First? A Chinese Astronomer Says Yes
Whether either American or Chinese astronomers find anything, it will certainly be an interesting race
One Chinese astronomer, Tong-Jie Zhang, is working on it: In China, Zhang was tirelessly lobbying Chinese authorities to access FAST for his own research. Only recently was he granted the ability to use the telescope through the National Astronomical Observatories’ association. Initially, Zhang and his students had to conduct their observations at FAST while the telescope observed other targets, not allowing him to choose the areas he wanted. But after collaborating with Werthimer and students from the SETI Research Center on a paper published in the Astrophysical Journal, Chinese officials eventually allowed Zhang a window of time with the telescope to shortlist specific solar systems that he and his collaborators believe can most likely harbor intelligent life. Over the next…
Sci-fi Saturday: Rescuing Lost People
Animated, in French, with English subtitles, but don't let that deter you
Here’s a very new (January 26, 2021), very short (5:41 min) animated video from Valérie Bousquie, Joséphine Meis, Côme Roy, Antoine Vignon, and Benjamin Warnitz. In a wild an inhabited desert, a team of rangers is in charge of rescuing people who got lost there. The film is in French with English subtitles (and the promo copy could have used an English-speaking editor). It’s not clear why it is science fiction and I found the story a little hard to understand. But the professional relationships sound pretty real and make it worth the watch. Note: Someone reviewed the film at Filmnosis, commenting, “Staged in a desert, a lot of work has been dedicated to environment and character design, polished framing,…
Who Is Allowed in the Smoke-Filled Rooms of Investment?
How the stock market is manipulated, using the GameStop episode as an example
Full disclosure: I continue to maintain positions in some of the stocks mentioned in this article. But, as you will see, my goal here is neither to promote stock or dissuade from it, but rather to ask a deeper question about who is allowed to do what about a stock. For those who are unaware, the last two weeks in the stock market have gone crazy. GameStop (GME), a company that continues to lose money, skyrocketed from $18/share to, as of the time I’m writing, just about$450 per share. That’s right the stock soared over 20 times in value over the period of a few weeks. Several other stocks have also skyrocketed, including AMC Entertainment (AMC) and Koss Corporation…
We try to understand why the universe seems fine-tuned for life
Neurosurgeon Michael Egnor, a frequent contributor to Mind Matters News, interviewed our Walter Bradley Center director Robert J. Marks on the nature of information. In this second part of the interview (here’s the first part), the question comes up: How do we know if something is an accident or not? https://episodes.castos.com/mindmatters/Mind-Matters-118-Robert-Marks.mp3 A partial transcript follows. This portion begins at 11:02. Show notes and links follow. Michael Egnor: Aristotle said that in order to understand any process in nature, you really need to know four causes of that process. Note: The causes, according to Simply Philosophy are material, formal, efficient, and final. The material cause of a thing is what it is made of. A cat, for example, is made of…
Does Information Just Happen? Or Does the Universe Have Meaning?
The computer revolution did not show that information could be produced from nothing
Neurosurgeon Michael Egnor, a frequent contributor to Mind Matters News, interviewed our Walter Bradley Center director Robert J. Marks, a computer engineer, on the nature of information. Information makes a huge difference to what happens among human beings. But it is not like matter or energy. It doesn’t weigh anything or generate heat. How can we understand it scientifically? https://episodes.castos.com/mindmatters/Mind-Matters-118-Robert-Marks.mp3 A partial transcript follows. This portion begins at 01:10. Show notes and links follow. Robert J. Marks: Well, my background is not in biology, but it is in computer science and computer engineering. And one of the things we do is do artificial intelligence. And I think maybe your question translated to artificial intelligence is, can anything happen in artificial…
To Succeed, Understand Difference Between Influence and Power
Do you wonder why some people are listened to and not others, regardless of the value of their ideas? Well, read on…
At business mag Forbes, some have begun to consider the difference between influence and power: Swarthmore College has been rated the best liberal arts college in the U.S. by Academic Influence, a new college rankings method that uses artificial intelligence technology to search massive databases and measure the impact of work by individuals who’ve been affiliated with colleges and universities throughout the world. Last Monday, Academic Influence released its first-ever ranking of American liberal arts colleges – those four-year institutions that are relatively small in size, focus on bachelor’s level education, emphasize direct engagement with professors, provide an enriched residential experience, and insist on broad grounding in the liberal arts along with focused study in a major. In brief, here’s…
What If Many People Ditched Government Currencies for Bitcoin?
Are cryptocurrencies our best bet for a fair global currency system?
Picture a world racked by economic issues like coronavirus, where people begin ditching government (fiat) currencies for Bitcoin. Small changes are are underway: The chief economist did not mention that a growing number of jurisdictions are already embracing bitcoin for tax payments. For example, the canton of Zug in Switzerland announced that it will start accepting bitcoin for tax payments this year. Several other local governments in Switzerland have made a similar announcement, such as Zermatt. Recently, the mayor of Miami said that he is working on allowing payments for city services in bitcoin. Furthermore, a growing number of stores are accepting bitcoin payments. Payments giant Paypal, for example, is planning to allow people to use cryptocurrency to pay for…
Does the Ability To Think Depend on Consciousness?
From a medical perspective, “consciousness” adds nothing to the description of mental states
The title question might seem like a strange one but it is vitally important if we are to interpret neuroscience correctly and if we are to understand the mind–brain relationship. In my view, the capacity for thought does not depend on consciousness. The term “consciousness” is at best meaningless and at worst an impediment to understanding the mind. “Consciousness” is a very vague term and, ultimately, I don’t think it has any useful meaning at all, apart from other categories such as sensation, perception, imagination, reason etc. Aristotle had no distinct term for it. Nor do I think did any of the ancient or medieval philosophers. Consciousness is a modern term that seems to subsume all of the sensate powers…
Can Deepfakes Substitute for Actors?
Would you care if the actor is a real person or not?
When our Walter Bradley Center director, Robert J. Marks, was discussing with Eric Holloway the events that really made a difference in AI, one very interesting issue that came up was the use of deepfakes to substitute for actors in films. Robert J. Marks: Eric, how is Disney using deep fakes in entertainment? Eric Holloway: Well, Disney is using deep fakes and entertainment as a way to capitalize on not having to hire lots of really expensive actors. So you can have a few expensive actors, they do their thing, and then you copy their body movements and face. And now you can just hire a bunch of cheap actors and stick the expensive actors faces on them. Or you…
How China Has Tried To Suppress Coronavirus Science
So far as investigative journalists have been able to determine, the suppression came directly from the top
An investigation by the Associated Press reveals what everyone has suspected since the beginning of the SARS-CoV-2 (coronavirus) pandemic: The Chinese Communist Party (CCP) has been keeping a tight rein on the publication or distribution of any scientific research on the coronavirus conducted within the country. AP recently found out just how extensive the muzzling of scientific findings has been. Its report also confirms that the orders came from the top: The AP investigation was based on dozens of interviews with Chinese and foreign scientists and officials, along with public notices, leaked emails, internal data and the documents from China’s cabinet and the Chinese Center for Disease Control and Prevention. It reveals a pattern of government secrecy and top-down control…
Will Mediocrity Triumph? The Fallacy That Will Not Die
Economist claims: It is a fundamental economic truth that businesses converge to mediocrity. Is he right?
Nearly 100 years ago, a famous economist named Horace Secrist wrote a book with the provocative title, The Triumph of Mediocrity in Business. He had spent ten years collecting and analyzing data on the success of dozens of companies in dozens of industries during the years 1920 to 1930. For each measure of success, he used the 1920 data to divide the companies in each industry into quartiles: the top 25%, second 25%, third 25%, and bottom 25%. He then calculated the average value of the success metric for the 1920 top-quartile companies every year from 1920 to 1930. He did the same for the other three quartiles. In every case, the companies in both the top two quartiles and the bottom two…
Why “Critical Theory” Might Shape Your Life Going Forward
Critical Theory has begun to rule the public square and we need to understand it
2020 was the year that Critical Theory came to dominate culture in America. It ruled academia for a half century but only in the past year has it begun to rule the American public square as well. Perhaps you’re not interested in Critical Theory but Critical Theory is interested in you. It behooves us to understand it better, because it will be a central theme in American culture for the foreseeable future. For readers who are not familiar with it, I provide here a synopsis. There is a connection to Darwinism at the heart of Critical Theory, as we will see. Critical theory is, at its root, cultural Marxism. It emerged from the failure of Leninism to capture the hearts…
What We Can Do To Prevent More Online Censorship
Encrypted email can be an end-around social media companies' monopoly of free speech
With all the concern about major social media companies deplatforming those they disagree with, there is a concern that these companies’ monopoly on social media will eliminate free speech. New social media platforms such as Gab, Parler and MeWe have popped up to offer freer alternatives. Yet even that is not without peril, as deplatforming can happen lower down the technology stack. Parler was recently kicked off AWS (Amazon Web Services). However, in the midst of all the hubbub we’ve forgotten the original social network: email. Email is still here. The distance between email and modern social media may be smaller than it first appears. Lets make a short list of the perceived differences between social media and email, and…
Why Medical Device Companies Use Priorities Created by Toy Makers
The priorities followed by product developers arise from the ontology they use
The priorities of product development teams arise from the ontology, the beliefs about the nature of reality, they follow. One of the greatest values of defining that ontology is to identify blind spots and wrong assumptions. When the source of priorities is clear, improved, more adaptable options become possible. As Clayton Christensen (1952–2020, pictured) has said: To grow profit margins and revenue, he observes, such companies tend to develop products to satisfy the demands of their most sophisticated customers. As successful as this strategy may be, it means that those companies also tend to ignore opportunities to meet the needs of less sophisticated customers — who may eventually form much larger markets. A hierarchy of products starts with the components,…
The Infinity Mirror Trap: Part 2: The Thought Determinism Paradox
The infinity mirror experience shows that thought determinism cannot explain all human thoughts
In Part 1 of this series, we saw how the belief that “every human thought is an illusion” proves empty and powerless when trying to account for the infinity mirror experience. Part 2 here puts another view held widely by science-trained people, materialism, to the same mirror test. Materialism is the view that everything we observe results from the interplay of matter and energy. Under materialism, each human’s every thought is produced by electrochemical events in the brain. As Marvin Minsky, an artificial intelligence pioneer, wrote in Society of Mind (1988), “Everything, including that which happens in our brains, depends on these and only on these: A set of fixed, deterministic laws and a purely random set of accidents.” Philosopher…
Sci-fi Saturday: A Robot Helps an Old Fellow Rediscover Life
The robot is very well done and how he gets a name is charming
The short sci-fi film, “This Time Away” (13:23) is by Magali Barbe Nigel is an elderly man living as a recluse, haunted by his past and memory of the family he once had, until an unexpected visitor arrives and disrupts his lonely routine. No spoiler, the visitor is a robot, abandoned by children in his back yard. The relationships seem a bit unrealistic. Lots of people abandon their elderly relatives, of course. But we are being asked to believe that a robot was the big solution. In this case, it feels like magic. Well, watch it and see what you think. The robot is done really nicely. Worth watching. Other reviews from the “We are but DUST” files: Sci-fi Saturday:…
Can Robots That Work With People Ever Be Safe?
Robot IQ offers five reasons why not
Cobots are robots designed to be friendly to people. But some doubt that friendship will work: It can be tempting to think of risk as an either/or situation — either your application is safe or it isn’t. In reality, risk is a sliding scale and you can never get rid of all risks completely. You can only know the true risk of a particular task by performing an adequate risk assessment. You need to do this whether the robot is collaborative or not. Truth: cobot safety can be changed to suit task performance The reality is that cobots have always been high-performance robots suitable for a range of industrial applications. Instead of being lesser robots, as some people mistakenly believe,…
Sci-fi Saturday: What If an Old Man Could See His Mother Again?
It is a hard film to watch if you lost a loved one, but worthwhile
A bit sad but worth seeing. (4:01 min from Nick Naum & Csaba Nagy) An old man, with a receding memory, pays to view synthetic recreations of his mother and childhood. I had a hard time watching this film because I would so like to see my parents (who died in their nineties) again, especially when they were young. But I can tell you this: My father once got a call from the country for old men. In case you ever wondered, yes, it’s real. It’s too bad if some people are scamming about it. Other reviews from the “We are but DUST” files: Sci Fi Saturday: A fight for the winning ticket In a 2040 superstorm, engulfing the planet,…
|
2022-06-28 09:55:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18807445466518402, "perplexity": 2316.383195717841}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103360935.27/warc/CC-MAIN-20220628081102-20220628111102-00453.warc.gz"}
|
https://www.zbmath.org/?q=an%3A1173.17014
|
zbMATH — the first resource for mathematics
A universal formula for representing Lie algebra generators as formal power series with coefficients in the Weyl algebra. (English) Zbl 1173.17014
Summary: Given an $$n$$-dimensional Lie algebra $$\mathfrak g$$ over a field $$k\supset\mathbb Q$$, together with its vector space basis $$X_1^0, X_2^0,\dots, X_n^0$$, we give a formula, depending only on the structure constants, representing the infinitesimal generators, $$X_i=X_i^0t$$ in $$\mathfrak g\otimes_kk[[t]]$$, where $$t$$ is a formal variable, as a formal power series in $$t$$ with coefficients in the Weyl algebra $$A_n$$. Actually, the theorem is proved for Lie algebras over arbitrary rings $$k\supset\mathbb Q$$.
We provide three different proofs, each of which is expected to be useful for generalizations. The first proof is obtained by direct calculations with tensors. This involves a number of interesting combinatorial formulas in structure constants. The final step in calculation is a new formula involving Bernoulli numbers and arbitrary derivatives of $$\coth(x/2)$$. The dimensions of certain spaces of tensors are also calculated. The second method of proof is geometric and reduces to a calculation of formal right-invariant vector fields in specific coordinates, in a (new) variant of formal group scheme theory. The third proof uses coderivations and Hopf algebras.
MSC:
17B35 Universal enveloping (super)algebras 16W30 Hopf algebras (associative rings and algebras) (MSC2000) 14L05 Formal groups, $$p$$-divisible groups 14L15 Group schemes 16S80 Deformations of associative rings 11B68 Bernoulli and Euler numbers and polynomials
Full Text:
References:
[1] Amelino-Camelia, G.; Arzano, M.; Doplicher, L., Field theories on canonical and Lie-algebra noncommutative spacetimes, in: Florence 2001, A relativistic spacetime odyssey, pp. 497-512 · Zbl 1043.81067 [2] Amelino-Camelia, G.; Arzano, M., Coproduct and star product in field theories on Lie-algebra non-commutative space – times, Phys. rev. D, 65, 084044, (2002) [3] Berceanu, S., Realization of coherent state Lie algebras by differential operators, (), 1-24 · Zbl 1212.81012 [4] Bourbaki, N., Lie groups and algebras, ch. I-III, (1971), Hermann Paris, (Ch. I), 1972 (Ch. II-III) (in French); Springer 1975, 1989 (Ch. I-III, in English) [5] Dimitrijević, M.; Meyer, F.; Möller, L.; Wess, J., Gauge theories on the κ-Minkowski spacetime, Eur. phys. J. C part. fields, 36, 1, 117-126, (2004) · Zbl 1191.81204 [6] Fresse, B., Lie theory of formal groups over an operad, J. algebra, 202, 2, 455-511, (1998), MR99c:14063 · Zbl 1041.18009 [7] Demazure, M.; Grothendieck, A., Schémas en groupes. I: propriétés générales des schémas en groupes, SGA 3, vol. 1, Lecture notes in math., vol. 151, (1970), Springer [8] Holtkamp, R., A pseudo-analyzer approach to formal group laws not of operad type, J. algebra, 237, 1, 382-405, (2001), MR2002h:14074 · Zbl 1042.14020 [9] Karasev, M.; Maslov, V., Nonlinear Poisson brackets, Transl. math. monogr., vol. 119, (1993), Amer. Math. Soc., (in Russian) [10] Kathotia, V., Kontsevich’s universal formula for deformation quantization and the campbell – baker – hausdorff formula, Internat. J. math., 11, 4, 523-551, (2000), MR2002h:53154 · Zbl 1110.53308 [11] Kontsevich, M., Deformation quantization of Poisson manifolds, Lett. math. phys., 66, 3, 157-216, (2003), MR2005i:53122 · Zbl 1058.53065 [12] Lukierski, J.; Ruegg, H., Quantum κ-Poincaré in any dimensions, Phys. lett. B, 329, 189-194, (1994) [13] Lukierski, J.; Woronowicz, M., New Lie-algebraic and quadratic deformations of Minkowski space from twisted Poincaré symmetries, Phys lett. B, 633, 116-124, (2006) · Zbl 1247.81216 [14] Meljanac, S.; Stojić, M., New realizations of Lie algebra kappa-deformed Euclidean space, Eur. phys. J. C, 47, 531-539, (2006) · Zbl 1191.81138 [15] Odesskii, A.V.; Feigin, B.L., Quantized moduli spaces of the bundles on the elliptic curve and their applications, (), 123-137, MR2002j:14040 · Zbl 1076.14520 [16] Petracci, E., Universal representations of Lie algebras by coderivations, Bull. sci. math., 127, 5, 439-465, (2003), MR2004f:17026 · Zbl 1155.17302
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-03-08 09:52:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8890339732170105, "perplexity": 3089.032362372601}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178383355.93/warc/CC-MAIN-20210308082315-20210308112315-00600.warc.gz"}
|
http://cms.math.ca/cmb/kw/order
|
Search results
Search: All articles in the CMB digital archive with keyword order
Expand all Collapse all Results 1 - 24 of 24
1. CMB Online first
Small Flag Complexes with Torsion We classify flag complexes on at most $12$ vertices with torsion in the first homology group. The result is moderately computer-aided. As a consequence we confirm a folklore conjecture that the smallest poset whose order complex is homotopy equivalent to the real projective plane (and also the smallest poset with torsion in the first homology group) has exactly $13$ elements. Keywords:clique complex, order complex, homology, torsion, minimal modelCategories:55U10, 06A11, 55P40, 55-04, 05-04
2. CMB Online first
Left-orderable fundamental group and Dehn surgery on the knot $5_2$ We show that the resulting manifold by $r$-surgery on the knot $5_2$, which is the two-bridge knot corresponding to the rational number $3/7$, has left-orderable fundamental group if the slope $r$ satisfies $0\le r \le 4$. Keywords:left-ordering, Dehn surgeryCategories:57M25, 06F15
3. CMB Online first
Left-orderable fundamental group and Dehn surgery on the knot $5_2$ We show that the resulting manifold by $r$-surgery on the knot $5_2$, which is the two-bridge knot corresponding to the rational number $3/7$, has left-orderable fundamental group if the slope $r$ satisfies $0\le r \le 4$. Keywords:left-ordering, Dehn surgeryCategories:57M25, 06F15
4. CMB Online first
Sokić, Miodrag
Indicators, chains, antichains, Ramsey property We introduce two Ramsey classes of finite relational structures. The first class contains finite structures of the form $(A,(I_{i})_{i=1}^{n},\leq ,(\preceq _{i})_{i=1}^{n})$ where $\leq$ is a total ordering on $A$ and $% \preceq _{i}$ is a linear ordering on the set $\{a\in A:I_{i}(a)\}$. The second class contains structures of the form $(A,\leq ,(I_{i})_{i=1}^{n},\preceq )$ where $(A,\leq )$ is a weak ordering and $% \preceq$ is a linear ordering on $A$ such that $A$ is partitioned by $% \{a\in A:I_{i}(a)\}$ into maximal chains in the partial ordering $\leq$ and each $\{a\in A:I_{i}(a)\}$ is an interval with respect to $\preceq$. Keywords:Ramsey property, linear orderingsCategories:05C55, 03C15, 54H20
5. CMB Online first
Hakamata, Ryoto; Teragaito, Masakazu
Left-orderable fundamental group and Dehn surgery on the knot $5_2$ We show that the resulting manifold by $r$-surgery on the knot $5_2$, which is the two-bridge knot corresponding to the rational number $3/7$, has left-orderable fundamental group if the slope $r$ satisfies $0\le r \le 4$. Keywords:left-ordering, Dehn surgeryCategories:57M25, 06F15
6. CMB 2012 (vol 56 pp. 850)
Teragaito, Masakazu
Left-orderability and Exceptional Dehn Surgery on Twist Knots We show that any exceptional non-trivial Dehn surgery on a twist knot, except the trefoil, yields a $3$-manifold whose fundamental group is left-orderable. This is a generalization of a result of Clay, Lidman and Watson, and also gives a new supporting evidence for a conjecture of Boyer, Gordon and Watson. Keywords:left-ordering, twist knot, Dehn surgeryCategories:57M25, 06F15
7. CMB 2011 (vol 56 pp. 39)
Ben Amara, Jamel
Comparison Theorem for Conjugate Points of a Fourth-order Linear Differential Equation In 1961, J. Barrett showed that if the first conjugate point $\eta_1(a)$ exists for the differential equation $(r(x)y'')''= p(x)y,$ where $r(x)\gt 0$ and $p(x)\gt 0$, then so does the first systems-conjugate point $\widehat\eta_1(a)$. The aim of this note is to extend this result to the general equation with middle term $(q(x)y')'$ without further restriction on $q(x)$, other than continuity. Keywords:fourth-order linear differential equation, conjugate points, system-conjugate points, subwronskiansCategories:47E05, 34B05, 34C10
8. CMB 2011 (vol 56 pp. 102)
Kong, Qingkai; Wang, Min
Eigenvalue Approach to Even Order System Periodic Boundary Value Problems We study an even order system boundary value problem with periodic boundary conditions. By establishing the existence of a positive eigenvalue of an associated linear system Sturm-Liouville problem, we obtain new conditions for the boundary value problem to have a positive solution. Our major tools are the Krein-Rutman theorem for linear spectra and the fixed point index theory for compact operators. Keywords:Green's function, high order system boundary value problems, positive solutions, Sturm-Liouville problemCategories:34B18, 34B24
9. CMB 2011 (vol 55 pp. 339)
Loring, Terry A.
From Matrix to Operator Inequalities We generalize Löwner's method for proving that matrix monotone functions are operator monotone. The relation $x\leq y$ on bounded operators is our model for a definition of $C^{*}$-relations being residually finite dimensional. Our main result is a meta-theorem about theorems involving relations on bounded operators. If we can show there are residually finite dimensional relations involved and verify a technical condition, then such a theorem will follow from its restriction to matrices. Applications are shown regarding norms of exponentials, the norms of commutators, and "positive" noncommutative $*$-polynomials. Keywords:$C*$-algebras, matrices, bounded operators, relations, operator norm, order, commutator, exponential, residually finite dimensionalCategories:46L05, 47B99
10. CMB 2011 (vol 54 pp. 566)
Zhou, Xiang-Jun; Shi, Lei; Zhou, Ding-Xuan
Non-uniform Randomized Sampling for Multivariate Approximation by High Order Parzen Windows We consider approximation of multivariate functions in Sobolev spaces by high order Parzen windows in a non-uniform sampling setting. Sampling points are neither i.i.d. nor regular, but are noised from regular grids by non-uniform shifts of a probability density function. Sample function values at sampling points are drawn according to probability measures with expected values being values of the approximated function. The approximation orders are estimated by means of regularity of the approximated function, the density function, and the order of the Parzen windows, under suitable choices of the scaling parameter. Keywords:multivariate approximation, Sobolev spaces, non-uniform randomized sampling, high order Parzen windows, convergence ratesCategories:68T05, 62J02
11. CMB 2011 (vol 54 pp. 277)
Farley, Jonathan David
Maximal Sublattices of Finite Distributive Lattices. III: A Conjecture from the 1984 Banff Conference on Graphs and Order Let $L$ be a finite distributive lattice. Let $\operatorname{Sub}_0(L)$ be the lattice $$\{S\mid S\text{ is a sublattice of }L\}\cup\{\emptyset\}$$ and let $\ell_*[\operatorname{Sub}_0(L)]$ be the length of the shortest maximal chain in $\operatorname{Sub}_0(L)$. It is proved that if $K$ and $L$ are non-trivial finite distributive lattices, then $$\ell_*[\operatorname{Sub}_0(K\times L)]=\ell_*[\operatorname{Sub}_0(K)]+\ell_*[\operatorname{Sub}_0(L)].$$ A conjecture from the 1984 Banff Conference on Graphs and Order is thus proved. Keywords:(distributive) lattice, maximal sublattice, (partially) ordered setCategories:06D05, 06D50, 06A07
12. CMB 2010 (vol 54 pp. 270)
Dow, Alan
Sequential Order Under PFA It is shown that it follows from PFA that there is no compact scattered space of height greater than $\omega$ in which the sequential order and the scattering heights coincide. Keywords:sequential order, scattered spaces, PFACategories:54D55, 03E05, 03E35, 54A20
13. CMB 2010 (vol 54 pp. 381)
Velušček, Dejan
A Short Note on the Higher Level Version of the Krull--Baer Theorem Klep and Velu\v{s}\v{c}ek generalized the Krull--Baer theorem for higher level preorderings to the non-commutative setting. A $n$-real valuation $v$ on a skew field $D$ induces a group homomorphism $\overline{v}$. A section of $\overline{v}$ is a crucial ingredient of the construction of a complete preordering on the base field $D$ such that its projection on the residue skew field $k_v$ equals the given level $1$ ordering on $k_v$. In the article we give a proof of the existence of the section of $\overline{v}$, which was left as an open problem by Klep and Velu\v{s}\v{c}ek, and thus complete the generalization of the Krull--Baer theorem for preorderings. Keywords:orderings of higher level, division rings, valuationsCategories:14P99, 06Fxx
14. CMB 2010 (vol 53 pp. 475)
Nonlinear Multipoint Boundary Value Problems for Second Order Differential Equations In this paper we shall discuss nonlinear multipoint boundary value problems for second order differential equations when deviating arguments depend on the unknown solution. Sufficient conditions under which such problems have extremal and quasi-solutions are given. The problem of when a unique solution exists is also investigated. To obtain existence results, a monotone iterative technique is used. Two examples are added to verify theoretical results. Keywords:second order differential equations, deviated arguments, nonlinear boundary conditions, extremal solutions, quasi-solutions, unique solutionCategories:34A45, 34K10
15. CMB 2009 (vol 52 pp. 315)
Yi, Taishan; Zou, Xingfu
Generic Quasi-Convergence for Essentially Strongly Order-Preserving Semiflows By employing the limit set dichotomy for essentially strongly order-preserving semiflows and the assumption that limit sets have infima and suprema in the state space, we prove a generic quasi-convergence principle implying the existence of an open and dense set of stable quasi-convergent points. We also apply this generic quasi-convergence principle to a model for biochemical feedback in protein synthesis and obtain some results about the model which are of theoretical and realistic significance. Keywords:Essentially strongly order-preserving semiflow, compactness, quasi-convergenceCategories:34C12, 34K25
16. CMB 2009 (vol 52 pp. 39)
Cimpri\v{c}, Jakob
A Representation Theorem for Archimedean Quadratic Modules on $*$-Rings We present a new approach to noncommutative real algebraic geometry based on the representation theory of $C^\ast$-algebras. An important result in commutative real algebraic geometry is Jacobi's representation theorem for archimedean quadratic modules on commutative rings. We show that this theorem is a consequence of the Gelfand--Naimark representation theorem for commutative $C^\ast$-algebras. A noncommutative version of Gelfand--Naimark theory was studied by I. Fujimoto. We use his results to generalize Jacobi's theorem to associative rings with involution. Keywords:Ordered rings with involution, $C^\ast$-algebras and their representations, noncommutative convexity theory, real algebraic geometryCategories:16W80, 46L05, 46L89, 14P99
17. CMB 2008 (vol 51 pp. 15)
Aqzzouz, Belmesnaoui; Nouira, Redouane; Zraoula, Larbi
The Duality Problem for the Class of AM-Compact Operators on Banach Lattices We prove the converse of a theorem of Zaanen about the duality problem of positive AM-compact operators. Keywords:AM-compact operator, order continuous norm, discrete vector latticeCategories:46A40, 46B40, 46B42
18. CMB 2007 (vol 50 pp. 105)
Klep, Igor
On Valuations, Places and Graded Rings Associated to $*$-Orderings We study natural $*$-valuations, $*$-places and graded $*$-rings associated with $*$-ordered rings. We prove that the natural $*$-valuation is always quasi-Ore and is even quasi-commutative (\emph{i.e.,} the corresponding graded $*$-ring is commutative), provided the ring contains an imaginary unit. Furthermore, it is proved that the graded $*$-ring is isomorphic to a twisted semigroup algebra. Our results are applied to answer a question of Cimpri\v c regarding $*$-orderability of quantum groups. Keywords:$*$--orderings, valuations, rings with involutionCategories:14P10, 16S30, 16W10
19. CMB 2005 (vol 48 pp. 161)
Betancor, Jorge J.
Hankel Convolution Operators on Spaces of Entire Functions of Finite Order In this paper we study Hankel transforms and Hankel convolution operators on spaces of entire functions of finite order and their duals. Keywords:Hankel transform, convolution, entire functions, finite orderCategory:46F12
20. CMB 2004 (vol 47 pp. 530)
Iranmanesh, A.; Khosravi, B.
A Characterization of $PSU_{11}(q)$ Order components of a finite simple group were introduced in [4]. It was proved that some non-abelian simple groups are uniquely determined by their order components. As the main result of this paper, we show that groups $PSU_{11}(q)$ are also uniquely determined by their order components. As corollaries of this result, the validity of a conjecture of J. G. Thompson and a conjecture of W. Shi and J. Bi both on $PSU_{11}(q)$ are obtained. Keywords:Prime graph, order component, finite group,simple groupCategories:20D08, 20D05, 20D60
21. CMB 2003 (vol 46 pp. 310)
Wang, Xiaofeng
Second Order Dehn Functions of Asynchronously Automatic Groups Upper bounds of second order Dehn functions of asynchronously automatic groups are obtained. Keywords:second order Dehn function, combing, asynchronously automatic groupCategories:20E06, 20F05, 57M05
22. CMB 2003 (vol 46 pp. 268)
Puls, Michael J.
Group Cohomology and $L^p$-Cohomology of Finitely Generated Groups Let $G$ be a finitely generated, infinite group, let $p>1$, and let $L^p(G)$ denote the Banach space $\{ \sum_{x\in G} a_xx \mid \sum_{x\in G} |a_x |^p < \infty \}$. In this paper we will study the first cohomology group of $G$ with coefficients in $L^p(G)$, and the first reduced $L^p$-cohomology space of $G$. Most of our results will be for a class of groups that contains all finitely generated, infinite nilpotent groups. Keywords:group cohomology, $L^p$-cohomology, central element of infinite order, harmonic function, continuous linear functionalCategories:43A15, 20F65, 20F18
23. CMB 2000 (vol 43 pp. 397)
Bonato, Anthony; Cameron, Peter; Delić, Dejan
Tournaments and Orders with the Pigeonhole Property A binary structure $S$ has the pigeonhole property ($\mathcal{P}$) if every finite partition of $S$ induces a block isomorphic to $S$. We classify all countable tournaments with ($\mathcal{P}$); the class of orders with ($\mathcal{P}$) is completely classified. Keywords:pigeonhole property, tournament, orderCategories:05C20, 03C15
24. CMB 1999 (vol 42 pp. 478)
Pruss, Alexander R.
A Remark On the Moser-Aubin Inequality For Axially Symmetric Functions On the Sphere Let $\scr S_r$ be the collection of all axially symmetric functions $f$ in the Sobolev space $H^1(\Sph^2)$ such that $\int_{\Sph^2} x_ie^{2f(\mathbf{x})} \, d\omega(\mathbf{x})$ vanishes for $i=1,2,3$. We prove that $$\inf_{f\in \scr S_r} \frac12 \int_{\Sph^2} |\nabla f|^2 \, d\omega + 2\int_{\Sph^2} f \, d\omega- \log \int_{\Sph^2} e^{2f} \, d\omega > -\oo,$$ and that this infimum is attained. This complements recent work of Feldman, Froese, Ghoussoub and Gui on a conjecture of Chang and Yang concerning the Moser-Aubin inequality. Keywords:Moser inequality, borderline Sobolev inequalities, axially symmetric functionsCategories:26D15, 58G30
|
2013-12-09 00:23:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8710871934890747, "perplexity": 993.1638796347278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163837349/warc/CC-MAIN-20131204133037-00067-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/a-cool-easy-problem-9/
|
A cool easy problem 9
In the electric field of charge q, another charge is taken from A to B , A to C , A to D and A to E , then the work done will be?
×
|
2017-10-21 12:30:09
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924397885799408, "perplexity": 370.5734998077571}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824775.99/warc/CC-MAIN-20171021114851-20171021134851-00788.warc.gz"}
|
https://physics.stackexchange.com/questions/538726/why-are-diffraction-rings-closer-together-when-electrons-travel-at-a-greater-spe
|
Why are diffraction rings closer together when electrons travel at a greater speed in electron diffraction?
I know that at higher speeds the de Broglie wavelength decreases so the electrons diffract less, but does the fact electrons repel affect it in any other way?
What I was thinking was that since electrons now reach the screen in less time because of the greater speed, they have less time to repel and thus repel less so rings are closer together. Is this a correct reason and does this relate in any way to the wavelength reason?
These images are from Electron diffraction.
A beam of electrons is accelerated in an electron gun to a potential of between 3500 V and 5000 V and then allowed to fall on a very thin sheet of graphite (see diagram above). The electrons diffract from the carbon atoms and the resulting circular pattern on the screen (see diagrams below) is very good evidence for the wave nature of the electrons.
The diffraction pattern observed on the screen is a series of concentric rings. This is due to the regular spacing of the carbon atoms in different layers in the graphite. However since the graphite layers overlay each other in an irregular way the resulting diffraction pattern is circular. It is an example of Bragg scattering.
What I was thinking was that since electrons now reach the screen in less time because of the greater speed, they have less time to repel and thus repel less so rings are closer together. Is this a correct reason and does this relate in any way to the wavelength reason?
Short Answer is no. You would get the exact same pattern if you shot the electrons one by one and not in a beam where they could theoretically interact with each other via coulomb repulsion. Note that the pattern tells you something about the distribution of the "impact positions of the particles".
Stating in the graph here it is purely particle-like and here it is purely wave-like is oversimplifying. You should think of the particles as something different, which happens to behave like waves in one limit and like particles in the other, and how it behaves in between is more complex and described by shrödinger equation.
• I understand that electrons would form the same pattern if they were fired one by one, but why would the electrostatic repulsion reason be wrong? It sounds logical, and there must be some repulsion at least? – XXb8 Mar 30 at 9:48
• I would assume it is not significant here. I'm a bit lazy, but you can calculate the deviation due to classical coulomb during the flight time. You can assume it is a particle, calculate how long it stays in flight, and just assume there is a particle in another trajectory leading to the other diffraction ring. It should be dramatically less than the diffraction length scale. My physical intuition speaking here, though. – fruchti Mar 30 at 9:57
Assuming that we are dealing with parallel electron beams, this can be explained through Bragg's Law, $$n\lambda = 2d sin\theta$$, with $$2\theta$$ the angle between the incident and diffracted ray. As $$\lambda$$ is decreased, $$sin\theta$$ should decrease for the same value of d, hence the rings are now closer to the central spot.
• I started to draw this but then remembered drawing Bragg diffraction takes longer in PowerPoint than I wanted to invest and it never looks convincing when you do manage to do it right :-) commons.wikimedia.org/wiki/File:Bragg_legea.jpg and youtube.com/watch?v=Cjce4QumZNk – uhoh Mar 30 at 4:01
• @uhoh These answers make sense, thank you. I was looking for a more intuitive explanation, such as the fact electrons repel or the de Broglie wavelength explanation. How would you go about this explanation? – XXb8 Mar 30 at 8:30
• @XXb8 well $\]ambda$ in Bragg's law is the de Broglie wavelength of the electron. If this were Bragg scattering in X-ray diffraction then we'd just call it the wavelength. The article mentions that the energy is 3500 to 5000 eV, which makes the wavelength between 0.21 and 0.17 Angstroms and the distance between planes of graphene atoms is about 3.35 Angstroms. – uhoh Mar 30 at 9:25
• @XXb8 The electrons really scatter off of the atoms which have both repulsive electrons and attractive nuclei, so you can think of each atom as a source of spherically shaped but forward peaked waves just like the diagram in your question shows. – uhoh Mar 30 at 9:25
The momentum of an electron $$p$$ increases with energy.
Let's say the lattice spacing of a material is $$a$$. Then we can assign a de Broglie momentum to the lattice (of sorts anyways) that is $$q=\hbar \frac{2\pi}{a}$$.
We can think of the crystal/material as giving a kick off momentum to the electron that is of order $$q$$, so that the electron exits with momentum $$\mathbf{p}+\mathbf{q}$$. If $$q$$ is perpendicular to $$p$$, then we have the exiting angle is approximately $$\theta\sim q/p$$. Thus, for larger electron energy $$p$$ goes down and so does the scattering angle $$\theta$$.
This behavior is ompletely identical to light passing through a diffraction grating. As you decrease the wavelength (larger momentum), the angular spacing between the diffracted beams becomes smaller.
|
2020-06-03 12:46:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7184413075447083, "perplexity": 272.13039170232094}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347434137.87/warc/CC-MAIN-20200603112831-20200603142831-00065.warc.gz"}
|
https://indico.cern.ch/event/801886/contributions/3590497/
|
# 4th ComHEP: Colombian Meeting on High Energy Physics (Barranquilla, Colombia)
2-6 December 2019
America/Bogota timezone
## Majoron contribution to the invisible Higgs decays
4 Dec 2019, 16:20
5m
Teatrino 1 (Centro Cultural, Universidad del Atlántico)
### Teatrino 1
#### Centro Cultural, Universidad del Atlántico
Universidad del Atlántico Carrera 30 No. 8-49 Puerto Colombia, Atlántico
Poster
### Speaker
Moises Zeleny Mora (BUAP)
### Description
Nowadays, neutrino oscillations and Higgs boson existence has been confirmed. On the other hand, the possibility of extended scalars sectors as well neutrino mass origin are broad areas of research. Majoron minimal model consider both topics, which adds a complex singlet to Standard Model, $\sigma = (f + \sigma^{0} + i J)/\sqrt{2}$ that carries a $B-L$ charge 2. The pseudoscalar majoron $J$, $f$ being the expectation value of $\sigma$, and $\sigma^0$ as the heavy CP-even majoron partner, are the new physics particles. In addition, three right-handed neutrinos are added, then, the Dirac and majorana neutrino mass term are allowed thanks to Higgs doublet and complex singlet, respectively. We are interested on the invisible Higgs decays probability to majorons and its posible detection in LHC.
### Presentation Materials
There are no materials yet.
|
2021-01-25 05:39:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6378962397575378, "perplexity": 7309.785873909047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703564029.59/warc/CC-MAIN-20210125030118-20210125060118-00317.warc.gz"}
|
https://policeteststudyguide.com/quiz/police-math-quiz/
|
### Welcome back to Your Learning Portal!
We recommend studying each learning topic before starting our
full-length practice exams.
Welcome to your Police Math Quiz
1. The odometer on a squad car shows that the police officer traveled 1,478 miles over the past 12 days. On average, how many miles are covered in 5 days?
2. A driver was found speeding and, as the vehicle stopped, it skidded 10 feet and 14 inches. The accident report states that only inches should be listed. How many inches was the vehicle speeding for?
3. As of 2019, San Antonio Police Department covers an area of 465.4 square miles, with a population of approximately 1.5 million. On average, how many people live in 12 square miles of San Antonio Police Department?
4. In the last 3-months, Judge Morris jailed 7 people for aggravated theft. They were jailed for 2, 1, 3, 4, 2, 2, and 5-years, respectively. Over this 3-month period, what is the median jail sentence that Judge Morris applied?
5. A flying object has traveled a total distance of 1,800 miles in 20 days flying 12 hours each day. What is the average speed that the flying object was traveling?
6. A squad car chased a target vehicle for 160 miles at a speed of 80 miles per hour. Upon reprimanding the suspect, they returned to the police station at 40 miles per hour. In total, what is the average speed per hour that the squad car traveled?
7. Los Angeles Police Department has, on review, 10 male officers to every 4 female officers. What is the reduced ratio of male:female officers in the department?
8. The average salary of a police officer in Arizona is $52,674, which is 11 percent more than the national average. What is the national average? 9. Two-fifths of police officers at New York Police Department are female. If there are 36,000 police officers, how many officers are male? 10. A victim of theft was lucky to have 40% of her stolen money returned. If she received$60, how much was originally stolen from her?
|
2021-03-01 04:20:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.331400066614151, "perplexity": 2300.4851616254496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361849.27/warc/CC-MAIN-20210301030155-20210301060155-00147.warc.gz"}
|
https://gmatclub.com/forum/on-3-sales-tom-has-received-commissions-of-250-95-and-175-and-ha-256231.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 19 Jun 2018, 20:55
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
### Show Tags
27 Dec 2017, 01:10
00:00
Difficulty:
15% (low)
Question Stats:
93% (00:50) correct 7% (01:17) wrong based on 43 sessions
### HideShow timer Statistics
On 3 sales Tom has received commissions of $250,$95, and $175, and has two additional sales pending. If Tom is to receive an average (arithmetic mean) commission of exactly$150 on the 5 sales, then the average (arithmetic mean) of the final two sales must be
A. 60
B. 80
C. 115
D. 230
E. 280
_________________
examPAL Representative
Joined: 07 Dec 2017
Posts: 418
Re: On 3 sales Tom has received commissions of $250,$95, and $175, and ha [#permalink] ### Show Tags 27 Dec 2017, 01:58 Bunuel wrote: On 3 sales Tom has received commissions of$250, $95, and$175, and has two additional sales pending. If Tom is to receive an average (arithmetic mean) commission of exactly $150 on the 5 sales, then the average (arithmetic mean) of the final two sales must be A. 60 B. 80 C. 115 D. 230 E. 280 We'll take a shortcut to calculation based on properties of the average. This is a Logical approach. Our target is to find two numbers that combine with$250, $95,$175 to give an average of $150. These three numbers have a difference of 100, -55, 25 from 150 for a combined difference of 100-55+25=70. So our remaining two numbers need to compensate for this with -70, which is -70/2 = -35 each. Therefore they must have an average of 150 - 35 = 115. (C) is our answer. Note that the calculations here involve much smaller numbers than $$\frac{(150*5 - (250+175+95))}{2}$$ which is the straightforward, Precise, approach. _________________ David Senior tutor at examPAL Signup for a free GMAT course We won some awards: Join our next webinar (free) Save up to$250 on examPAL packages (special for GMAT Club members)
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 3509
Location: India
GPA: 3.5
### Show Tags
28 Dec 2017, 03:33
Bunuel wrote:
On 3 sales Tom has received commissions of $250,$95, and $175, and has two additional sales pending. If Tom is to receive an average (arithmetic mean) commission of exactly$150 on the 5 sales, then the average (arithmetic mean) of the final two sales must be
A. 60
B. 80
C. 115
D. 230
E. 280
$$Average=\frac{Total.Sales}{Number.Sales}$$, and 2 more sales = $$2x$$
So $$150=\frac{(250+95+175+2x)}{5}$$, $$150(5)=(520+2x)$$, $$x=\frac{750-520}{2}=115$$
|
2018-06-20 03:55:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5167289972305298, "perplexity": 12482.164299763523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863411.67/warc/CC-MAIN-20180620031000-20180620051000-00498.warc.gz"}
|
http://astronomy.stackexchange.com/tags/gravity/new
|
# Tag Info
-1
It is not only about up and down, there are deeper dimensions to this. Like infront and behind. The Moon seems to be "behind" us, as in expressions like: "Going BACK to the Moon". Although it does go all around and never gets anywhere, much like ones own bottom, actually. Even Ptolemy didn't argue with that fact. Might this be a clue to this geometric ...
3
Most of the 60 moons in the Saturn system are far away from the rings and very small, so their effect on the rings is negligible. But larger ones that are closer in (Enceladus) do have a rather significant effect on the rings, but as the gravitational pull of these moons is radially outward, it is hardly visible. On the other hand, small moons inside the ...
0
I think that one thing confusing you, is that up/down is a 2 dimensions concept (just like your image by the way), but the Universe is in 3 physical dimensions (let's not talk about time here). So when you say « going up from Earth is extracting from it's gravity », it can be in many direction. If two persons are in North Pole and South Pole of Earth, and ...
0
Your confusion is that you are treating gravity from all those objects as a bunch of discrete 'pulls' - but it doesn't manifest itself like that. Gravity from all bodies in the universe effects you as one force (I'm simplifying and excluding getting close to black holes etc where you have dramatically changing gravitational potential over the length of your ...
2
I think what you have established here is just that $\rho$ tends to increase with mass. The density of planets isn't constant. Let $\rho = \rho_0 (M/M_{earth})^{\alpha}$, so that $M = (4/3)\pi R^{3} \rho_0 (M/M_{earth})^{\alpha}$ Then $$g = \frac{GM}{R^2} = \frac{4\pi G}{3} R \rho$$ Replace $R$ with $(3M/4\pi \rho)^{1/3}$ so that $$g = \frac{4\pi G}{3} ... 1 For planets of constant mean density you have:$$ M=\rho \times 4\pi r^3 $$and the surface value of g is:$$ g(r)=\frac{GM}{r^2}=G \times \rho\times 4 \pi \times r So for bodies of constant density the surface gravity is proportional to the radius, and the slope as $r \to 0$ tells you the density. So for bodies of equal density $\log(g(r)) \to -\infty$ ...
Top 50 recent answers are included
|
2015-05-30 02:29:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9404608607292175, "perplexity": 633.1364455470316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.66/warc/CC-MAIN-20150521113210-00259-ip-10-180-206-219.ec2.internal.warc.gz"}
|
http://www.physicsforums.com/showthread.php?s=7e7080d13bb4f7a93918047127a3c7a9&p=4062977
|
## Divide number into 3 parts with each part being 1.6 times greater than the last
A friend posed this just for fun but now its really annoying me.
how do you divide 90 into three parts so that each part is 1.6 times greater than the last.
i.e: the second value should be 1.6 times greater than the first and the third value should be 1.6 times greater than the second?
Im confusing myself with this.
Thanks
Recognitions: Science Advisor Hi CF.Gauss. Just write it as $x + 1.6 x + 1.6^2 x = 90$ and then factorize out the "x" and simplify.
Recognitions:
Gold Member
Quote by uart Hi CF.Gauss. Just write it as $x + 1.6 x + 1.6^2 x = 90$ and then factorize out the "x" and simplify.
Hm ... I thought the goal here was to help people think and understand things, not spoon-feed them answers. Have I got that wrong?
Recognitions:
Science Advisor
## Divide number into 3 parts with each part being 1.6 times greater than the last
Quote by phinds Hm ... I thought the goal here was to help people think and understand things, not spoon-feed them answers. Have I got that wrong?
Actually I did not give the answer, I left the factorizing and the following arithmetic for the OP to do. This is in effect the first line of what probably be a three line derivation for the OP.
I agree though that it does give away a large part of the overall solution. Sometimes with such a simple question it's hard to know how to give the OP a "start" without giving away too much.
BTW. To me this looked more like a curiosity question than homework anyway, though of course I don't know that for sure.
Recognitions:
Gold Member
Quote by uart Actually I did not give the answer, I left the factorizing and the following arithmetic for the OP to do. This is in effect the first line of what probably be a three line derivation for the OP. I agree though that it does give away a large part of the overall solution. Sometimes with such a simple question it's hard to know how to give the OP a "start" without giving away too much. BTW. To me this looked more like a curiosity question than homework anyway, though of course I don't know that for sure.
Yeah, I can't argue w/ that. Still, I was going to try to lead him to an equation rather than give it to him.
Thread Tools
Similar Threads for: Divide number into 3 parts with each part being 1.6 times greater than the last Thread Forum Replies Precalculus Mathematics Homework 1 General Math 6 General Math 2 General Discussion 4 General Math 24
|
2013-05-21 20:54:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5282109975814819, "perplexity": 573.8529533890735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700563008/warc/CC-MAIN-20130516103603-00075-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://proofwiki.org/wiki/Cotangent_Function_is_Odd
|
# Cotangent Function is Odd
## Theorem
Let $x \in \R$ be a real number.
Let $\cot x$ be the cotangent of $x$.
Then, whenever $\cot x$ is defined:
$\map \cot {-x} = -\cot x$
That is, the cotangent function is odd.
## Proof
$\displaystyle \map \cot {-x}$ $=$ $\displaystyle \frac {\map \cos {-x} } {\map \sin {-x} }$ Cotangent is Cosine divided by Sine $\displaystyle$ $=$ $\displaystyle \frac {-\sin x} {\cos x}$ Cosine Function is Even and Sine Function is Odd $\displaystyle$ $=$ $\displaystyle -\cot x$ Cotangent is Cosine divided by Sine
$\blacksquare$
|
2019-12-13 05:55:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975315928459167, "perplexity": 632.6281507198466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540548544.83/warc/CC-MAIN-20191213043650-20191213071650-00033.warc.gz"}
|
https://gdeq.org/Samokhin_A._Gradient_catastrophes_for_Burgers_equation_on_a_finite_interval._Numerical_and_qualitative_study,_talk_Workshop_Geom._of_PDEs_and_Integrability,_14-18_Oct_2013,_Teplice_nad_Becvou,_Czech_Rep._(abstract)
|
# Samokhin A. Gradient catastrophes for Burgers equation on a finite interval. Numerical and qualitative study, talk Workshop Geom. of PDEs and Integrability, 14-18 Oct 2013, Teplice nad Becvou, Czech Rep. (abstract)
Speaker: Alexey Samokhin
Title: Gradient catastrophes for Burgers equation on a finite interval. Numerical and qualitative study
Abstract:
We consider initial value-boundary problem (IVBP) for the Burgers equation
${\displaystyle u_{t}(x,t)=u_{xx}(x,t)+2\eta u(x,t)u_{x}(x,t)}$
on a finite interval:
${\displaystyle u(x,0)=f(x),\quad u(\alpha ,t)=l(t),\quad u(\beta ,t)=r(t),\quad x\in [\alpha ,\beta ]}$.
The case of constant boundary conditions ${\displaystyle u(\alpha ,t)=A}$, ${\displaystyle u(\beta ,t)=B}$ and its asymptotics is of special interest here. For such a IVBP viscosity usually produces asymptotic stationary solution which is invariant for some subalgebra of the full symmetry algebra of the equation. But the evolution may also result in a stable gradient catastrophe.
Slides: Samokhin A. Gradient catastrophes for a generalized Burgers equation on a finite interval (presentation at The Workshop on Geometry of PDEs and Integrability, 14-18 October 2013, Teplice nad Becvou, Czech Republic).zip
|
2021-06-25 13:07:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5558791756629944, "perplexity": 3316.67159267278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630175.17/warc/CC-MAIN-20210625115905-20210625145905-00465.warc.gz"}
|
https://sites.google.com/site/nygrouptheory/fall2018
|
### Fall2018
All the talks are 4:00pm-5:00pm in Room 5417 at the CUNY Graduate Center.
Wine and cheese are served afterwards in the math lounge on the 4th floor.
Sep. 7
### Tullio Ceccherini-Silberstein (Universita degli Study del Sannio)
TITLE: Garden of Eden Theorems: from Symbolic Dynamics to Algebraic
Dynamical Systems.
ABSTRACT: The Garden of Eden Theorem is a central result in the theory of
cellular automata. Given a finite alphabet set A and a group G, a
continuous G-equivariant (w.r. to the G-shift) map $\tau \colon A^G \to A^G$ is called a cellular automaton. The GOE theorem states that a
cellular automaton is surjective if and only if it is pre-injective (a
weaker condition than injectivity).
It was proved by Moore and Myhill in 1963 with G = Z^d the free abelian
group of rank d, and it was extended to all amenable groups
(Ceccherini-Silberstein, Machi, and Scarabotti 1999; Gromov 1999).
It was later shown that it fails to hold for any nonamaneble group
(Bartholdi, 2010), thus yielding a new characterization of amenability.
Following a suggestiion by Gromov, namely that the Garden of Eden theorem
could be extended to dynamical systems with a suitable hyperbolic flavor,
a Garden of Eden type theorem was proved for Anosov diffeomorphisms on
tori (Ceccherini-Silberstein and Coornaert, 2015) and for principal algebraic
dynamical systems satisfying a weak form of expansivity
(Ceccherini-Silberstein, Coornaert, and Li, 2018).
The talk will be completely self-contained.
Sep. 14 Martin Kreuzer
"Computing the Canonical Decomposition of a Finite Z-Algebra".
Abstract: In this joint work with Alexei Miasnikov and Florian Walsh,
we present an efficient method for computing the canonical decomposition
of a finite Z-algebra R into irreducible factors. Although the existence
of this decomposition has been known for along time, its equational
definability
is a more recent result. The first step is to compute the maximal ring
of scalars S(R). This is a finite commutative Z-algebra with an
explicitly computable
presentation. Then we use strong Groebner bases to calculate the fundamental
idempotents of S(R),. They allow us to compute the desired canoncial
decomposition in the final step. All steps of the procedure have a
polynomial
complexity, except for the strong Groebner basis calculation which
has a singly exponential bound.
Sep.28 Vladimir Shpilrain, Complexity in SL_2(Z) and SL_2(Q)
Abstract. We reflect on how to define complexity of a matrix and how to sample a random invertible matrix. We also discuss a related issue of complexity of algorithms in matrix groups, focusing on computational complexity of the subgroup membership problem for some important special subgroups of SL_2(Z) and SL_2(Q).
The talk is based on joint work with Anastasiia Chorna, Katherine Geller, Lisa Bromberg, and Alina Vdovina.
Oct.5 Alexei Miasnikov
Malcev's Problems and First-Order Paradise in Groups
Oct.12 Ilya Kapovich (Hunter College, CUNY)
Title: Counting conjugacy classes of fully irreducibles: double exponential growth
Abstract: A 2011 result of Eskin and Mirzakhani shows that for a closed hyperbolic surface S of genus $g\ge 2$, the number $N(L)$ of closed Teichmuller geodesics of length $\le L$ in the moduli space of $S$ grows as $e^{hL}/(hL)$ where $h=6g-6$. The number $N(L)$ is also equal to the number of conjugacy classes of pseudo-Anosov elements $\phi\in MCG(S)$ with $\log\lambda(\phi)\le L$, where $\lambda(\phi)$ is the dilatation" or stretch factor" of $\phi$. We consider an analogous problem in the $Out(F_r)$ setting for the number $N_r(L)$ of fully irreducible elements $\phi\in Out(F_r)$ with $\log\lambda(\phi)\le L$. We prove, for $r\ge 3$, that $N_r(L)$ grows doubly exponentially in $L$ as $L\to\infty$, in terms of both lower and upper bounds. These bounds reveal behavior not present in classic hyperbolic dynamical systems. The talk is based on a joint paper with Catherine Pfaff.
Oct.19 Daniel Studenmund (U of Notre-Dame)
Title: Commensurability growth of nilpotent groups
Abstract: A classical area of study in geometric group theory is subgroup growth, which counts the number of subgroups of a given group Gamma as a function their index. We will study a richer function, the commensurability growth, associated to a subgroup Gamma in an ambient group G. The main results of this talk concern arithmetic subgroups Gamma of unipotent groups G, following subgroup growth results by Grunewald, Segal, and Smith. We start with the simplest example of the integers in the real line. This is joint work with Khalid Bou-Rabee.
Oct.26 Lam Pham (Yale University)
Title: On Uniform Kazhdan Constants for Finitely Generated Linear Groups.
Abstract: If $G$ is a finitely generated group and $(\pi,\mathcal{H})$ is a unitary representation of $G$ on a Hilbert space $\mathcal{H}$ without $G$-invariant vectors, it is of interest to know if $\pi$ has a spectral gap; when all such representations $(\pi,\mathcal{H})$ have a spectral gap, $G$ is said to have Kazhdan's Property $(T)$. in general these spectral gaps depend on the choice of the generating set $S$ of $G$, and an important question is whether this dependence on $S$ can be removed. It is an open problem to determine if $\mathrm{SL}(3,\mathbb{Z})$ is uniform Kazhdan (i.e., the Kazhdan constant is independent of the choice of generators of $\mathrm{SL}(3,\mathbb{Z})$). In this talk, I will: (1) give an overview of the literature on explicit Kazhdan constants of finitely generated groups since the first explicit computation due to Burger (1991), and (2) present some new results on uniform spectral gaps for actions of the affine group over the integers.
Nov.2 Ben Fine's 70 conference
Nov.9 Catherine Pfaff (Queen's University at Kingston)
Title: Random automorphisms of free groups and what happens when you iterate them.
Abstract:
Two of the most natural and interesting questions one can ask about an automorphism group is what a random element of the automorphism group looks like and what happens as one repeatedly applies the automorphism to an element of the group (the asymptotic conjugacy class invariants). In the mapping class group circumstance, these questions (and their intersection) have been thoroughly studied with results dating back to Nielsen and Thurston, and then more recently with Dahmani, Horbez, Maher, Rivin, Sisto, Tiozzo, etc. While some is known in the outer automorphism group of the free group setting, little to nothing has been known about the most basic questions in the intersection of the main classes of questions, i.e. understanding the asymptotic conjugacy class invariants of random (outer) automorphisms of free groups. Together with Ilya Kapovich, Joseph Maher, and Samuel Taylor, we give a fairly detailed answer to this question.
Nov.16 Ben Steinberg
Nov.23 Thanksgiving
Nov.30
Dec.7 Henry Bradford, (Georg-August Universität Göttingen).
|
2018-10-23 03:32:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6830728650093079, "perplexity": 799.9568899133008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516003.73/warc/CC-MAIN-20181023023542-20181023045042-00461.warc.gz"}
|
https://solvedlib.com/n/a-ball-that-is-thrown-upwards-from-the-ground-will-eventuallyreach,17702189
|
# A ball that is thrown upwards from the ground will eventuallyreach its highest point and fall back to the ground.
###### Question:
A ball that is thrown upwards from the ground will eventually reach its highest point and fall back to the ground. Which one of the following vector quantities is always directed in the same direction as the ball travels along this up and down path? The ball's velocity. The ball's acceleration. Both of these. None of these.
#### Similar Solved Questions
##### Evaluate each sum using a formula for $S_{n}$. $\sum_{i=1}^{7}(-2 i+7)$
Evaluate each sum using a formula for $S_{n}$. $\sum_{i=1}^{7}(-2 i+7)$...
##### Assume that blood pressure readings are normally distributedwith μ = 111 and σ = 7. A researcher wishes to select people for astudy but wants to exclude the bottom 25%. Find the highest bloodpressure reading a person can have and be in the lowest 25%. Note:x = μ + zσ
Assume that blood pressure readings are normally distributed with μ = 111 and σ = 7. A researcher wishes to select people for a study but wants to exclude the bottom 25%. Find the highest blood pressure reading a person can have and be in the lowest 25%. Note: x = μ + zσ...
|
2022-07-06 00:40:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6775455474853516, "perplexity": 970.2598702441982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00411.warc.gz"}
|
https://brilliant.org/problems/triplet-twins/
|
# Triplet Twins
How many ordered sets of primes $$\left(p,q,r\right)$$ are there such that $$p<q<r$$ and $$r-q = q-p = 2$$?
|
2017-01-23 00:30:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9978408217430115, "perplexity": 291.4344974302092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00338-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://www.khanacademy.org/math/geometry-home/geometry-area-perimeter/geometry-unit-squares-area/v/creating-rectangles-with-a-given-area-2-math-3rd-grade-khan-academy
|
# Creating rectangles with a given area 2
CCSS Math: 3.MD.C.7a
## Video transcript
- So, here is our given rectangle, and we want to draw a rectangle with the same area, the same area, so what is the area of this rectangle? Area is the amount of space a shape covers, so how much space, or how many square units does this shape cover, does our rectangle cover? Each of these is one square unit, so our rectangle covers one, two, three, four, five, six, seven, eight square units. It has an area of eight square units. So, we want to draw another rectangle that also covers eight square units. If it covers eight square units, than it has an area of eight square units, but we can't just draw the identical rectangle, because we're also told that it should have, our rectangle should have no side lengths the same, so what are the side lengths of our rectangle? Over here on the left, it's one unit long, and going across the top is eight units long. This rectangle had eight square units, and they were broken up into one row of eight, so we need to think of another way that we can break up eight square units. One idea would be two rows of four, 'cause two rows of four would also cover eight, so let's try that. Let's create a rectangle here, two rows of four, and we can just spread this out a little bit so it covers the whole square units, and so this rectangle also covers one, two, three, four, five, six, seven, eight square units, so the given rectangle, and our rectangle have the same area because they cover the same amount of space, but they have different side lengths, because our new rectangle is, has a side length of two over here on the side, it's two units long, and going across the top is four units long, so it has new side lengths, so here's one way that we could draw a rectangle with the same area, but different side lengths.
|
2019-10-19 15:11:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8258374333381653, "perplexity": 329.87026481726036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986696339.42/warc/CC-MAIN-20191019141654-20191019165154-00158.warc.gz"}
|
https://greenemath.com/College_Algebra/50/Bill-and-Coin-Word-ProblemsLesson.html
|
Lesson Objectives
• Demonstrate an understanding of how to translate phrases into algebraic expressions and equations
• Demonstrate an understanding of the six-step method used for solving applications of linear equations
• Learn how to solve coin word problems
## How to Solve Bill and Coin Word Problems
In our last lesson, we reviewed the six-step method used to solve a word problem that involves a linear equation in one variable.
### Six-step method for Solving Word Problems with Linear Equations in One Variable
1. Read the problem and determine what you are asked to find
2. Assign a variable to represent the unknown
• If more than one unknown exists, we express the other unknowns in terms of this variable
3. Write out an equation which describes the given situation
4. Solve the equation
5. State the answer using a nice clear sentence
6. Check the result
• We need to make sure the answer is reasonable. In other words, if asked how many students were on a bus, the answer shouldn't be (-4) as we can't have a negative amount of students on a bus.
### Solving Bill and Coin Word Problems
Essentially this type of problem will give us a value for all of the coins or bills in the problem. Our goal will be to find the individual number of coins or bills involved. Let's look at an example.
Example 1: Solve each word problem
At the end of Jessica's shift, she counted down her register. She had a total of $14.25 in change (coins). This change consisted of nickels, dimes, and quarters only. There were three times as many quarters as dimes and two-thirds the number of nickels as quarters. How many of each type of coin did Jessica have in her register? Step 1) After reading the problem, it is clear that we want to find the number of nickels, dimes, and quarters that were in Jessica's register Step 2) When we have more than one unknown, we can let a variable represent one of the unknowns and then model the other unknowns based on our variable. In this case, quarters are used in both comparisons. let x = # of quarters that were present in Jessica's register Then (1/3)x = # of dimes that were present in Jessica's register (since the number of quarters was 3 times more than the number of dimes) Then (2/3)x = # of nickels that were present in Jessica's register (since the number of nickels was 2/3 of the number of quarters) Step 3) To make an equation, we have to think about the information given. Many people make the mistake of trying to just add the amounts we just came up with (1/3)x, (2/3)x, and x and set this equal to 14.25. This will not work as we are comparing a number to a value. In other words,$14.25 is the value of the coins, whereas the sum of (1/3)x, (2/3)x, and x would be a number of coins. In this case and with most coin or denomination of money problems, we have to take an extra step and multiply each coin by its value. Let's look at the information organized in a chart:
Coin Number of Coins Value of Each Coin Total Value
Nickel(2/3)x.05(1/30)x
Dime(1/3)x.10(1/30)x
Quarterx.25.25x
To make things easier, we can use 1/20 in the place of .05, 1/10 in the place of .1 and 1/4 in the place of .25.
Total Value:
Nickels: $$\frac{1}{20}\cdot \frac{2}{3}x=\frac{1}{30}x$$ Dimes: $$\frac{1}{10}\cdot \frac{1}{3}x=\frac{1}{30}x$$ Quarters: $$\frac{1}{4}x$$ Let's set up our equation: $$\frac{1}{30}x + \frac{1}{30}x + \frac{1}{4}x=14.25$$ Step 4) Solve the equation: $$\frac{1}{30}x + \frac{1}{30}x + \frac{1}{4}x=14.25$$ $$\frac{2}{30}x + \frac{1}{4}x=14.25$$ $$\frac{1}{15}x + \frac{1}{4}x=14.25$$ Clear the fractions, multiply each side by 60: $$4x + 15x=855$$ $$19x=855$$ $$x=45$$ Step 5) Since x represented the number of quarters, this tells us Jessica had 45 quarters. She had 2/3 the number of nickels as quarters. This means she had 30 nickels (45 • 2/3). Lastly, she had 1/3 the number of dimes as quarters. This means she had 15 dimes (45 • 1/3). We can state our answer as:
Jessica had 30 nickels, 15 dimes, and 45 quarters in her register.
Step 6) We can check to see if the value matches.
Nickels: 30 • .05 = 1.5
Dimes: 15 • .1 = 1.5
Quarters: 45 • .25 = 11.25
Sum the amounts:
1.5 + 1.5 + 11.25 = 14.25
14.25 = 14.25
We can also check the number of coins: we are told there are three times as many dimes as quarters:
3 • 15 = 45
45 = 45
We are also told there are two-thirds the number of nickels as quarters:
45 • 2/3 = 30
30 = 30
#### Skills Check:
Example #1
Solve each word problem.
Megan, a cashier at a local retail store has a total of $210 in her register. That amount comes from 5 dollar bills and 20 dollar bills only. If the number of 5 dollar bills is 2 more than the number of 20 dollar bills, how much of each type of bill is in her register? Please choose the best answer. A 10:$5 bills, 8: $20 bills B 12:$5 bills, 6: $20 bills C 9:$5 bills, 4: $20 bills D 12:$5 bills, 10: $20 bills E 12:$5 bills, 14: $20 bills Example #2 Solve each word problem. Charlotte has a piggy bank with nickels, quarters, and dimes only. The piggy bank has a total of$61.50. The number of dimes is 60 less than the number of nickels. Additionally, if the number of quarters was reduced by 10, the number of dimes would be three times the amount of quarters. How much of each type of coin is in Charlotte’s piggy bank?
A
n: 455, d: 100, q: 19
B
n: 300, d: 240, q: 90
C
n: 500, d: 440, q: 176
D
n: 132, d: 72, q: 54
E
n: 292, d: 177, q: 55
Example #3
Solve each word problem.
Chloe works at the local fair. In order to play the games, customers must exchange money for one of two coin types, a 20 unit coin, or a 3 unit coin. At the end of Chloe’s shift, her coin bag has a total value of 2057 units. If she has 11 more 3 unit coins than 20 unit coins, how much of each coin type does she have in her bag?
A
20 unit: 10, 3 unit: 21
B
20 unit: 88, 3 unit: 99
C
20 unit: 101, 3 unit: 112
D
20 unit: 50, 3 unit: 61
E
20 unit: 70, 3 unit: 81
|
2021-04-17 05:41:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6078293919563293, "perplexity": 1386.8733874614686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038101485.44/warc/CC-MAIN-20210417041730-20210417071730-00397.warc.gz"}
|
http://zanj.nauticalsas.it/align-table-in-latex.html
|
Align Table In Latex
Here are some external resources for finding less commonly used symbols: Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! MathJax (what allows us to use on the web) maintains a list of supported commands. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. If I copy the code into a {article} class,. Note that math mode ignores whitespace, in fact, this whole code could have been put on one line and still would have compiled correctly. Moving a contiguous row or column—i. If the tip contains good advice for current Vim, remove the {{review}} line. where in our document the table should be positioned and whether we want it to be displayed centered. The align* environment is used like the displaymath or equation* environment:. LaTeX doesn't have a specific matrix command to use. Single formulas must be seperated with two backslashes \\ Use the matrix environment to typeset matrices. In order to center the equation number, surround the array environment with an equation environment \begin{align} \dfrac{\partial^2 T \left( x, t. This gives the author the ability to have. If your table don't take all available space and you want to put text next or before it, is possible with the package wrapfig. You can change the fonts, numbering style, alignment and format of the captions and the caption labels. Click a cell, or press Ctrl + A to select all cells. Arguments x. Most likely your other tables are initially hidden. format: A character string. More than 20 years of research have gone into our 24/7 Digestive Support formula. Updated daily | Make-Table-Align-Top-Latex. Beamer Tutorial About Beamer Advantages of Beamer The standard commands of LATEX also work in Beamer. de and here on matheplanet. putting the equation on the far left side of page, and aligning subsequent lines). This method works with multiple lines of text and the container div will grow dynamically with the content. You can then type latex paper. , points, inches, etc. recommended this. align and align* The align and align* environments, available through the amsmath package, are used for arranging equations of multiple lines. For whatever reason, should you wish to alter the justification of a paragraph, there are three environments at hand, and also LaTeX command equivalents. LaTeX or PDF via LaTeX" the output is correct. The solution I found in the end, was to use the align environment: \begin{align} E &= mc^2 onumber \\ E^2 &= m^2p^2 + m^2c^4 onumber \end{align}. LaTeX is an editing tool that takes care of the format so you only have to worry about the contents of your document. We have already seen in the last section that it is often useful to enclose a tabular environment in a table environment. Add suitable categories so people can find the tip. You will be able to change the vertical alignment for any number of cells that you have currently selected in the table. Finding Other Symbols. Align to fist command; Extend defaults aligment chars list. select and copy (Ctrl+C) a table from the spreadsheet (e. LaTeX arrows. For example if you are using article documentclass then you may want to use the following command. Select a column, or row. Table des matières - Généralités - Premiers pas - Structure du document - Gestion de la bibliographie - Tableaux - Images - Éléments flottants et figures - Mise en forme du texte - Choix de la police - Mise en page - Mathématiques - Gestion des gros documents - Faire des présentations - Arts et loisirs - Dessiner avec LaTeX - Créer une. The most flexible alignment plugin for Sublime Text 3. COMPLETE Shopify Tutorial For Beginners 2020 - How To Create A Profitable. The table can be converted to latex or html. Very Basic Mathematical Latex A document in the article style might be entered between the nmaketitle and nendfdocumentgsections below. There are different way of placing figures side by side in Latex, subcaption, subfig, subfigure, or even minipage. If graphics or graphics and text had to be set side-by-side they would often be placed inside minipage environments. LEFT Flush left with the left text margin. Must fit on one page. This is done by adding l’s (align left), c’s (align center) and r’s (align right) in addition to |’s (vertical bar for cell separation) as the argument of the command tabular. Notice the \label {tab:somelabel} inside the \caption. Default value: alignment. To create the table of contents is straightforward, the command \tableofcontents does the job: \documentclass{ article } \usepackage[utf8]{ inputenc } \title{ Sections and Chapters } \author{ Gubert. Hi Team, I have a table in the dashboard, wherein i want first column to be left aligned and rest all the columns to be center aligned as shown in the below image. Then you also have to provide the width of the column, for example you could use \begin{tabular}{cp{5cm}}. To align a column to the left use l in place of c, and to align it to the right use r. Greek letters []. This package simplifies the way to manipulate the HTML or 'LaTeX' codes generated by 'kable ()' and allows users to construct complex tables and customize styles. We have already seen in the last section that it is often useful to enclose a tabular environment in a table environment. The solution I found in the end, was to use the align environment: \begin{align} E &= mc^2 \nonumber \\ E^2 &= m^2p^2 + m^2c^4 \nonumber \end{align}. digits: Maximum number of digits for numeric columns, passed. Tables can be used as formatting instrument, but consider using a multi column list instead. In a LaTeX document the table of contents can be automatically generated, and modified to fit a specific style, this article explain how. flush with both the left and right margins. tex and the typesetting program will run on your file of commands, producing a file ending in. How to Change a Table’s Horizontal Alignment. An integrated writing environment for creating LaTeX documents #272 Alignment of tables and equations at ampersand Anonymous Private: No It would be very helpful when TeXstudio could align content of math and table environments to the ampersand character (&) so that you can overview your content much better. The following figure shows where the Align buttons are on the (Table Tools) Layout tab and how these options align text in a table. ↓ has to be loaded in the LaTeX-preamble with the line [R] [R] When text is colored somewhere in the document with a predefined color, LyX loads the LaTeX-package color automatically. Registration is quick, simple and absolutely free. Changed category to Formatters. sty) when exporting to LaTeX without messing the basic org-mode style table for other exports? Thanks. 04089 20 640. [code]\documentclass[fleqn]{article} [/code]This will left align all the equation in the article. \documentclass{article} %\addtolength{\textheight}{+. LaTeX normally sets the width of the tabular environment to "natural" width, i. The default column types (left-aligned l; center-aligned c; and right-aligned r) adjust to the text size, rather than wrapping text automatically. Function 'kable()' is a light weight table generator coming from 'knitr'. The Alignment toolbox has nine buttons for aligning text in a table in Microsoft Word. For whatever reason, should you wish to alter the justification of a paragraph, there are three environments at hand, and also LaTeX command equivalents. Mostly the binary operators (=, > and ) are the ones which are aligned in order to make document beautiful. This is my table:. With no options, a very wide table goes off the page on the right. l - A column of left-aligned items. I use text-align:right because I like the numbers aligned on their right side when double or triple digits are introduced. Ahhhh tables — one of the infuriating things with tables in LaTeX, is that sometimes they're so wide that they extend into the right margin, or even off the page. However this doesn't work as expected as in the following MWE-screencast:. Make Whole Table Align Top Latex: A claw hammer, wood chisel set, a hand saw, a miter box with a saw (for cutting angles), a coping saw, finish punches, flat and straight tip screwdrivers, a rubber mallet (for tapping pieces together while not damaging the wood), woodworking clamps, a wood vise, a bench plane, a rasp, a tape measure, a 12 steel rule, a 6 steel square, and dont forget the wood. The problem occured after I changed the left margin of the page from 1. where in our document the table should be positioned and whether we want it to be displayed centered. Skip ahead to live broadcast. To produce (horizontal) blank space within a paragraph, use \hspace, followed by the length of the blank space enclosed within braces. So table is out of the question (unless you know a fix). This is just indented with indent-region. [code]\documentclass[fleqn]{article} [/code]This will left align all the equation in the article. LaTeX->PDF I've found that the best way to convert from latex to PDF is to use "pdflatex. Word offers nine ways to align text. The above example would align the top of the tabular with the text baseline, a b would align the bottom. GFM Markdown table syntax is quite simple. These are the traditional values for text-align: left - The default value. From left to right, and top to bottom, the buttons let you align text to the right, and top, center and top, and left and top. select and copy (Ctrl+C) a table from the spreadsheet (e. Platinum Product Expert. Take a look at how a normal table looks like when it is displayed using a variable width font:. Floats are there to deal with the problem of the object that won't fit on the present page, and to help when you really don't want the object here just now. Align contains the probiotic bacteria Bifidobacterium 35624™ More than 20 years of research have gone into our formula. Original Poster. This is an online LaTex table generator and editor, which can generate LaTex tables from excel, csv, json, html table, markdown table, etc. (That is, if the cell is a box, I want there to be white space above and below the text, if there is some other cell with more text. Escape special LaTeX symbols. Basic Idea and general commands []. The 'tabular*' environments allows for setting a width; however, it is necessary to have rubber space between columns that can expand to the specified. Using a dedicated package for ToC customization might be better, though. Arguments x. table Roman Tsegelskyi, Gergely Daróczi 2016-05-13. Possible values are latex, html, markdown, pandoc, and rst; this will be automatically determined if the function is called within knitr; it can also be set in the global option knitr. With no options, a very wide table goes off the page on the right. Floating tabular Edit. Beamer Tutorial About Beamer Advantages of Beamer The standard commands of LATEX also work in Beamer. A frequently seen mistake is to use \begin {center} … \end {center} inside a figure or table environment. What is the answers to module 18 foolproof. Here are some external resources for finding less commonly used symbols: Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! MathJax (what allows us to use on the web) maintains a list of supported commands. Table 2: Table in agreement of the general typeset rules. Free Download. LaTeX is a high-quality open source typesetting software that produces professional prints and PDF files. one is liable to encounter errors that complain about a "misplaced \noalign" or "extra alignment tab", or the like. Added spaces to make the content fit nicely in the code can also be omitted. However, if you want to vertically align text inside a text box or align text around a circle, these. Alignment, handling of long headers The use of light gray to further divide the tables Horizontal lines provide readability under denser packing and when lots of numbers are organized Sans serif fonts are preferrable for readability; of course, if you need math symbols and use latex, then stick with roman. Possible values are latex, html, markdown, pandoc, and rst; this will be automatically determined if the function is called within knitr; it can also be set in the global option knitr. Online LaTeX editor with autocompletion, highlighting and 400 math symbols. odd>td { text-align: center!important; } #. Because of how LaTeX deals sensibly with logical structure, it will automatically keep track of the numbering of figures, so you do not need to. \begin{align} e^x&=\sum_{k=0}^{\infty}\frac{x^k}{k!} \end{align} LaTeX on Pundit does not process changes to the \arraystretch variable - so arrays cannot be stretched or contracted. \begin{figure} and \begin{table} span only one column. Boxed equations in LaTeX. For narrow tables it is sometimes more pleasing to make them wider. Documents like " Obsolete packages and commands " ("l2tabu") address the need of up-to-date information. stargazer is a new R package that creates LaTeX code for well-formatted regression tables, with multiple models side-by-side, as well as for summary statistics tables. org and needs general review. Render Latex equations into plain text ASCII to insert as comments in source-code, e-mail, or forum. Can I include html code in an RMarkdown document?. Core functionality of pander is centered around pandoc. The steps in this article are going to show you how to select the vertical alignment for data that is entered into a cell in a table of your document. Information and discussion about graphics, figures & tables in LaTeX documents. I've generally determined that I want to set the [fleqn] option in the amsmath package to left a. Rather than\\begin{eqnarray*} x^2+ y^2&=& 1\\\\ y&=&\\sqrt{1. Floating tabular Edit. Embed formulas in your text by surrounding them with dollar signs The equation environment is used to typeset one formula. The reasoning for LaTeX is this: align does not say if this aligns columns or the caption or the table as a whole (all of which can be specified separately in LaTeX, although the astropy writer currently only supports col_align through a keyword [there are other ways to put a \begin{center} \ end{center} round your data using the latex_dict]). LaTeX table manipulation using Emacs/AUCTeX+align. This method works with multiple lines of text and the container div will grow dynamically with the content. However this doesn't work as expected as in the following MWE-screencast:. Nevertheless, sometimes a better control of floating elements is necessary. If you want to use them in text just put the arrow command between two like this example: $\uparrow$ now you got an up arrow in text. Added spaces to make the content fit nicely in the code can also be omitted. The steps in this article are going to show you how to select the vertical alignment for data that is entered into a cell in a table of your document. , determined from the contents of the columns. انعطاف و چسبندگی بسیار خوب برای کاربردهای روزمره. the various ‘chunks’ of a table align. 620 MazdaRX4Wag 21. This article explains how to insert spaces of different lengths in mathematical mode. align There's a lot of freely available documentation for LaTeX, but there's a pitfall: some documents that are still online are outdated and therefore contain obsolete information. 8-4 Date 2019-04-08 Title Export Tables to LaTeX or HTML Maintainer David Scott. How do I reference my LaTeX tables or equations? How do I convert my LaTeX document to Word? Sometimes a long equation needs to be broken over multiple lines, especially if using a double column export style. ) for each column separately. in my apa6 class options. The alignment in org-mode tables is handled assuming that the content of the table is shown using a fixed width font. There are three similar constructs for doing tables in LaTeX. , determined from the contents of the columns. A frequently asked question is how to get top alignment, like here on mrunix. Describe the bug If the kable align argument is set to something other than l, c, or r, adding a footnote using kableExtra::footnote would create a Latex table improperly displayed in the middle of the page. [code]\documentclass[fleqn]{article} [/code]This will left align all the equation in the article. LaTeX arrows. JSON 2d array editor and generator. txt files, respectively, using the out argument. For example if you are using article documentclass then you may want to use the following command. Hi Team, I have a table in the dashboard, wherein i want first column to be left aligned and rest all the columns to be center aligned as shown in the below image. Paragraph alignment []. If you want to change the way data appears in a cell, you can rotate the font angle, or change the data alignment. CMO Australia addresses the unique marketing, technology and leadership challenges chief marketers face as they look to align their own practices and insights with those of the business. This plugin is inspired by the excellent VIM plugin, tabular. I will give a small example code to create a table of contents first: After compiling the. Warning: Possible column misalignment. Fortunately, there's a tool that can greatly simplify the search for the command for a specific symbol. Latex-Suite does this via the global variable g:Tex_IgnoredWarnings. (This may have been changed in the newest versions of Matlab. Board index LaTeX Templates Theses, Books, beautiful template, thx!! I have an issue with my tables though. LaTeX is a high-quality open source typesetting software that produces professional prints and PDF files. A character string. انعطاف و چسبندگی بسیار خوب برای کاربردهای روزمره. I'm trying to change the alignment of the TOC of this document in order to make a long title, which occupies more than one line, to break to the next line before they are vertically aligned to the. Online LaTeX editor with autocompletion, highlighting and 400 math symbols. LaTeX by default recognizes "table" and "figure" floats, but you can define new ones of your own (see Custom floats below). Re: Number Alignment in Data Table Post by hugovdberg » Wed Feb 05, 2014 7:06 pm That third column is actually centered on the decimal point, I'm not sure what you want it to be centered on, but you might want to take a look at the siunitx manual, section 5. To have a paragraph style text and vertical alignment both at the same time, use ">{\centering\arraybackslash}m{3cm}" to specify cell format at the top and use \multicolumn{1}{m}{Text} in the table entry:. They are: Table A floating "thing" in a paper. LaTex Tutorial 8: Advanced Tables Thomas J. The outer pipes can also be left out. The align* environment is used like the displaymath or equation* environment:. Latex provides a huge number of different arrow symbols. It can be frustrating trying to get your figures and tables to appear where you want them in a LaTeX document. sty) when exporting to LaTeX without messing the basic org-mode style table for other exports? Thanks. The LaTeX export back-end can handle complex documents, incorporate standard or custom LaTeX document classes, generate documents using alternate LaTeX engines, and produce fully linked PDF files with indexes, bibliographies, and tables of contents, destined for interactive online viewing or high-quality print publication. 2 values ‘T’ and ‘B’ additionally which align the subfigureat the very top resp. This document attempts to illustrate various methods for centering tables, either by using CSS, or by using html methods in the context of a transitional doctype declaration. You only need three dashes beneath your first row in order to define the table. To insert a figure in a LaTeX document, you write lines like this: \begin{figure} \centering \includegraphics[width=3. It's not that I want the text of the tables center aligned, I want the table physically in the center of the slide. LaTeX files usually have a. Latex Tutorial 3 of 11: Table of Contents and Front Matter - Duration: 12:04. Alignment, handling of long headers The use of light gray to further divide the tables Horizontal lines provide readability under denser packing and when lots of numbers are organized Sans serif fonts are preferrable for readability; of course, if you need math symbols and use latex, then stick with roman. The align* environment is used like the displaymath or equation* environment:. This could lead to extra wide tables or to a bad looking if the width differs between the rows. Since the vertical-align property works with table cells we set the parent div to be a css table and we set the child div as a table cell. Welcome to LinuxQuestions. Width of columns determined automatically. Click a cell, or press Ctrl + A to select all cells. The table-layout property defines what algorithm the browser should use to lay out table rows, cells, and columns. In RTF mode string should be one of l , c , r , and j. T he captions for figures, tables, subfigures and subtables in LaTeX can be customized in various ways using the caption and subcaption packages. Then right-click and go to Table properties > Cell vertical alignment and change the alignment to "Top. Render Latex equations into plain text ASCII to insert as comments in source-code, e-mail, or forum. The title of the section should be surrounded by braces and placed immediately after the relevant command. LaTeX doesn't have a specific matrix command to use. Fortunately, there's a tool that can greatly simplify the search for the command for a specific symbol. Draw border. By joining our community you will have the ability to post topics, receive our newsletter, use the advanced search, subscribe to threads and access many other special features. A first approach. Export (png, jpg, gif, svg, pdf) and save & share. h (Here) - at the position in the text where the table environment appears. It is advised that you use lyx from the beginning rather than import a half-done latex file. The argument where specifies the allowed locations for the table. ↓ has to be loaded in the LaTeX-preamble with the line [R] [R] When text is colored somewhere in the document with a predefined color, LyX loads the LaTeX-package color automatically. The align* version is the same as align except that it doesn't produce equation numbers. LaTeX is the de facto standard for the communication and publication of scientific documents. You can add negative as well as positive space with an \hspace command. First table is only 100 pixel width in any changes in browser window state, while other table will always stretch the full width of the window it is viewed in, that is the table automatically expands as the user changes the window size when you set width in %. I've generally determined that I want to set the [fleqn] option in the amsmath package to left a. Basic horizontal alignment options, like left, right or centered, are under the Home menu. should you make a real page, you would separate the css from html, by putting a link elements in. Make the first table a 2-column table and the second a 4-column table. Here are some of the other commands that are available in this environment: \+ Causes left margin of subsequent lines to be indented one tab stop to the right, just as if a \> command were added to the beginning of subsequent lines. That has saved me a lot of time, but setting everything up took a. latex documentation: Coloring Table. Latex Table Editor Columns: Rows: | Borders: No borders Borders around table Vertical borders Borders around cells | First row is the header | Align Columns | Clear Table. Tables can be created on Wikipedia pages using special wikitext syntax, and many different styles and tricks can be used to customise them. Adding negative space is like backspacing. The outer pipes can also be left out. , points, inches, etc. table-chrome. Latex provides a huge number of different arrow symbols. Controlling figure and table placement in LaTeX. Step-By-Step Ideas. Tables in LaTeX can be created through a combination of the table environment and the tabular environment. I believe these buttons were available in Word 1997 and 2000. We have already seen in the last section that it is often useful to enclose a tabular environment in a table environment. LaTeX Code-Snippet for tables,horizontal-alignment,vertical-alignment,rotating. Is there a way to specify in an org-mode document that tables should use a particular format/structure (maybe in an external. ) for each column separately. Skip ahead to live broadcast. recommended this. There are four places where LaTeX can possibly put a float: h Here - at the position in the text where the table environment appears. dvi , which is the. By default Latex justifies all your text so that it lines up on both the left and right margins. frame objects produce a table of the entire data frame. Center-Aligned Figures in LaTeX Tables. When used in table cells, vertical-align does what most people expect it to, which is mimic the (old, deprecated) valign attribute. Mostly the binary operators (=, > and ) are the ones which are aligned in order to make document beautiful. [code]\documentclass[fleqn]{article} [/code]This will left align all the equation in the article. This could lead to extra wide tables or to a bad looking if the width differs between the rows. table-chrome. The word command may sound scary. When scrolling is enabled in DataTables through the use of the scrollX or scrollY parameters, it will split the table into two or three individual HTML table elements; the header, the body and, optionally, the footer. \begin{align} \phi & = a + b + c. Enclosing the table environment by an equation environment (or vice versa) doesn't work either. How do I align things in the center or increase the margin sizes so that stuff doesn't look so tight to the left? Note, I'm not talking about using the align argument to xtable(). Draw border. ) Arrays are very flexible, and can be used for many purposes, but we shall focus on matrices. Side-by-Side Tables in Rmarkdown and Latex. This is a json converter that provides the ability to convert excel, csv, markdown table, and html table. Width of columns determined automatically. If format is a function, it must return a character string. Vertical alignment of graphics. t - align on top row b - align on bottom row cols Specifies the column formatting. Tables can be used as formatting instrument, but consider using a multi column list instead. Caution: This Product Contains Natural Rubber Latex Which May Cause Allergic Reactions. The document will not look anything like the goal document yet, but don't worry about that at this time. centering a table > footnote within a table fixed width How to create a table in LaTeX with fixed width? generate LaTeX tables with fixed width online Problem If the default tabular environment is used, every colunm get the width they need. Note: vertical lines are just for checking the alignment - generally they are not recommendable. No CC Required. Addition-ally, the package allows de ning command se-quences to be executed before or after the con-tents of a column. First table is only 100 pixel width in any changes in browser window state, while other table will always stretch the full width of the window it is viewed in, that is the table automatically expands as the user changes the window size when you set width in %. If I copy the code into a {article} class,. The behavior of the following browsers was noted: Microsoft Internet Explorer v. The outer pipes can also be left out. t - align on top row b - align on bottom row cols Specifies the column formatting. Tabbing Power similar to tabular. They should have the same intent with "Page" text in Table of Contents. Vertical alignment within cells: \ documentclass {article} \ usepackage {array} \begin {document} \begin {tabular}{@ {} lp {1. surroundSpace. The optional argument [placement] determines where LaTeX will try to place your table. Then right-click and go to Table properties > Cell vertical alignment and change the alignment to "Top. vertical-align in table cells. If you want to change the way data appears in a cell, you can rotate the font angle, or change the data alignment. This is an online LaTex table generator and editor, which can generate LaTex tables from excel, csv, json, html table, markdown table, etc. So I decide to put it here for a little reminder. This package simplifies the way to manipulate the HTML or 'LaTeX' codes generated by 'kable ()' and allows users to construct complex tables and customize styles. should you make a real page, you would separate the css from html, by putting a link elements in. This is a list of patterns, which can be used to filter out (or ignore) some or the warnings and errors reported by the compiler. Note that the LaTeX export is not 100% guaranteed. If format is a function, it must return a character string. It can be frustrating trying to get your figures and tables to appear where you want them in a LaTeX document. First we must take a quick look at LaTeX syntax. Ommit last line of selection if empty. Hi Team, I have a table in the dashboard, wherein i want first column to be left aligned and rest all the columns to be center aligned as shown in the below image. Besides specifying label and caption (see Images and tables), the other valid LaTeX attributes include: :mode The LaTeX export back-end wraps the table differently depending on the mode for accurate rendering of math symbols. The title of the section should be surrounded by braces and placed immediately after the relevant command. A frequently asked question is how to get top alignment, like here on mrunix. A Google search revealed the LaTeX Wikibook, which suggests a few methods to force figures to vertically align according to their center. So, I always look for this when I need it. This is done by adding l’s (align left), c’s (align center) and r’s (align right) in addition to |’s (vertical bar for cell separation) as the argument of the command tabular. The TeX macro for horizontal alignment (ie, tables) is less easy to use than the LaTeX tabular environment, but it is correspondingly more powerful. eqnarray vs. If you want to avoid that just use \centering instead like in this example: The additional space of the center environment is caused by a trivlist environment. Editable cells in a grid are indicated with a pencil icon which disappears once in edit mode. Occasionally you may want to have text right-aligned in a LaTeX document. Build complex HTML or 'LaTeX' tables using 'kable()' from 'knitr' and the piping syntax from 'magrittr'. Maybe it would be a good idea to align by default the table content to the left. the various ‘chunks’ of a table align. Hi, I want to change the alignment in Jupyter notebooks. This is just indented with indent-region. To align a column to the left use l in place of c, and to align it to the right use r. I have a latex table which contains some long commands like includegraphics or href which makes the table very nasty to read. Floats are containers for things in a document that cannot be broken over a page. LaTeX is an editing tool that takes care of the format so you only have to worry about the contents of your document. You can then type latex paper. Original Poster. Latex Tutorial 3 of 11: Table of Contents and Front Matter - Duration: 12:04. Below is an example of LaTeX source code that creates a table showing the effects of the basic font-size change declarations (Note how they are used). The Best Make Whole Table Align Top Latex Free Download PDF And Video. I have a latex table which contains some long commands like includegraphics or href which makes the table very nasty to read. Click an Align button (you may have to click the Alignment button first, depending on the size of your screen). The following color names are predefined:. A basic article class document has figure and subfigure captions that look like this:. It can be added with the command \\underbrace{text}, or for the added number under the underbrace, \\underbrace{text}_{text}. An R object, typically a matrix or data frame. , somewhere % in the TEXINPUTS path. l - A column of left-aligned items. Platinum Product Expert. LaTeX is an editing tool that takes care of the format so you only have to worry about the contents of your document. This package simplifies the way to manipulate the HTML or 'LaTeX' codes generated by 'kable()' and allows users to construct complex tables and customize styles using a readable syntax. Add context menu option. Vertical alignment of graphics. If you want to avoid that just use \centering instead like in this example: The additional space of the center environment is caused by a trivlist environment. vertical-align in table cells. \documentclass{article} %\addtolength{\textheight}{+. table-chrome. one is liable to encounter errors that complain about a "misplaced \noalign" or "extra alignment tab", or the like. align There's a lot of freely available documentation for LaTeX, but there's a pitfall: some documents that are still online are outdated and therefore contain obsolete information. In addition, if it's possible to adjust only the vertical alignment, I'd like to add the vertical alignment buttons to my ribbon. After a few questions, the code will be generated in another buffer. tabularx typesets a table with xed widths, intro-. 8 4 108 93 3. By a chance I found a small piece of LaTeX code that allows horizontal alignment in matrix-environments of the package "amsmath" (bmatrix, pmatrix, etc. Using the align environment in LaTeX. If you change something in your do-file (e. Ommit last line of selection if empty. It works by setting tab stops and tabbing to them much the way you do with an ordinary typewriter. Below, I surround each \includegraphics{} command with the \parbox{} command, which centers it along 1 unit of measurement, set to 12 pts. LaTeX Code-Snippet for multirow,multicolumn,tabu. Alignment many chars in line to table. Registration is quick, simple and absolutely free. Basic horizontal alignment options, like left, right or centered, are under the Home menu. The Comprehensive LATEX Symbol List Scott Pakin ∗ 19 January 2017 Abstract This document lists 14283 symbols and the corresponding LATEX commands that produce them. Tabbing Power similar to tabular. Finding Other Symbols. table Roman Tsegelskyi, Gergely Daróczi 2016-05-13. T he captions for figures, tables, subfigures and subtables in LaTeX can be customized in various ways using the caption and subcaption packages. Several LaTeX commands can be used to improve the format of a LaTex table. table-striped>tbody>tr. The name "AMS-LaTeX" is used to mean "LaTeX with AMS extensions". From left to right, and top to bottom, the buttons let you align text to the right, and top, center and top, and left and top. Most of the Latex editors provide a macro, which is usually helpful as it directly inserts the basic stub usually including a few rows and columns of the table. The \caption and \label commands can be used in. This is the default setting for ALIGN. Board index LaTeX Templates Theses, Books, beautiful template, thx!! I have an issue with my tables though. People love Align Probiotics. LaTeX provides this functionality with the \hfill keyword. I will give a small example code to create a table of contents first: After compiling the. To make the table float, simply include it in a table environment or a figure environment. However, I always keep forgetting it. adjust() method when table becomes visible to recalculate column widths. Please avoid the discussion page (use the Comments section below for notes). The div must have a width specified, and use auto left and right margins. For narrow tables it is sometimes more pleasing to make them wider. b Bottom - at the bottom of a text page. Align Bulleted Text Vertically in Its Text Box. After a few questions, the code will be generated in another buffer. Tables can be created on Wikipedia pages using special wikitext syntax, and many different styles and tricks can be used to customise them. h (Here) - at the position in the text where the table environment appears. LEFT Flush left with the left text margin. align and align* The align and align* environments, available through the amsmath package, are used for arranging equations of multiple lines. Please, compose a theme for your table by selecting color, border and stripes themes shown below. There are four places where LaTeX can possibly put a float: h Here - at the position in the text where the table environment appears. Document history. Posted on 24/11/2017 by markiewb Tagged latex tabular CommentsNo Comments on Latex: Alignment of a fixed-width column in the tabular-environment Latex: Alignment of a fixed-width column in the tabular-environment. To insert a figure in a LaTeX document, you write lines like this: \begin{figure} \centering \includegraphics[width=3. One easy solution is to override this by adding \renewcommand { umberline} [1] {#1~} to the preamble. June 2019 by tom Leave a Comment. LaTeX provides this functionality with the \hfill keyword. 1 specification, table layout in general is usually a matter of taste and will vary depending on design choices. 10 LaTeX Export. If you change something in your do-file (e. Note that math mode ignores whitespace, in fact, this whole code could have been put on one line and still would have compiled correctly. I used custom CSS as below. I want column Field1 all its data to be left aligned whereas other field(2-6) and its data to be center aligned. surroundSpace : { "colon" : [0, 1], // The first number specify how much space to add to the left, can be negative. If coding tables by hand seems tedious, you can also cheat by converting Excel spreadsheets to LaTeX tables. Example 2:. To have a paragraph style text and vertical alignment both at the same time, use ">{\centering\arraybackslash}m{3cm}" to specify cell format at the top and use \multicolumn{1}{m}{Text} in the table entry:. Then, click on. Org-mode latex table alignment. The Comprehensive LATEX Symbol List Scott Pakin ∗ 19 January 2017 Abstract This document lists 14283 symbols and the corresponding LATEX commands that produce them. A more complex HTML table may also include , , , , , and elements. Vertical alignment within cells: \ documentclass {article} \ usepackage {array} \begin {document} \begin {tabular}{@ {} lp {1. T he captions for figures, tables, subfigures and subtables in LaTeX can be customized in various ways using the caption and subcaption packages. Work in progress. In order to center the equation number, surround the array environment with an equation environment \begin{align} \dfrac{\partial^2 T \left( x, t. Latex-Suite does this via the global variable g:Tex_IgnoredWarnings. The Table Editor is running in your browser, and does not save anything on our servers. Another thing to notice is the effect of the \displaystyle command. It can be added with the command \\underbrace{text}, or for the added number under the underbrace, \\underbrace{text}_{text}. The tabbing environment provides a way to align text in columns. Re: Number Alignment in Data Table Post by hugovdberg » Wed Feb 05, 2014 7:06 pm That third column is actually centered on the decimal point, I'm not sure what you want it to be centered on, but you might want to take a look at the siunitx manual, section 5. Free Download. Splitting and aligning an equation. If your table don't take all available space and you want to put text next or before it, is possible with the package wrapfig. The Latex. Does anyone know the proper way to align a very wide latex table with org mode? I've tried playing with different options in #+ATTR_LATEX like :center and :width but nothing works quite right. Online LaTeX editor with autocompletion, highlighting and 400 math symbols. el+rectangular selection Tables in LaTeX are sometimes somewhat difficult to manage; in particular doing anything with columns---in contrast to most word processors, which allow for the manipulation of columns in certain environments (e. On the right panel, click on Border editor to expand the section. Creating LATEX Arrays, Tables, and Figures B. To align the text in your tables at the top of the cells, drag your cursor through the entire table so all cells are selected. To create colored tables, we need the LaTeX-package colortbl, that is loaded in the preamble with the line \usepackage{colortbl} The color of a column is adjusted with the command \columncolor{name of color} inside the command >{ }. The Best Make Table Align Top Latex Free Download PDF And Video. The default alignment for images and tables is set to left. Welcome to LinuxQuestions. LaTeX source PDF view Whole example. LaTeX doesn't have a specific matrix command to use. Microsoft Word 2013 provides multiple ways to align text inside a document, but these options are not grouped in a single menu or area of the interface. If you want to control the column width of a table, you may want to use p option instead of l , c or r for left, center or right alignment. How do I align things in the center or increase the margin sizes so that stuff doesn't look so tight to the left? Note, I'm not talking about using the align argument to xtable(). Let's examine the contents of a simple LaTeX file which has been used as a first example in this tutorial. This works well as long as the content in each cell is short and of similar length. For example, this WikiTeX article can be converted to LaTeX source without issues using the "Actions->LaTeX Export" top-right menu. Changed category to Formatters. If you DO want line numbering, but do NOT want a specific line numbered, type \nonumber after the. Another option would be to look in "The Comprehensive LaTeX Symbol List" in the external links section below. LaTeX Code-Snippet for tables,horizontal-alignment,vertical-alignment,rotating. Basic Idea and general commands []. Google Docs, LibreOffice Calc, webpage) and paste it into our editor -- click a cell and press Ctrl+V. The different part is that, this time you don't need to pipe kable outputs to another function. #+BEGIN_SRC emacs-lisp :results output silent (org-table-map-tables 'org-table-align) #+END_SRC This block applies in fact what is used in the selected answer. 3 posts • Page 1 of 1. To make the table float, simply include it in a table environment or a figure environment. Then you also have to provide the width of the column, for example you could use \begin{tabular}{cp{5cm}}. Since the vertical-align property works with table cells we set the parent div to be a css table and we set the child div as a table cell. The float environments are figure and table. LaTeX forum ⇒ Graphics, Figures & Tables ⇒ Vertical Alignment for Table Cell Content. LaTeX uses a defined space for entries in the table of contents, which means that long chapter or section numbers sometimes overlap the title. As with matrices and tables, \\ specifies a line break, and & is used to indicate the point at which the lines should be aligned. subcaption. Use fleqn as an option in the document class. Makes it easier to work with tables in Latex. Other times you may want a block of left-aligned text next to a block of right-aligned text. The Alignment toolbox has nine buttons for aligning text in a table in Microsoft Word. For examples of centering other block-level elements such as paragraphs, see the "Centering blocks" page. Make Whole Table Align Top Latex: A claw hammer, wood chisel set, a hand saw, a miter box with a saw (for cutting angles), a coping saw, finish punches, flat and straight tip screwdrivers, a rubber mallet (for tapping pieces together while not damaging the wood), woodworking clamps, a wood vise, a bench plane, a rasp, a tape measure, a 12 steel rule, a 6 steel square, and dont forget the wood. This usually works fine if the minipages should be bottom-aligned. Google Docs, LibreOffice Calc, webpage) and paste it into our editor -- click a cell and press Ctrl+V. l - A column of left-aligned items. Also, patients with spine bifida tend to be more at-risk for latex allergy. \begin {document} View which changes have been added and removed. align There's a lot of freely available documentation for LaTeX, but there's a pitfall: some documents that are still online are outdated and therefore contain obsolete information. I will give a small example code to create a table of contents first: After compiling the. How to align text in columns in LaTeX. A frequently seen mistake is to use \begin {center} … \end {center} inside a figure or table environment. Look for "Detexify" in the external links section below. These are the traditional values for text-align: left - The default value. LaTeX source View 1 View 2 View 3 Whole example. The following figure shows where the Align buttons are on the (Table Tools) Layout tab and how these options align text in a table. Word offers nine ways to align text. I use estout to generate tables of summary statistics and regression results that can be easily imported into LaTeX. LaTeX is a high-quality typesetting system; it includes features designed for the production of technical and scientific documentation. If the tip contains good advice for current Vim, remove the {{review}} line. and, as I wrote before, it will put the content of a cell in differents rows of the table automatically if the size of the text is two big. A basic article class document has figure and subfigure captions that look like this:. 0 (MSIE7), Opera v. Look for "Detexify" in the external links section below. LaTeX Counters Everything LaTeX numbers for you has a counter associated with it. If you change something in your do-file (e. Have you tried to align decimal in LaTex before? Did you do like t…. This could lead to extra wide tables or to a bad looking if the width differs between the rows. With no options, a very wide table goes off the page on the right. Alexander Baran-Harper 106,375 views. On the right panel, click on Border editor to expand the section. 1 How to input the code L A TEX does not require the columns (i. Work in progress. A much-demanded addition to version 4. To be able to use all commands explained in this section, the LaTeX-package color [Q] [Q] The LaTeX-package color is part of every LaTeX standard installation. Make Whole Table Align Top Latex: A claw hammer, wood chisel set, a hand saw, a miter box with a saw (for cutting angles), a coping saw, finish punches, flat and straight tip screwdrivers, a rubber mallet (for tapping pieces together while not damaging the wood), woodworking clamps, a wood vise, a bench plane, a rasp, a tape measure, a 12 steel rule, a 6 steel square, and dont forget the wood. Warning: Possible column misalignment. It is advised that you use lyx from the beginning rather than import a half-done latex file. If a table is hidden when initialized the height / width of the table elements will be unavailable. Export (png, jpg, gif, svg, pdf) and save & share. This usually works fine if the minipages should be bottom-aligned. The align environment is a souped up version of the equation environment. \begin{align} \phi & = a + b + c. Let me start with the table-stub and later on explain how to fill in the actual table content. COMPLETE Shopify Tutorial For Beginners 2020 - How To Create A Profitable. As with matrices and tables, \\ specifies a line break, and & is used to indicate the point at which the lines should be aligned. Rivera Simple and Fancy Table in LaTeX (Latex Tutorial Adjusting Brackets, Matrices and Align Environment - Duration: 14:09. LaTeX is the de facto standard for the communication and publication of scientific documents. Use fleqn as an option in the document class. Vertical alignment of graphics. The problem occured after I changed the left margin of the page from 1. Content aligns along the left side. ST2 support is deprecated but however, it is still possible to install AlignTab on ST2 via Package Control. de and here on matheplanet. 1Introduction This document gives a gallery of tables which can be made using the xtable package to create LATEX output. 8-4 Date 2019-04-08 Title Export Tables to LaTeX or HTML Maintainer David Scott. The specific way each feature is presented and the material covered in these sites are the best reason for downloading Make Table Align Top Latex. With no options, a very wide table goes off the page on the right. It can be added with the command \\underbrace{text}, or for the added number under the underbrace, \\underbrace{text}_{text}. vertical-align in table cells. Customized table cell format. It does not allow row or cell spanning as well as putting multi-line text in a cell. the various ‘chunks’ of a table align. A character string. So I decide to put it here for a little reminder. This center environment can cause additional vertical space. tags in a wiki. In order to vertically align the top of the image with the top of the text in each table row, the image baseline must be adjusted. You can add negative as well as positive space with an \hspace command. D Pu uu β G f (in) (lbs) (in) (psi·in) 5 269. LaTeX doesn't have a specific matrix command to use. drmath Posts: 2 Joined: Wed Jun 08, 2011 1:58 am. The align environment is a souped up version of the equation environment. Table 2: Table in agreement of the general typeset rules. 16000 Woodworking Plans Get Make Whole Table Align Top Latex: Get Free & Instant Access To Over 150 Highly Detailed Woodworking Project Plans. digits: Maximum number of digits for numeric columns, passed. This referencing capability lets you easily give readers the exact number of a figure, or tell them what page number a figure is located on with the use of a few simple. Possible values are latex , html, markdown, pandoc, and rst; this will be automatically determined if the function is called within knitr; it can also be set in the global option knitr. You have to tell Latex in the beginning how many columns you will be using. The Comprehensive LATEX Symbol List Scott Pakin ∗ 19 January 2017 Abstract This document lists 14283 symbols and the corresponding LATEX commands that produce them. Example 2:. A Google search revealed the LaTeX Wikibook, which suggests a few methods to force figures to vertically align according to their center. Use the align environment in order to print the equation with the line number. Sometimes, they just seem to float off onto another page of their own accord. A frequently seen mistake is to use \begin {center} … \end {center} inside a figure or table environment. Tables in LaTeX can be created through a combination of the table environment and the tabular environment. LaTeX is an editing tool that takes care of the format so you only have to worry about the contents of your document. 04089 20 640. (That is, if the cell is a box, I want there to be white space above and below the text, if there is some other cell with more text. The underbrace symbol will put a flipped { sign under the bracketed text. There are at least three options for tables with aligned decimal points. Pandoc can convert between numerous markup and word processing formats, including, but not limited to, various flavors of Markdown, HTML, LaTeX and Word docx. To insert a figure in a LaTeX document, you write lines like this: \begin{figure} \centering \includegraphics[width=3. Added spaces to make the content fit nicely in the code can also be omitted. Latex Center Table On Page Vertically. Here is how LaTeX typesets the above source file. Your data is just as safe as anything else on your computer. Floats are there to deal with the problem of the object that won't fit on the present page, and to help when you really don't want the object here just now. That has saved me a lot of time, but setting everything up took a. It consists of a sequence of the following specifiers, corresponding to the sequence of columns and intercolumn material. % % Place this file in a directory accessible to LaTeX (i. Fixed alignment to 2 chars string. table-striped>tbody>tr. 320 Hornet4Drive 21. D Pu uu β G f (in) (lbs) (in) (psi·in) 5 269. 875 Datsun710 22. A frequently seen mistake is to use \begin {center} … \end {center} inside a figure or table environment. Possible values are latex, html, markdown, pandoc, and rst; this will be automatically determined if the function is called within knitr; it can also be set in the global option knitr. The array environment is basically equivalent to the table environment (see tutorial 4 to refresh your mind on table syntax. Work in progress. It is always good practise to add a caption to any figure or table. As with matrices and tables, \\ specifies a line break, and & is used to indicate the point at which the lines should be aligned. This works well as long as the content in each cell is short and of similar length. In a text box in Word, you can align text horizontally or vertically, and you can adjust the margins to be narrower or wider. the tabular* environment Basic LaTeX can make the gaps stretch: the tabular* environment takes an extra argument (before the clpr layout one) which takes a length. The rest of the page extends beyond the header line to the right or left. I use estout to generate tables of summary statistics and regression results that can be easily imported into LaTeX. This center environment can cause additional vertical space. To align the text in your tables at the top of the cells, drag your cursor through the entire table so all cells are selected. Click "Generate" button to see the. The same problem occurred for "List of Tables", and "List of Figures" pages too, but they became normal (right aligned) after I updated the content of them. How do I align things in the center or increase the margin sizes so that stuff doesn't look so tight to the left? Note, I'm not talking about using the align argument to xtable(). Creating a simple table in LaTeX. A basic article class document has figure and subfigure captions that look like this:. Original Poster.
|
2020-09-18 12:46:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8798063397407532, "perplexity": 2196.908517546522}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187899.11/warc/CC-MAIN-20200918124116-20200918154116-00094.warc.gz"}
|
https://www.infoq.com/news/2008/01/helander-on-domain-model-mgmt/
|
# Article: Take care of your domain model
| by Niclas Nilsson 0 Followers on Jan 03, 2008. Estimated reading time: 2 minutes |
Today, many projects focus on Domain-Driven Design, but it is not always easy. One of the most important things are to separate the domain code from the code that only exists for technical reasons.
Mats Helander has written an InfoQ article about how to manage domain models with a concept he calls Domain Model Management. In the article, Mats leads the reader step by step though common problems regarding design and separation of concerns when trying to implement a domain model, explains ways to solve the problems and manages to teach aspect oriented-programming, a couple of design patterns and some things about object/relational mappers in the process.
In this excerpt from the article introduction, Mats reasons about where to put the infrastructure code:
As the amount of infrastructure code grows, finding a good architecture for dealing with it all becomes increasingly important. A big part of the question is - are we allowed to put some infrastructure code in our domain model classes or is that something which should be avoided at all cost?
The argument for avoiding infrastructure code in the domain model classes is strong: The domain model is supposed to represent the core business concepts that our application is dealing with. Keeping these classes clean, lightweight and maintainable is an excellent architectural goal for an application that intends to make heavy use of its domain model.
On the other hand, as we will go on to see, taking the extremist route of keeping our domain model classes completely free of infrastructure code - often referred to as using a Plain Old Java/CLR Objects (POJO/POCO) domain model - can also prove problematic. It will often result in clunky, less efficient workarounds - and some features just won’t be possible to implement at all that way.
This seems to indicate, as is so often the case, that we have a trade-off situation on our hands where we should try to put only just as much infrastructure code in our domain model classes as absolutely needed, but no more. We trade off some bloat in the domain model for some improved efficiency or enablement of some required Domain Model Management feature that wouldn’t work otherwise. Negotiating good trade-offs is, after all, a large part of what software architecture is all about.
Lean back and enjoy the full article about Aspects of Domain Model Management, and the accompanying source code.
Style
## Hello stranger!
You need to Register an InfoQ account or or login to post comments. But there's so much more behind being registered.
## Get the most out of the InfoQ experience.
### Tell us what you think
Allowed html: a,b,br,blockquote,i,li,pre,u,ul,p
Email me replies to any of my messages in this thread
Bugs in article
Even though I proof-read the article several times before submission, I'm still able to spot a few bugs in it now. I will continue to update a blog post for this article at the following address with bugs as I find them:
www.matshelander.com/wordpress/?p=80
I will still list the bugs I have found so far at the end of this comment.
Also, while this may be my mistake, I don't see the link to the code? At any rate, you can download the Visual Studio project with the code for the article here:
www.matshelander.com/infoq/Infoq.AspectsOfDMM.zip
Thanks,
/Mats
List of Bugs:
List 2
The Persistence component should not contain any dirty flags.
Figure 6 (text above)
The text states that the person class inherits a dirty flag it doesn’t need, but it should be that it inherits a lazy flag that it doesn’t need.
Figure 7, 10, 11, 12
The EmployeeProxy class should have getName and setName methods (just like the PersonProxy class) overriding the ones on the Employee class in order to provide the interception.
List 4
The first property in the EmployeeProxy class misses its name. Currenlty it just says “public override string”, it should be “public override string Name”.
List 5
In the setter method for the Salary property in the EmployeeProxy class, the call to the OnPropertySet method on the dirtyInterceptor sends “this.name” as the second parameter – it should be “this.salary”.
All the calls to the lazyInterceptor send along a second parameter with the property name – but the interceptor method only accepts one parameter (and doesn’t need a second one). Thus all the calls to the lazyInterceptor should be changed to: “lazyInterceptor.OnPropertyGetSet(this);”
List 7
The Person class should no longer inherit from the DomainBase class (which has been deleted).
Nice article !
Hi Mats,
Very nice article, describing well a way to get a clean OO design, and introducing how AOP can help !
I have spotted some mistakes (no big deal, the article is easy to read anyway), concerning list 5 code not completely faithful to figure 11.
Figure 11 / List 5
Well as I said, no big deal, just some "lazy" mistakes ;o)
Regards,
Joel
Re: Nice article !
Hi Joel,
Thanks for the kind words and for the bug spottings! :-)
/Mats
Re: Bugs in article
Hi Mats
i've added a link to the code at the end of the article.
diana
Great article!
Mats:
This was one of the most useful articles I've come across in a while. I've circulated it amongst my peers. There is a real need for people like you who can communicate these critical ideas so concisely and clearly. In our shop we basically live and breathe Evans, Fowler and Nilsson. Looks like we'll be adding Helander to that list!
Happy New Year!
Steve
Re: Great article!
Wow...thanks Steve! I really don't know what to reply to such kind words, I'm all embarrased now :-P
Reading your comment certainly made a great start on my new year! A very Happy New Year to you as well! :-)
/Mats
Good article
Hi Mats,
The article [imho] is about 4 times longer than it should be based on the content. I also couldn't help thinking "omg, this guy's crazy" at least 5 times during reading the article (especially around introducing subclass hell with code examples and claiming it's a good approach). If the article's title or introduction warned me about showing evolutionary thinking to introduce AOP, you could have spare this :) My bet would be that many readers left in the middle of the article thinking you have no idea at all about OO design.. Anyway, nice work indeed!
Well Done!
Thanks for taking the time to write this great article, which lays a good foundation for AOP concepts while showing the problem it solves.
One comment: In the real world the Dirty and Lazy mixins would almost certainly need to be thread safe.
--
Perry
Re: Good article
Hi Kristof,
Thanks for the feedback,
Sorry I wasted your time...I assure you that if I knew how to write more succinctly, I would! ;-) But even though the title perhaps didn't reflect this, I think I did explain in the introduction what would come (a refactoring to aspects)? But you are certainly right that it's a long text and if I didn't do a good enough job of giving the reader accurate expectations of the content from the introduction, then that is indeed a severe issue, so thanks for pointing it out!
I googled "subclass hell" to see if this was a widespread opinion with its own term, but only got one hit with that usage. I'm not sure I know what you mean by the term but I assume you mean "getting an explosion of subclasses" kind of thing? Again, as was the point of the article, this is only an issue (if even then) should you decide to code your subclasses manually, which is not necessary at all - they can be easily generated at design time or runtime. But I would love some more info on this perspective - for example, if a bunch of subclasses is considered more "hellish" than an obese domain model (bloated domain classes) or having to use inefficient workarounds such as using original values to get dirty status - and if so, why? Do you have any links to discussions attacking the architecture I suggest on the basis of the proliferation of subclasses?
"My bet would be that many readers left in the middle of the article thinking you have no idea at all about OO design"
Given your objections above ^^ (subclassing and some other instances of thinking I'm crazy (that I'd love to know more about as well)) I didn't quite understand if this was your opinion as well (that I have no idea about OOD) or if you think the suggestions I make in the article aren't actually crazy - only somewhat poorly explained?
/Mats
Re: Well Done!
Hi Perry,
Thanks for the nice words. :-)
You're absolutely right about the thread safety of course and I should have pointed it out in the article - great point, thanks!
/Mats
Re: Good article
Hi Mats,
you didn't waste my time, I just thought this content can be explained in a more compact form :) I don't think 'subclass hell' is a wide-spread known term but I guess you got what I meant: the sperodic grow of number of subclasses around a project without a business reason. Generally I believe good OO design mostly prefers composition in favour of subclassing. (I guess it's worth to mention that by 'subclassing' I mean the 'manual' way, subclasses generated by an AOP framework wouldn't count this way (it's rather a technical solution not subclass by design). At last, after finishing reading the article, I *do* think you understand OOD.. sorry if I wasn't clear about that.
Kristof
Excellent article!
Hi Mats,
Really nice article that pulls together a lot of the OO patterns cleanly. This is certainly one of the better coverages of AOP that I have seen. It certainly de-mystifies AOP and makes it attainable for the masses.
Nicely done!
Intercepting Domain Objects
Nice article Mats,
IHMO it is more academic then applicable for real applications. Many java enterprise applications apply technologies like Spring and Hibernate that makes it rather difficult to maintain and test Domain Objects that have been instrumented with proxies or AOP. You might not be able even to plug your AbstractFactory implementation into the persistence layer.
Although the DOs can be instrumented by AOP this looks like breaking their light-weight nature and making the testing dependent of heavy frameworks.
Much content, little sense
This article starts by describing how to re-write hibernate and then Spring.
Important challenges for domain-driven design are design-related, not technical.
Persistence is never part of domain-driven design. A domain model solves a specific problem in the problem domain. How state is loaded from and saved to the database is not part of that problem.
In the end, persistence is a sub-problem of how to convert between different domain models in the same system. The domain model used for persistence is hopefully a different one from that used to solve hard domain problems.
Why not make use of context maps?
Erik Evans says: not everything is a large system will be well-designed, so let's try to create a good design for at least the core.
Introducing ORM invariably hurts the design of your domain model and those classes that are affected should best be considered as part of the not-well-designed-part of a system. It's up to each one of us to figure out ways to cope with that.
It would assume however that AOP would seldom be a solution. AOP typically adds behavior where it's not yet available. By modifying your core domain classes with AOP you run the risk of moving from the well-designed area to a less-well-designed area.
Is that what you want? Is persistence so critical that it's allowed to hurt the most fundamental domain classes?
Good and clear !
Hi Mats,
Very nice and clear ! I don't think the article is too long, because you explain in details the concepts and problems so clearly, that it's very nice and easy to read.
Just one question : I have tested "PostSharp Laos" AOP solution, and it's very easy to use (especially for people who begins with AOP). Have you already used it ?
Thanks !
Nicolas
Re: Good article
Hi Kristof,
"Generally I believe good OO design mostly prefers composition in favour of subclassing"
I couldn't agree more! In fact that position is pretty much the underlying premise of the whole article! We refactor from using inheritance where reusable behavior is placed in a common base class to using composition where the behavior is placed in mixins. The subclasses are just the "glue" - their only purpose is to implement the composition! And the only reason we put this composition code in a subclass is so that we can keep the domain class free of infrastructure. We /could/ skip the subclass and put the composition code directly in the domain class, in which case it is really clear that the core message of this article is to use composition over inheritance. Taking that message "all the way", in my opinion, leads to AOP.
Hmmm. I guess you were right I could have explained that in a more compact way than in the article. Seems about seven sentences could have cut it ;-)
Thanks again for the feedback! :-)
/Mats
Re: Excellent article!
Aslam, Christian and Nicolas - thanks guys! :-)
Christian,
Yes you have a very good point - it ties in a bit to my reply to Steven, but in essence what you address is the next thing I would have turned to in my article if it haden't already been way too long...in fact I wrote a first version of the article covering /only/ that, but it became hard too understand for anyone not already intimately familiar with the subject. Starting to write about the necessary background for understanding that article, I ended up with the current article.
Anyway, without repeating the whole first version of the article in the comments, my opinion is that this is a really interesting topic that will become increasingly important in the future: Which component gets to control object creation and configuration? And furthermore - could components share a set of common infrastructural aspects? Could, say, Hibernate use some standard lazy loading aspect from an aspect library instead of providing its own? I will be returning to this topic in my blog shortly, if you're interested in more of my ramblings on the subject. ;-)
Nicolas,
Nope I haven't tried PostSharp Laos but I will now! :-)
/Mats
Re: Much content, little sense
Thank you, Steven Devijver. You are absolutely correct. The article and code ignores the value of a layered architecture and dependency inversion. This article is design pattern code smell as described in www.relevancellc.com/2007/5/17/design-patterns-...
The author has made the design pattern into a recipe.
Re: Much content, little sense by Aidan O
Nothing in the article comes close to rewriting any part of Hibernate! Also, it's written in a fairly language/platform agnostic way, so while reading it I assumed I'd be using Spring in my own implementation for the AOP related stuff.
A very well written article IMO.
Java Listings available
I put together a really quick set of Java equivalent listings of the C# lists in the article. For AOP, I used SpringFramework 2.5 and AspectJ. You can download the zip from my blog.
Use the sources at your own risk. It compiles cleanly but I have not proofed it for correctness. The intention is just to give a view from Java-land within the perspective of this article. The classes and interfaces for each listing are contained in their own package, viz. infoq.dmm.list1, ..list2, etc.
/Aslam
Re: Java Listings available
Hi Aslam,
The Domain Objects being advised (e.g. Person, Employee) are not Spring beans created though an application context. Instead the instances are created at run-time through the AbstractFactory's. Therefore the tricky part with the above java translation is that it requires a full-blown AspectJ through the Load-time weaving with AspectJ in the Spring .
Although valid this approach violates the simple POJO development style. You are no longer able to write simple JUnit tests because your domain objects depend on Spring, ApsectJ and LTW.
/Chris
Re: Much content, little sense
Hi Steven,
Thank you so much for your excellent feedback! I will try to address your points as best I can:
"Important challenges for domain-driven design are design-related, not technical."
Completely agreed - as far as it concerns the design of the domain model, all design desicions should be motivated by the shape of the domain, not by infrastructural concerns.
That's just what this article is about - finding ways of moving all the pesky, inevitable infrastructure out of the domain model classes so that they can remain clean and domain focused. This article looks at how to find an architecture that will help deal with the infrastructure - it does not go into the topic of how to design your domain models to make them appropriately represent the domain (more so than noting that it will probably be easier for you to do that activity if the domain model design can be kept free from infrastructural concerns).
"Persistence is never part of domain-driven design."
Agreed - persistence should not influence the design of the domain model.
But my focus is not on persistence. In fact, the article tries to take the opposite perspective.
While today many may only use aspects on their domain model unwittingly if their O/R Mapper uses aspects under the hood to provide features such as lazy loading (the way for example N/Hibernate and NPersist does) the main point of the first version of this article (that I mentioned in my reply to Christian) was that this shouldn't be a necessary state of affairs. Indeed, many of the aspects an O/R Mapper might apply (such as inverse property synchronization) could make sense to use even if the domain model didn't have to be persisted at all (something I also touch more upon in the blog post about DMM, linked above at the end of the article).
So while I agree with you that most of the stuff I describe in my article is probably /currently/ academic (as Christian suggested) to anyone who isn't building an O/R Mapper, I feel that it would be to waste a lot of opportunities to continue to see it this way. I believe I see why you associate this to an article about "building Hibernate" (and to some degree Spring) - and I think it is a very sharp observation, it is just that I would argue that in the long run, seeing this as something that has to do just with persistence would mean passing on the opportunity to exploit the same structures that O/R Mappers often use to deal with this particular type of infrastructural demands.
"Why not make use of context maps?"
Because the article addresses how to manage a domain model within one context - I fully agree with you that if the model required for persistence looks very different from the domain model you want to use for your business logic then splitting them into two separate contexts with one model each and using a context map to keep track and transform between them is a great option. But as I said above this article isn't about persistence - even though it talks about the same kind of aspects that a persistence component would often be the primary (but not necessarily exclusive) consumer of.
But let's say we have a Domain Context and a Persistence Context - each with its own version of the domain model - and a transformer that can transform between the two models. I think what you're saying is that in that case, only the model in the persistence context would need the kind of things I'm talking about in this article. In that case, I would disagree - I think there are many different components, living in many different contexts, that could make use of aspects on their models.
But I do agree that currently we don't see a lot of this outside the world of O/R Mapping...again, if the article had not been so darn long already, this is exactly what I would have liked to go on to discuss, so I can't tell you how happy I am about the critical feedback from you and the others here in the comments section! :-))
"Erik Evans says: not everything is a large system will be well-designed, so let's try to create a good design for at least the core. "
"Introducing ORM invariably hurts the design of your domain model and those classes that are affected should best be considered as part of the not-well-designed-part of a system. It's up to each one of us to figure out ways to cope with that. "
An interesting point when you make it that general and one that I would love to discuss with you (preferrably over beer) :-)
I disagree with you to some extent - well, only on the "invariable hurts" part, really...Nonetheless, I would argue that it isn't what this article is about, as explained above. However, since I've written a few O/R Mappers I'd love to discuss this with you in some other thread! But perhaps you only mentioned it as a segway into:
"It would assume however that AOP would seldom be a solution. AOP typically adds behavior where it's not yet available. By modifying your core domain classes with AOP you run the risk of moving from the well-designed area to a less-well-designed area."
This certainly has very much to do with the article - and here we disagree completely. I think AOP helps move infrastructure (which is what so much of the cross cutting concerns in an application is about) out of the domain model, with two distinct wins:
1) The domain model can be kept clean and domain focused
2) The infrastructure code can easily be written in a modular, generic and reusable way.
"Is that what you want? Is persistence so critical that it's allowed to hurt the most fundamental domain classes?"
On the contrary - I don't think neither persistence nor /any other components/ that might need the types of cross cutting, interception and introduction based infrastructural features (and I would argue that persistence only broke this ice because persistence needed these features the most, first) should be allowed the domain model! This whole article is about attempting to find ways of solving that very issue.
But I think the core of the point you make here is that you're saying that AOP would hurt the design of the target (model) classes...I don't really see why this would be the case? Even with the architecture suggested in this article before an AOP framework is used, while it is of course inconvenient to write the subclasses manually, I don't see how the surrounding infrastructure hurts the domain model, which is kept completely unwitting of the whole thing.
If you could please elaborate on how you see AOP as potentially hurting the design of the target classes I think that would be enlightening to the discussion?
Otherwise, if you agree with me that any such damage is really at worst pretty minimal (such as the need to use AbstractFactory if we're using a runtime subclasser) do you also agree that in that case using AOP doesn't actually conflict with the goals of DDD?
/Mats
Re: Much content, little sense
Hi Greg,
Thanks for the feedback.
I do agree to some extent with you and others who say that design patterns often represent shortcomings in the programming language. I do mention in the article that some languages, such as Ruby, support features that makes it even easier to implement the core concepts that I'm trying to advocate in this article - reusable infrastructure in the form of mixins and interceptors (favoring composition over inheritance) applicable in any of several different ways including manual subclasses, AOP frameworks and indeed features of some very dynamic languages such as Ruby.
It may be that we agree on the principles this article is actually about - finding ways to make infrastructure code reusable and accessible to many parts of the application without unduely increasing the coupling - but that you react on the admittedly kludgy implementation required in languages such as Java and C#, which does indeed depend on a couple of well known design patterns that are targeted for just these types of languages but aren't really needed in other languages.
I won't argue with you there. But on the other hand, part of the idea behind the article is to show that these concepts are in fact compatible with stongly typed OO - even if the implementation isn't as slick as in Ruby.
However, I'm not sure I follow what you mean by: "The article and code ignores the value of a layered architecture and dependency inversion."
Since I'm not sure what your argument is here I realize this doesn't answer it, but I will note that I'm using the architecture suggested in the article together with a layered architecture and dependency injection, so as far as I know it doesn't ignore nor is it incompatible with with these things...but I'd love if you could elaborate a bit on this.
/Mats
Re: Much content, little sense
Hi John,
Thanks! :-) Indeed the things I actually try to advocate in the article (reusable DMM infrastructure using mixins and interceptors) are supposed to be platform agnostic, even though the discussion about how to apply these ideas in a strongly typed OO language such as java/C# is of course not really equally agnostic (as noted by Greg).
/Mats
Re: Java Listings available
Chris,
"You are no longer able to write simple JUnit tests because your domain objects depend on Spring, ApsectJ and LTW."
Assuming that the behavior under test is depending non those aspects being present - which it typically shouldn't be, if we're able to keep our domain model classes free of infrastucture concerns (that are put in aspects). Likewise, the mixins and interceptors that form the aspects should preferably also be easily testable in separation. The test you mention would be the test checking that everything worked together.
/Mats
Re: Java Listings available
Hi Chris
Yes, your observation is, to a certain extent, valid. However, the intention was not to get too involved with the SpringFramework but to stick to the theme of the article; i.e. to produce domain classes with minimal infrastructure code, and delegate infrastructure code to other classes.
For me, one of the more important side effects of this article is that you can use simple POJO-based unit tests to test your domain classes and domain rules and logic. Likewise, the infrastructure code is then also easily testable, independent of the domain classes. Just what we want we're aiming for. Like the article suggests, AOP is just a convenience mechanism that does the proxying for you so that you don't have to write all those proxy classes by hand.
Bottom line: make sure that we don't litter our domain classes with infrastructure code from the proxy classes, factory classes, etc, and vis versa.
The kind of testing that you refer to deals more with, what I call, integration testing, i.e. to make sure everything hangs together correctly. In this case, Spring offers convenient ApplicationContext aware JUnit test classes that can be extended for such integration testing.
/Aslam
Typing
But on the other hand, part of the idea behind the article is to show that these concepts are in fact compatible with stongly typed OO - even if the implementation isn't as slick as in Ruby.
And Mats of course means /statically/ typed, not /strongly/ typed when he talks about differences between Java/C# and Ruby, right Mats? ;-)
Kind regards
Niclas
Re: Typing
Hi Niclas,
Yes I do, thanks for the correction!
The point is how to bind the domain object to the mgmt infrastructure...
... and you are right that AOP is a nice way to extract out this cross-cutting concern. However, the cost of using AOP is either:
• a more complex build process, or
• having to manage load-time weaving (problematic at best when coupled with Eclipse OSGi), or
• if using Spring AOP, then having to register every domain object class as a prototype so that Spring can do its magic (unless 2.5 has some sort of wildcarding, does anyone know?)
I read your article from the viewpoint of how Naked Objects Framework accomplishes this same requirement (my background being that I was part of the team that delivered a 700+ user enterprise system running on Naked Objects). The standard approach for NOF does require a small amount of "obeseness" in the POJOs. Specifically:
import org.nakedobjects.applib.DomainObjectContainer;public class Customer { private String firstName; public String getFirstName() { getContainer().resolve(); return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; getContainer().objectChanged(); } private DomainObjectContainer container; public DomainObjectContainer getContainer() { return container; } public void setContainer(DomainObjectContainer container) { this.container = container; }}
The responsibility of the Customer domain object is to (a) ask the DomainObjectContainer to lazily load its properties and (b) to inform the DOC when it has changed. The DOC itself is injected into the DO when it is initially pulled back from the RDBMS. As you probably have realized, the DOC is just an interface that is implemented by the NOF.
My view is that this is a pretty minimal dependency. Of course, you can imagine it's easy enough to write an aspect to completely remove the requirement for the resolve() and objectChanged() calls if desired. Another alternative would be to use CGLib to enhance the DO's bytecode, in the same way that Hibernate does. So long as the DOC gets called, mission accomplished.
As a side note on implementation, the NOF itself wraps each DO in an instance of NakedObject. It is this that manages the lazy and dirty states. Note though that the NakedObject isn't a proxy, rather it is completely generic. The NOF viewers access the metamodel (to paint the generic UI) via this NakedObject wrapper.
Cheers
Dan Haywood
Keeping persistence out of the domain
There is a common view, reinforced by this article, that a domain model should be completely ignorant of persistence. Taken to it's logical extreme it results in the convolutions introduced by Mats in this article.
• A domain object is allowed to know that it is a persistent object
• A domain object is allowed to control the business logic that related to persistence (e.g. A Customer can refuse to be deleted if there are outstanding Invoices; an Invoice can ensure that all InvoiceLines are deleted before being deleted)
These responsibilities sit comfortably in the domain model. To put these responsibilities elsewhere results in the distribution of business rules among artificial constructs such as "repositories", "Data Access Objects", etc.. The proxies, awkward inheritance hierarchies, etc. that Mats proposes are more such artificial constructs.
• Use POJO's
• Each domain object has save and remove methods
public void remove() { PersistenceHelper.remove(this); }
• Each domain class has methods (static methods in Java terms) for doing such things as findById, getAll, etc.
public static Person findById(final Long id) throws NotFoundException { return PersistenceHelper.findById(Person.class, id); }
• All these persistence methods delegate to a persistence infrastructure (e.g. Hibernate, JPA)
• Have no direct dependency on a concrete persistence mechanism in the domain objects (no SQL, no JDBC, no imports of Hibernate classes)
• The actual persistence implementation is supplied or found at runtime (in my case via a PersistenceHelper class)
Dirty Tracking only on Setters? That won't work!
First of all, let me commend you on the thorough article. I really enjoyed it.
Now on to the meat...
As we've discussed in the past, handling concurrency requires fetching an object before changing it, just as a reminder, here's my post on the subject: Realistic Concurrency.
In this model, no setters are called on domain objects. Instead, when a method is called, the method changes the fields on the object.
If you were to do dirty tracking by keeping a copy of the object, you'd know that the object was changed. Since you cannot assume that every method changes an object, I don't see how you could use the interception technique to have it handle this scenario.
Re: Keeping persistence out of the domain
• A domain object is allowed to know that it is a persistent object
• A domain object is allowed to control the business logic that related to persistence (e.g. A Customer can refuse to be deleted if there are outstanding Invoices; an Invoice can ensure that all InvoiceLines are deleted before being deleted)
I'd agree with those, and this is exactly what Naked Objects does. For example, one can annotate properties and actions with @Disabled(When.UNTIL_PERSISTENT).
• Each domain object has save and remove methods
public void remove() { PersistenceHelper.remove(this); }
In Naked Objects terms this would be getContainer().disposeInstance(this);
• Each domain class has methods (static methods in Java terms) for doing such things as findById, getAll, etc.
public static Person findById(final Long id) throws NotFoundException { return PersistenceHelper.findById(Person.class, id); }
Well, Naked Objects used to work this way. However, the trouble is that one cannot override or mock such methods. In NOF 3.0 we now use repositories which are registered with the framework and which are instantiated as singletons. These are injected into every domain object that need them, and are accessible (if appropriate) directly from the UI. Putting this functionality as instance methods means that we can then override or provide different implementations (eg running against an in-memory DB rather than a fully fledged RDBMS).
public class PersonRepositoryImpl implements PersonRepository { public Person findById(final Long id) throws NotFoundException { return HibernateHelper.findById(Person.class, id); }
• All these persistence methods delegate to a persistence infrastructure (e.g. Hibernate, JPA)
• Have no direct dependency on a concrete persistence mechanism in the domain objects (no SQL, no JDBC, no imports of Hibernate classes)
• The actual persistence implementation is supplied or found at runtime (in my case via a PersistenceHelper class)
Yup, this is exactly what Naked Objects does. Hibernate is used for the back-end persistence.
Re: Dirty Tracking only on Setters? That won't work!
Hi Udi,
Thanks, glad you liked it! :-)
>In this model, no setters are called on domain objects. Instead, when a method is called, the method changes the fields on the object.
Yes, this is an issue - you have to code your methods to be stringent in always going by the getters/setters rather than accessing the fields directly, since the field access can't be intercepted. Again, I would love to see real AOP features in the platforms (Java/C#) but since we don't have that, we'll have to use patterns and procedures to achieve a properly modular design of the infrastructure code.
So, to directly answer your question, you'd have to modify the methods so that they updated the objects using the setters (or write them thusly from scratch, not necessarily a bad idea...what if you wanted to extend your class with one that did something kewl in the setter and then the setter wasn't called when the object was updated by your method?).
/Mats
Re: Keeping persistence out of the domain
Hi John,
Well, there are many ways to skin a cat! :-)
Much of what you talk about can be solved using Inversion of Control, such that you get events when objects are persisted or deleted and can hook into your own methods for performing validation or additional cleanup. On the Active Record approach (Save() methods on the domain objects) I agree with you that it can be a very comfortable API to be able to find the persistence methods on the objects themselves - but I would prefer to mix those methods onto the objects using the approach described in this article! ;-)
But bottom line is I don't disagree with you. If you can get away without cluttering your domain model classes more than in your example, I agree it's certainly no big deal - definitely something that you can live with and indeed nothing that motivates the kind of architecture discussed in this article. I'd wholeheartedly agree with your recommendation to use this approach whenever it was certain that the minimal "bloat" you describe was all we would end up with.
My article attempts to examine a long term strategy for dealing with the particular kind of infrastructural complexity that doesn't have the decency to stop at littering your domain model classes with a few persistence related methods. I hasten to add that I'm not suggesting your applications nor the infrastructure for them wouldn't be complex - I'm saying /if/ you find yourself in a situation where the particular types of infrastructure demands of your application pushes you towards a domain model that is increasingly bloated by infrastructural concerns - and you find that moving the infrastructure ocde out of the domain model entirely would result in less efficient workarounds - then using the approach described in this article can provide a way to refactor back to a clean domain model. Some very complex applications don't run into this issue, for sure, but for those that do I think the article provides sound advice on how to deal with it.
Thanks for the great feedback! :-)
/Mats
Re: Keeping persistence out of the domain
Hi Dan,
Yes, as long as we don't have good, native support for AOP in the platforms (Java/C# or even JVM/CLR) I agree that each way to write an AOP framework carries its own drawbacks. Personally I think the main drawback with runtime subclassing - relying on the Factory - isn't too bad, since you may well want to have a place to inject a reference to the container/context (just as you mention).
As I said to John, I see no problem with the idea of using AOP to apply interfaces+mixins with persistence methods onto the domain objects. In fact I think it can be a great idea, since it allows all parts of the application to easily access the persistence mechanism - including other aspects.
I think your comment underlines the premise of the article, which is that it can be very convenient to be able to have your infrastructure related features accessible on the objects themselves - including persistence methods! I couldn't agree more with this - it's what the article's all about. :-) In fact, the article looks at what happens when we're so in love with this idea that it starts to seriously threaten the maintainability of our domain model classes...and how we can deal with it.
/Mats
Re: Java Listings available
thanks a lot for useful knowledge
www.vitamins5.com
Re: Dirty Tracking only on Setters? That won't work! by Jon B
I have to agree that "static" dirty tracking (such as with an attribute or something) doesn't work in general. For example, what if the setter is implemented like this:
set
{
if (!String.IsNullOrEmpty(value))
{
m_name = value;
}
}
You only want the dirty flag to be set if the m_name is actually changed. Or what happens if you do validation and throw exceptions in the setter. The only way around this that I can see is to do data comparisons to determine the "dirtiness", which has it's own set of (potentially major) issues.
About abstract factory and domain logic
Regarding Abstract Factory pattern, it hides one effective technique which is rarely used. Domain model may contain if-then-else or switch logic to determine which concrete product to use in further execution. The decision is typically based on some domain data, e.g. create discount if price > 100.
Such domain logic can be moved to a concrete factory like in this article www.codinghelmet.com?path=howto/reduce-cyclomat...
In simpler cases, we can even use lambdas to replace abstract factories like here: www.codinghelmet.com/?path=howto/reduce-cycloma...
The point is that concrete factory then becomes part of the domain model. Its purpose is to wrap creation of one concrete product, but also to act as a kind of strategy. Net result is that domain class can be simplified and doesn't have to contain switching logic anymore.
Close
#### by
on
Allowed html: a,b,br,blockquote,i,li,pre,u,ul,p
Email me replies to any of my messages in this thread
Allowed html: a,b,br,blockquote,i,li,pre,u,ul,p
Email me replies to any of my messages in this thread
38 Discuss
Login to InfoQ to interact with what matters most to you.
|
2017-08-21 06:31:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29852697253227234, "perplexity": 1501.300528080166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107720.63/warc/CC-MAIN-20170821060924-20170821080924-00280.warc.gz"}
|
http://hmj2.math.sci.hokudai.ac.jp/page/36-2/564.html
|
# Hokkaido Mathematical Journal
## Hokkaido Mathematical Journal, 36 (2007) pp.245-282
PDF
### Abstract
In this paper, the weak $L\log L$ estimates for the commutators of a class of Littlewood-Paley operators with real parameter are established by using a technique of the sharp function.
MSC(Primary) 47G10 42B25 parameterized Littlewood-Paley operators, commutators, sharp function, Young function, Luxemburg norm, weak $L\log L$ estimates
|
2018-01-22 14:24:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6815304756164551, "perplexity": 5060.358055587872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891377.59/warc/CC-MAIN-20180122133636-20180122153636-00712.warc.gz"}
|
https://math.stackexchange.com/questions/2987679/a-functional-recurrence-relation-with-differentiation-closed-form
|
# A functional recurrence relation with differentiation, closed form
I'm interested in the way of solving the following recurrence relation:
$$a_{n+1}=a_n'+a_1 a_n-b_1 b_n \\ b_{n+1}=b_n'+b_1 a_n+a_1 b_n$$
Where all $$a_n,b_n$$ are functions of $$x$$, and $$'$$ means derivative w.r.t. $$x$$. The initial functions $$a_1$$ and $$b_1$$ are known. Also, assume we have closed forms the derivatives: $$a_1^{(n)}$$ and $$b_1^{(n)}$$ in terms of $$n$$.
I expect there to be a closed form solution for this case, however, I have trouble deriving it.
One idea I had was to use operator notation and matrix notation, then we have:
$$\begin{bmatrix} a_{n+1} \\ b_{n+1} \end{bmatrix}= \begin{bmatrix} D+a_1 & -b_1 \\ D+b_1 & a_1 \end{bmatrix} \begin{bmatrix} a_n \\ b_n \end{bmatrix}$$
Which obviously can be written as:
$$\begin{bmatrix} a_{n+1} \\ b_{n+1} \end{bmatrix}= \begin{bmatrix} D+a_1 & -b_1 \\ D+b_1 & a_1 \end{bmatrix}^n \begin{bmatrix} a_1 \\ b_1 \end{bmatrix}$$
However, the $$n$$th power of a matrix is found through its eigenvalues, and I'm not sure how to find eigenvalues for the operator matrix.
We can also rewrite this in a more clear way:
$$\begin{bmatrix} a_{n+1} \\ b_{n+1} \end{bmatrix}= \left(\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix} \frac{d}{dx} +\begin{bmatrix} a_1 & -b_1 \\ b_1 & a_1 \end{bmatrix} \right)^n \begin{bmatrix} a_1 \\ b_1 \end{bmatrix}$$
And use binomial sum? Not sure. How to correctly write binomials sum with a differential operator inside?
Or maybe there's a more simple way to obtain the closed form?
I would prefer to use strictly real methods if possible, thought I kind of doubt it could work without complex numbers.
By letting $$z_n = a_n + i b_n$$ the given equations can be coupled as
$$z_{n+1} = z_n' + z_1 z_n = (D+z_1)z_n = (D+z_1)^n z_1$$
$$z_2=z_1'+z_1^2$$ $$z_3=z_1''+3 z_1 z_1'+z_1^3$$ $$z_4=z_1'''+3 z_1'^2 + 4 z_1 z_1''+6 z_1' z_1^2 + z_1^4$$ and the general form for $$z_n$$ is given by Faà di Bruno's formula.
|
2019-04-19 05:09:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9591118693351746, "perplexity": 92.0276014169453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527135.18/warc/CC-MAIN-20190419041415-20190419063415-00434.warc.gz"}
|
https://www.isaacslavitt.com/posts/quantified-driving/
|
### Looking at (small) data from five years of driving
I finally got rid of my car earlier this year. It served me well, but I currently live in the city and take the subway most of the time. Now that it's gone, it's time to take a look at the data.
From when I bought the car used in 2009 until I sold it this past summer, I made a log entry every time I refueled the car. (I realize this is not normal.)
The very last page... :.(
I noted the date, mileage on the odometer, number of gallons of gas I bought, and the price of gas. Every now and then I'd remember to bring in the log and update my spreadsheet on Google Docs. Here's what the data looks like after being downloaded as a csv.
In [1]:
!head data/jeep.csv
"date","miles","gal","price"
"5/8/2009","35,666",15.9,2.09
"5/8/2009","36,052",17,2.05
"5/9/2009","36,433",17.1,2.19
"5/16/2009","36,774",16.7,2.45
"5/22/2009","37,095",16.1,2.45
"6/7/2009","37,434",17.1,2.59
"6/19/2009","37,774",17.2,2.69
"6/21/2009","38,179",17.6,2.49
"6/23/2009","38,561",17.3,2.64
Let's read it in and get going.
In [2]:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
import numpy as np
import pandas as pd
df = pd.read_csv('data/jeep.csv', parse_dates=['date'])
Out[2]:
date miles gal price
0 2009-05-08 35,666 15.9 2.09
1 2009-05-08 36,052 17.0 2.05
2 2009-05-09 36,433 17.1 2.19
3 2009-05-16 36,774 16.7 2.45
4 2009-05-22 37,095 16.1 2.45
In [3]:
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 171 entries, 0 to 170
Data columns (total 4 columns):
date 171 non-null datetime64[ns]
miles 171 non-null object
gal 171 non-null float64
price 171 non-null float64
dtypes: datetime64[ns](1), float64(2), object(1)
memory usage: 6.7+ KB
Looks like the miles column was read in as strings (its dtype is object), so we just need to fix that:
In [4]:
# remove commas and cast to int type
df.miles = df.miles.str.replace(',', '').astype(int)
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 171 entries, 0 to 170
Data columns (total 4 columns):
date 171 non-null datetime64[ns]
miles 171 non-null int64
gal 171 non-null float64
price 171 non-null float64
dtypes: datetime64[ns](1), float64(2), int64(1)
memory usage: 6.7 KB
## Miles driven¶
Let's look at the mileage over time. I bought the car used, so the mileage starts near 40,000.
In [5]:
# use friendly commas for thousands
from matplotlib.ticker import FuncFormatter
commas = FuncFormatter(lambda x, p: format(int(x), ','))
plt.plot(df.date, df.miles)
plt.xlabel('date')
plt.ylabel('mileage')
plt.gca().get_yaxis().set_major_formatter(commas)
plt.show()
## Mileage per gallon¶
One quantity of interest to car buyers is fuel efficiency. Without knowing much about cars, I'd assume that as components age the mileage per gallon (MPG) will decline. Let's see if that's the case. The first thought on how to do this would be
$$\frac{\textrm{miles driven since last fill-up}}{\textrm{gallons purchased last time}}$$
but the fill-ups came at irregular intervals and weren't always up to a full tank, so these fractions could be all over the place. Instead, we can divide the cumulative sum of miles driven by the cumulative amount of gasoline purchased. The first few numbers will be crazy but it will pretty quickly converge to a reasonable estimate.
In [6]:
# add a miles per gallon series
df['mpg'] = df.miles.diff().cumsum() / df.gal.cumsum()
# plot the points
plt.scatter(df.miles, df.mpg, c='#30a2da')
# plot the exponentially weighted moving average of the points
plt.plot(df.miles, pd.ewma(df.mpg, 5), lw=2, alpha=0.5, c='#fc4f30')
# ignore the first handful of these
plt.xlim(45000, df.miles.max())
plt.ylim(17, 21)
plt.xlabel('mileage')
plt.ylabel('mpg')
plt.show()
A couple of interesting notes here:
• The dots that are unusually high probably correspond with long highway drives.
• As for the discontinuity around 54,000 miles, I figured out by looking through my car folder that this was when I replaced the tires. It's possible that the new tires had higher drag causing lower fuel efficiency.
## Fitting a simple model¶
It could be interesting to quantify how fuel efficiency changed as the car aged.
In [7]:
import statsmodels.formula.api as smf
# create a new view into the old dataframe for fitting
# (throw away the first 30 points)
model_df = df.ix[30:, ['miles', 'mpg']].copy()
# divide the new miles by 10k so the regression coefficients aren't tiny
model_df.miles /= 1e4
# fit the model
results = smf.ols('mpg ~ miles', data=model_df).fit()
# print out results
results.summary()
Out[7]:
Dep. Variable: R-squared: mpg 0.943 OLS 0.942 Least Squares 2285. Thu, 25 Dec 2014 3.55e-88 20:26:13 101.49 141 -199.0 139 -193.1 1
coef std err t P>|t| [95.0% Conf. Int.] 22.1292 0.060 368.805 0.000 22.011 22.248 -0.4399 0.009 -47.807 0.000 -0.458 -0.422
Omnibus: Durbin-Watson: 0.313 0.784 0.855 0.124 0.063 0.94 3.072 40.1
It looks like my fuel efficiency went down about 0.44 miles/gal for every 10,000 miles of wear and tear.
## Miles driven per year compared to US average¶
First, we'll sum up the data grouped by year.
In [8]:
driven_per_year = df.set_index('date').miles.diff().resample('1A', how='sum')
driven_per_year
Out[8]:
date
2009-12-31 11303
2010-12-31 14645
2011-12-31 8935
2012-12-31 7206
2013-12-31 3503
2014-12-31 1487
Freq: A-DEC, Name: miles, dtype: float64
Next, we'll grab annual driving stats from the Federal Highway Administration, an agency of the U.S. Department of Transportation.
In [9]:
# use read_html to get the <table> element as a DataFrame
us_avg_miles = parsed_tables.pop().set_index('Age')
us_avg_miles
Out[9]:
Male Female Total
Age
16-19 8206 6873 7624
20-34 17976 12004 15098
35-54 18858 11464 15291
55-64 15859 7780 11972
65+ 10304 4785 7646
Average 16550 10142 13476
In [10]:
driven_per_year / us_avg_miles.ix['20-34', 'Male']
Out[10]:
date
2009-12-31 0.628783
2010-12-31 0.814697
2011-12-31 0.497052
2012-12-31 0.400868
2013-12-31 0.194871
2014-12-31 0.082721
Freq: A-DEC, Name: miles, dtype: float64
So it looks like I drive less than average. This makes sense — from 2008 until mid-2010, I spent about half the year at sea with my car sitting on the pier. In 2010, I moved back to Boston and my primary mode of transportation was walking or taking the T.
## Gas prices¶
Even without driving that much, it's possible to spend quite a bit on gas.
In [11]:
plt.plot(df.date, (df.gal * df.price).cumsum(), label='cumulative dollars spent on gas')
plt.plot(df.date, df.gal.cumsum(), label='cumulative gallons gasoline bought')
plt.xlabel('date')
plt.gca().get_yaxis().set_major_formatter(commas)
plt.legend()
plt.show()
Thanks to the Energy Information Administration, an entity within the U.S. Department of Energy, it's also possible to compare the prices at which I bought gas to the national average price over time.
In [12]:
!wget -O data/gas.xls http://www.eia.gov/dnav/pet/xls/PET_PRI_GND_DCUS_NUS_W.xls
--2014-12-25 20:26:15-- http://www.eia.gov/dnav/pet/xls/PET_PRI_GND_DCUS_NUS_W.xls
Resolving www.eia.gov (www.eia.gov)... 205.254.135.7, 2607:f368:1000:1001::1007
Connecting to www.eia.gov (www.eia.gov)|205.254.135.7|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 328192 (320K) [application/vnd.ms-excel]
Saving to: ‘data/gas.xls’
100%[======================================>] 328,192 419KB/s in 0.8s
2014-12-25 20:26:15 (419 KB/s) - ‘data/gas.xls’ saved [328192/328192]
It's an Excel file, unfortunately, but pandas makes it easy to get at the data. I opened Libreoffice just to see how it's laid out, then read it in with the read_excel method.
In [13]:
gas = pd.read_excel('data/gas.xls', sheetname='Data 1', header=2)
gas.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1271 entries, 0 to 1270
Data columns (total 16 columns):
Date 1271 non-null datetime64[ns]
Weekly U.S. All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon) 1134 non-null float64
Weekly U.S. All Grades Conventional Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. All Grades Reformulated Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. Regular All Formulations Retail Gasoline Prices (Dollars per Gallon) 1265 non-null float64
Weekly U.S. Regular Conventional Retail Gasoline Prices (Dollars per Gallon) 1265 non-null float64
Weekly U.S. Regular Reformulated Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. Midgrade All Formulations Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. Midgrade Conventional Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. Midgrade Reformulated Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. Premium All Formulations Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. Premium Conventional Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. Premium Reformulated Retail Gasoline Prices (Dollars per Gallon) 1048 non-null float64
Weekly U.S. No 2 Diesel Retail Prices (Dollars per Gallon) 1084 non-null float64
Weekly U.S. No 2 Diesel Ultra Low Sulfur (0-15 ppm) Retail Prices (Dollars per Gallon) 412 non-null float64
Weekly U.S. No 2 Diesel Low Sulfur (15-500 ppm) Retail Prices (Dollars per Gallon) 96 non-null float64
dtypes: datetime64[ns](1), float64(15)
memory usage: 168.8 KB
From here, we'll narrow the national gas data down to just the part we care about (regular gasoline) and plot it alongside what I paid.
In [14]:
# slice off the data we care about
regular = gas[['Date', 'Weekly U.S. Regular All Formulations Retail Gasoline Prices (Dollars per Gallon)']]
regular.columns = ['date', 'usa_avg_price']
# plot the two trends
plt.plot(df.date, df.price, label='dollars per gallon paid')
plt.plot(regular.date, regular.usa_avg_price, lw=2, label='national avg')
# annotate the figure
plt.xlabel('date')
plt.ylabel('gas price per gallon (USD)')
plt.xlim(df.date.min(), df.date.max())
plt.legend()
plt.show()
I was usually filling up in New England which is probably a little more expensive than the national average.
## In closing¶
Not exactly the most hard hitting blog entry of all time, but I spent a bunch of time over the years writing it all down so... may as well make some plots!
Aside: for anyone considering keeping a log in your car, keep in mind that you will take a lot of flak from passengers. Think about it.
Any comments or suggestions? Let me know.
|
2020-10-26 12:31:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.266642689704895, "perplexity": 14082.246789343799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891228.40/warc/CC-MAIN-20201026115814-20201026145814-00046.warc.gz"}
|
https://socratic.org/questions/there-are-two-distinct-round-tables-each-with-5-seats-in-how-many-ways-may-a-gro
|
# There are two distinct round tables, each with 5 seats. In how many ways may a group of 10 be seated?
6048
#### Explanation:
There's a few ways to approach this - let me show you one.
Let's first notice that when dealing with seating at tables, we'll have to work with the lack of definite seat numbers (as opposed to a row of seats that has definite end seats). So what I want to do is break down the problem into the numbers of ways I can have 10 people sit at the two tables, then deal with actual seating.
Putting people at the tables
If we take Table 1 and choose 5 people to be at it, we'll naturally deal with Table 2 (if the people aren't at Table 1, they're at Table 2).
So let's take 10 choose 5:
C_(n,k)=((n),(k))=(n!)/((k!)(n-k)!) with $n = \text{population", k="picks}$
$\left(\begin{matrix}10 \\ 5\end{matrix}\right) = 252$
One thing to notice - when I have persons 1, 2, 3, 4, 5 at Table 1 and 6, 7, 8, 9, 10 at Table 2, it's the same as having persons 6, 7, 8, 9, 10 at Table 1. So to get rid of the duplicates, let's divide by 2:
$\frac{252}{2} = 126$
Seating at the tables
For each table, we have 5 people sitting. If we were dealing with rows of chairs, we'd have 5! =120 ways to seat the people. But since these are round tables, we have to divide through by the number of seats to get rid of the duplicates (seating 1, 2, 3, 4, 5 is the same as seating 2, 3, 4, 5, 1), which leaves 4! =24 per table.
There are two tables, so we multiply by 2:
$24 \times 2 = 48$
Putting it together
There are 126 ways to divide the people up into distinct groups at the tables and 48 ways to seat people per group, giving:
$126 \times 48 = 6048$ different ways to seat people.
|
2019-10-19 05:03:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7948210835456848, "perplexity": 462.29527883723364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986688826.38/warc/CC-MAIN-20191019040458-20191019063958-00528.warc.gz"}
|
http://experiment-ufa.ru/sec(5x)-2=0
|
# sec(5x)-2=0
## Simple and best practice solution for sec(5x)-2=0 equation. Check how easy it is, and learn it for the future. Our solution is simple, and easy to understand, so dont hesitate to use it as a solution of your homework.
If it's not what You are looking for type in the equation solver your own equation and let us solve it.
## Solution for sec(5x)-2=0 equation:
Simplifying
sec(5x) + -2 = 0
Remove parenthesis around (5x)
ces * 5x + -2 = 0
Reorder the terms for easier multiplication:
5ces * x + -2 = 0
Multiply ces * x
5cesx + -2 = 0
Reorder the terms:
-2 + 5cesx = 0
Solving
-2 + 5cesx = 0
Solving for variable 'c'.
Move all terms containing c to the left, all other terms to the right.
Add '2' to each side of the equation.
-2 + 2 + 5cesx = 0 + 2
Combine like terms: -2 + 2 = 0
0 + 5cesx = 0 + 2
5cesx = 0 + 2
Combine like terms: 0 + 2 = 2
5cesx = 2
Divide each side by '5esx'.
c = 0.4e-1s-1x-1
Simplifying
c = 0.4e-1s-1x-1`
|
2017-09-19 11:44:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18542520701885223, "perplexity": 2397.6949951317183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685129.23/warc/CC-MAIN-20170919112242-20170919132242-00120.warc.gz"}
|
https://web2.0calc.com/questions/number-of-unique-triad-combinations-equals-528
|
+0
# number of unique triad combinations equals 528.
+5
192
1
number of unique triad combinations equals 528.
What is the number of unique items that makes a total of 528 unique combinations (order does not count as a combination, so ABC is the same combination as CAB)?
I found a formula for finding the number of combinations for a specified number of unique items when the number of items in a set is also specified, however that formula uses factorials and I don't know how to work around that when I don't know one of the variables.
Guest Mar 17, 2017
Sort:
#1
+6587
+3
I think this is basically your question:
$$528=\frac{n!}{3!(n-3)!}$$
And you're trying to solve for n.
I would just try plugging in different values for n until you find the right one.
First I tried 5, that gave me 10.
Next I tried 15, that gave me 455.
Next I tried 16, that gave me 560.
....Well... I don't think that n should have a fraction....I don't know, but maybe this helps some anyway.
hectictar Mar 17, 2017
### 20 Online Users
We use cookies to personalise content and ads, to provide social media features and to analyse our traffic. We also share information about your use of our site with our social media, advertising and analytics partners. See details
|
2018-03-22 00:34:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7444810271263123, "perplexity": 799.1115980099472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647707.33/warc/CC-MAIN-20180321234947-20180322014947-00166.warc.gz"}
|
https://docs.insurace.io/landing-page/documentation/protocol-design/staking
|
Mining
Stake to Earn rewards
Participants who stake tokens such as ETH, DAI, USDT and other eligible tokens into the platform will gain \$INSUR tokens as incentives. This process, which is also known as mining, has been adopted by many other DeFi projects such as Curve, YFI, Sushi Swap and more.
As InsurAce.io offers both Insurance and Investment services, allowing customers to stake capital on both sides, the INSUR tokens will be mined together according to the following equation.
$Speed(Investment)+Speed(Insurance)=C$
where C is a constant determined by the token economy adjusted over time.
This equation will ensure that a delicate balance is maintained between the Insurance and Investment functions. When the insurance capital pool faces insufficiency, thereby posing higher risks and raising the premium, the mining speed on the Insurance side can be increased to attract more capital to the insurance pool. Similarly, once the capital pool on the Insurance side is sufficient, the mining speed on the Investment side can be increased to attract more investment funds. This balance is driven by the SCR mining mechanism at a lower level.
SCR mining is used to dynamically adjust the mining speed among the insurance capital pools according to the capital sufficiency status represented by the SCR ratio, incentivizing more capital staking to the less staked pools represented by the SCR ratio. This will help to reduce the premium on those new or high-risk protocols as a whole. The mining speed will return to normal when the SCR ratio is equal to or above the platform-defined SCR ratio.
Specifically, assume
$S_i$
is the number of tokens staked in an insurance capital pool at time
$t$
,
$S_\mathrm{max}$
is the number of tokens staked in the largest pool at
$\mathit{t-1}$
whose mining speed is
$\mathit{Speed}_\mathrm{min}$
, then the mining speed for pool
$i$
will be calculated by,
$Speed_i = \begin{cases} Speed_\mathit{min} &\text{if } S_i\ge S\mathit{max} \\ Speed_\mathit{min} \times \lambda(1-S_i/S_\mathit{max}) &\text{if } S_i
where λ is the speed scale, e.g if λ = 2, the maximum mining speed will increase by 200% from standard speed.
|
2022-01-23 15:47:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4965800642967224, "perplexity": 4279.95882023021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304287.0/warc/CC-MAIN-20220123141754-20220123171754-00066.warc.gz"}
|
http://www.kinberg.net/wordpress/stellan/a-paradigm-shift/
|
I am a Christian and I believe in Jesus, Son of God, his miracles, his resurrection and ascent. I have experienced several paradigm shift in my life, even painful.
But I liked the paradigm shift that was depicted in the film “Man to Earth”. I liked it as I understand the pain and shock Johns listener experienced. I like it also as I have always seen the similarities between Jesus message, the message of Zarathushtra and Siddharta Gautama. You may also know about Srinagar, In India were Indians claim they have the grave of Jesus. (Read more at http://news.bbc.co.uk/2/hi/programmes/from_our_own_correspondent/8587838.stm ) You may also know that the Quran say that Jesus never died on the cross. A son of God and virgin Mary can’t be killed?
Well. Theauthor of the book ( Emerson Bixby and his father, science fiction writer Jerome Bixby ) certainly knew about this and more.
Man from Earth has other dimensions. the film also raises the issue ethernal live that follows the resurrection of the dead. How is to live hudnreds of years? This film makes you think about that.
That part is available in Yotube . I add the English subtitle text below wo you can get it translated to your language.
Subtitle with Johns words:
“The entire Bible is mostly myth and allegory
with maybe some basis in historical events.
You were part of that history?
Yes.
Moses. Moses was based on Misis, a Syrian myth,
and there are earlier versions. All found floating on water,
the staff that changed to a snake, waters that were parted so followers could be led to freedom
and even receive laws on stone or wooden tablets.
One of the apostles? They weren’t really apostles.
They didn’t do any real teaching.
Peter the fisherman learned a little more about fishing.
The mythical overlay is so enormous… and not good.
The truth is so, so simple.
All right, John,
hit us with the short form.
He met the Buddha, liked what he heard,
thought about it for a while –
say 500 years, while he returned
to the Mediterranean, became an Etruscan.
Seeped into the Roman empire.
He didn’t like what they became – A giant killing machine.
He went to the Near East thinking,
“Why not pass the Buddha’s So he tried.
One dissident against Rome? Rome won. The rest is history.
Lot of fairy tales mixed in.
they pinned on Jesus to fulfill prophecy.
The crucifixion.
He blocked the pain as he had learned to doin Tibet and India.
He also learned to slow his body processes down to the point where they were undetectable.
They thought he was dead. So his followers pulled him
from the cross, placed him in a cave…
His body normalized as he had trained it to.
He attempted to go away undetected,
but some devotees were standing watch. Tried to explain.
They were ecstatic.Thus, I was resurrected,
and I ascended to central Europe to get away as far as possible.”
## A agnostic pluralist seeker
Insert math as
$${}$$
|
2020-04-07 04:06:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22290050983428955, "perplexity": 9056.828561749955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371665328.87/warc/CC-MAIN-20200407022841-20200407053341-00479.warc.gz"}
|