text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
This example is devoted to the MeshWorker framework and the discontinuous Galerkin method, or in short: DG method. It includes the following topics. The particular concern of this program are the loops of DG methods. These turn out to be especially complex, primarily because for the face terms, we have to distinguish the cases of boundary, regular interior faces and interior faces with hanging nodes, respectively. The MeshWorker. The model problem solved in this example is the linear advection equation \[ \nabla\cdot \left({\mathbf \beta} u\right)=0 \qquad\mbox{in }\Omega, \] subject to the boundary conditions \[ u=g\quad\mbox{on }\Gamma_-, \] on the inflow part \(\Gamma_-\) of the boundary \(\Gamma=\partial\Omega\) of the domain. Here, \({\mathbf \beta}={\mathbf \beta}({\bf x})\) denotes a vector field, \(u\) the (scalar) solution function, \(g\) a boundary value function, \[ \Gamma_-:=\{{\bf x}\in\Gamma, {\mathbf \beta}({\bf x})\cdot{\bf n}({\bf x})<0\} \] the inflow part of the boundary of the domain and \({\bf n}\) denotes the unit outward normal to the boundary \(\Gamma\). This equation is the conservative version of the advection equation already considered in step-9 of this tutorial. In particular, we solve the advection equation on \(\Omega=[0,1]^2\) with \({\mathbf \beta}=\frac{1}{|x|}(-x_2, x_1)\) representing a circular counterclockwise flow field, and \(g=1\) on \({\bf x}\in\Gamma_-^1:=[0,0.5]\times\{0\}\) and \(g=0\) on \({\bf x}\in \Gamma_-\setminus \Gamma_-^1\). We(). The first few files have already been covered in previous examples and will thus not be further commented on: Here the discontinuous finite elements are defined. They are used in the same way as all other finite elements, though – as you have seen in previous tutorial programs – there isn't much user interaction with finite element classes at all: they are passed to DoFHandler and FEValues objects, and that is about it. We are going to use the simplest possible solver, called Richardson iteration, that represents a simple defect correction. This, in combination with a block SSOR preconditioner (defined in precondition_block.h), that uses the special block matrix structure of system matrices arising from DG discretizations. We are going to use gradients as refinement indicator.. Like in all programs, we finish this section by including the needed C++ headers and declaring we want to use objects in the dealii namespace without prefix. First, we define a class describing the inhomogeneous boundary data. Since only its values are used, we implement value_list(), but leave all other functions of Function undefined. Given the flow direction, the inflow boundary of the unit square \([0,1]^2\) are the right and the lower boundaries. We prescribe discontinuous boundary values 1 and 0 on the x-axis and value 0 on the right boundary. The values of this function on the outflow boundaries will not be used within the DG scheme. start with the constructor. The 1 in the constructor call of fe is the polynomial degree. In the function that sets up the usual finite element data structures, we first need to distribute the DoFs. We start by generating the sparsity pattern. To this end, we first fill an intermediate object of type DynamicSparsityPattern with the couplings appearing in the system. After building the pattern, this object is copied to sparsity_pattern and can be discarded. To build the sparsity pattern for DG discretizations, we can call the function analogue to DoFTools::make_sparsity_pattern, which is called DoFTools::make_flux_sparsity_pattern: Finally, we set up the structure of all components of the linear system. Here we see the major difference to assembling by hand. Instead of writing loops over cells and faces,_-}\): For this simple problem we use the simplest possible solver, called Richardson iteration, that represents a simple defect correction. This, in combination with a block SSOR preconditioner, that uses the special block matrix structure of system matrices arising from DG discretizations. The size of these blocks are the number of DoFs per cell. Here, we use a SSOR preconditioning as we have not renumbered the DoFs according to the flow field. If the DoFs are renumbered in the downstream direction of the flow, then a block Gauss-Seidel preconditioner (see the PreconditionBlockSOR class with relaxation=1) does a much better job. Here we create the preconditioner, then assign the matrix to it and set the right block size: After these preparations we are ready to start the linear solver. We refine the grid according to a very simple refinement criterion, namely an approximation to the gradient of the solution. As here we consider the DG(1) method (i.e. we use piecewise bilinear shape functions) we could simply compute the gradients on each cell. But we do not want to base our refinement indicator on the gradients on each cell only, but want to base them also on jumps of the discontinuous solution function over faces between neighboring cells. The simplest way of doing that is to compute approximative gradients by difference quotients including the cell under consideration and its neighbors. This is done by the DerivativeApproximation class that computes the approximate gradients in a way similar to the GradientEstimation described in step-9 of this tutorial. In fact, the DerivativeApproximation class was developed following the GradientEstimation class of step-9. Relating to the discussion in step-9, here we consider \(h^{1+d/2}|\nabla_h u_h|\). Furthermore we note that we do not consider approximate second derivatives because solutions to the linear advection equation are in general not in \(H^2\) but in \(H^1\) (to be more precise, in \(H^1_\beta\)) only. The DerivativeApproximation class computes the gradients to float precision. This is sufficient as they are approximate and serve as refinement indicators only. Now the approximate gradients are computed and they are cell-wise scaled by the factor \(h^{1+d/2}\) Finally they serve as refinement indicator. The output of this program consists of: And finally we show a plot of a 3d computation. In this program we have used discontinuous elements. It is a legitimate question to ask why not simply use the normal, continuous ones. Of course, to everyone with a background in numerical methods, the answer is obvious: the continuous Galerkin (cG) method is not stable for the transport equation, unless one specifically adds stabilization terms. The DG method, however, is stable. Illustrating this with the current program is not very difficult; in fact, only the following minor modifications are necessary: While the 2d solution has been shown above, containing a number of small spikes at the interface that are, however, stable in height under mesh refinement, results look much different when using a continuous element: In refinement iteration 5, the image can't be plotted in a reasonable way any more as a 3d plot. We thus show a color plot with a range of \([-1,2]\) (the solution values of the exact solution lie in \([0,1]\), of course). In any case, it is clear that the continuous Galerkin solution exhibits oscillatory behavior that gets worse and worse as the mesh is refined more and more. There are a number of strategies to stabilize the cG method, if one wants to use continuous elements for some reason. Discussing these methods is beyond the scope of this tutorial program; an interested reader could, for example, take a look at step-31. Given that the exact solution is known in this case, one interesting avenue for further extensions would be to confirm the order of convergence for this program. In the current case, the solution is non-smooth, and so we can not expect to get a particularly high order of convergence, even if we used higher order elements. But even if the solution is smooth, the equation is not elliptic and so it is not immediately clear that we should obtain a convergence order that equals that of the optimal interpolation estimates (i.e. for example that we would get \(h^3\) convergence in the \(L^2\) norm by using quadratic elements). In fact, for hyperbolic equations, theoretical predictions often indicate that the best one can hope for is an order one half below the interpolation estimate. For example, for the streamline diffusion method (an alternative method to the DG method used here to stabilize the solution of the transport equation), one can prove that for elements of degree \(p\), the order of convergence is \(p+\frac 12\) on arbitrary meshes. While the observed order is frequently \(p+1\) on uniformly refined meshes, one can construct so-called Peterson meshes on which the worse theoretical bound is actually attained. This should be relatively simple to verify, for example using the VectorTools::integrate_difference function. A different direction is to observe that the solution of transport problems often has discontinuities and that therefore a mesh in which we bisect every cell in every coordinate direction may not be optimal. Rather, a better strategy would be to only cut cells in the direction parallel to the discontinuity. This is called anisotropic mesh refinement and is the subject of step-30.
https://dealii.org/8.5.0/doxygen/deal.II/step_12.html
CC-MAIN-2018-34
en
refinedweb
Do. - R. Need more Help with R for Machine Learning? Take my free 14-day email course and discover how to use R on your project (with sample code). Click to sign-up and also get a free PDF Ebook version of the course. Start Your FREE Mini-Course Now!. Yet it works after installing ellipse packages Nice! Thanks for the post. I tried Google first when I saw the error, interestingly the 5th search result is the link back to this post. 🙂 It works after installing ellipse package. Thanks Jason for this great learning tutorial! Glad to hear it! the most important piece of information missing in the text above: install.packages(“ellipse”) Thanks Rajendra !!! Thanks for the tip!. Perfect remarks. Always follow the instructions of the tutorial. Great tutorial Jason, as usual of course. Thanks. Thanks for highlighting the problem. True, it was hard to find a solution elsewhere on the Internet! Thanks! Your comment saved me!. Yup .. was solved. Please check in discussion. 1) You have to install ‘ellipse” package. which is missing install.packages(“ellipse”) 2) If you change plot=pairs, you can see output. If you want, ellipse, please install ellipse package.. Hi, I have installed the “caret” package. But after this when i am loading through library(caret), I am getting the below error: Error: package or namespace load failed for ‘ggplot2’ in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]): there is no package called ‘munsell’ Error: package ‘ggplot2’ could not be loaded I’m sorry, I have not seen this error. Perhaps check on stackoverflow if anyone has had this fault or consider posting the error there. Hi Jason, Post some R&D was able to resolve it. Below are the actions i did. install.packages(“lattice”) install.packages(“ggplot2”) install.packages(“munsell”) install.packages(“ModelMetrics”) library(lattice) library(munsell) library(ggplot2) library(caret) Nice work, glad to hear you figured it out. Hi Jason, Need one help again. Thanks in advance. Since this is my first Data Science Project, so the question. What and how to interpret from the result of BoxPlot. It will be of help if you can kindly explain a bit of the outcome of the BoxPlot. The box plot shows the middle of the data. The box is the 25th to 75th percentile with a line showing the 50th percentile (median). It is a fast way to get an idea of the spread of the data. More here: Hello Dr Brownlee, I am new to machine learning and attempting to go through your tutorial. I keep getting an error saying that the accuracy matrix values are missing for this line: results <- resamples(list(lda=fit.lda, cart=fit.cart, knn=fit.knn, svm=fit.svm, rf=fit)) The accuracy matrix for lad works however cart, knn, svn and rf do not work. Do you have any suggestions for how to fix this? Thanks I’m sorry to hear that. Confirm your packages are up to date. sir, how could i plot this confusionMatrix “confusionMatrix(predictions, validation$Species)”? Looks good. > predictions confusionMatrix(predictions, validation$Species) Error in confusionMatrix(predictions, validation$Species) : object ‘predictions’ not found Could anyone clarify this error ? predictions confusionMatrix(predictions, validation$Species) Error in confusionMatrix(predictions, validation$Species) : object ‘predictions’ not found Could anyone clarify this error ?Earlier I posted something wrong Perhaps double check that you have all of the code from the post? Hi, I am beginner in this so may be the question I am going to ask wont make sense but I would request you to please answer: So when we say lets predict something, what exactly we are predicting here ? In case of a machine (motor, pump etc) data(current, RPM, vibration) what is that can be predicted ? Regards, Saurabh In this tutorial, given the measurements of iris flowers, we use a model to predict the species. set.seed(7) > fit.lda <- train(Species~., data = data, method = "lda", metric = metric, trControl = control) The error i got, and also tried to install mass package but it not getting installed properly and showing the error again and again please help me sir. ERROR:- Error in unloadNamespace(package) : namespace ‘MASS’ is imported by ‘lme4’, ‘pbkrtest’, ‘car’ so cannot be unloaded Failed with error: ‘Package ‘MASS’ version 7.3.45 cannot be unloaded’ Error in unloadNamespace(package) : namespace ‘MASS’ is imported by ‘lme4’, ‘pbkrtest’, ‘car’ so cannot be unloaded Error in library(p, character.only = TRUE) : Package ‘MASS’ version 7.3.45 cannot be unloaded I’m sorry to hear that. Perhaps try installing the MASS package by itself in a new session? Hello Jason, My question is regarding scaling. For some algorithms like adaboost/xgboost it is recommended to scale all the data. My question is how do I unscale the final predictions. I used the scale() function in R. The unscale() function expects the center(which could be mean/median) value of the predicted values. But my predicted values are already scaled. How can I unscale them to the appropriate predicted values. I am referring to prediction on unlabeled data set. I have searched for this in many websites but have not found any answer. Perhaps scale the data yourself, and use the coefficients min/max or mean/stdev to invert the scaling? I am getting an error while summarize the accuracy of models, Error in summary(results) : object ‘results’ not found You may have missed some code? > library(tidyverse) Sir while adding this library in R, I have installed the package then also it is showing following the error: please help me Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : there is no package called ‘bindrcpp’ Error: package or namespace load failed for ‘tidyverse’ Sorry, I am not familiar with that package or the error. Perhaps try posting on stackoverflow? Dear Jason, I am not familiar with R tool. When I started reading this tutorial, I thought of installing R. After the installation when I typed the Rcommand, I got the following error message. Please give me the suggestion… > install.packages(“caret”) Installing package into ‘C:/Users/Ratna/Documents/R/win-library/3.4’ (as ‘lib’ is unspecified) — Please select a CRAN mirror for use in this session — trying URL ‘’ Content type ‘application/zip’ length 5097236 bytes (4.9 MB) downloaded 4.9 MB package ‘caret’ successfully unpacked and MD5 sums checked The downloaded binary packages are in C:\Users\Ratna\AppData\Local\Temp\RtmpQLxeTE\downloaded_packages > Great work! Hi Jasson, I tried the following but got the error, > library(caret) Error: package or namespace load failed for ‘caret’ in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]): there is no package called ‘kernlab’ > It looks like you might need to install the “kernlab” package. Thanks Jasson!!!! You’re welcome. e1071 error i have installed all packages….. What error did you get? Hi All, When I created the updated ‘dataset’ in step 2.3 with the 120 observations, the dataset for some reason created 24 N/A values leaving only 96 actual observations. Copy and pasted the code from the post above. Any idea what caused or how to fix so that the ‘dataset’ is inclusive of all the training data observations? Doesn’t seem to be anything wrong with the IRIS dataset or either of the validation_index or validation datasets. Perhaps double check you have the most recent version of R? Update to OP, I reran the original commands from that section and was able to pull in all 120 observations for the training data. Not sure why it didn’t fetch all the data the first time but looks ok now. Glad to hear it. Just confirming, the above tutorial is a multiclass problem? Therefore, I should be able to apply the above methodology to a different k=3 problem. Is this correct? Yes. Jason, For my first Machine Learning Project, this was EXTREMELY helpful and I thank you for the tutorial. I had no problems going through the script and even applied to a dummy dataset and it worked great. So thank you. My question is more related to automation. Instead of manually assessing the accuracy of each model to determine which one to use for prediction, is there a way to automatically call the model with the highest accuracy in the “predictions <- predict([best model], validation)" script. Hope to hear from you soon. Well done! Great question. Generally, once we find the best performing model, we can train a final model that we save/load and use to make predictions on new data. This post will show you how: And this post covers the philosophy of the approach: I did not get 100% Accuracy after following the tutorial example. I got : Confusion Matrix and Statistics Reference Prediction Iris-setosa Iris-versicolor Iris-virginica Iris-setosa 10 0 0 Iris-versicolor 0 8 0 Iris-virginica 0 2 10 Overall Statistics Accuracy : 0.9333 95% CI : (0.7793, 0.9918) No Information Rate : 0.3333 P-Value [Acc > NIR] : 8.747e-12 Kappa : 0.9 Mcnemar’s Test P-Value : NA Statistics by Class: Class: Iris-setosa Class: Iris-versicolor Class: Iris-virginica Sensitivity 1.0000 0.8000 1.0000 Specificity 1.0000 1.0000 0.9000 Pos Pred Value 1.0000 1.0000 0.8333 Neg Pred Value 1.0000 0.9091 1.0000 Prevalence 0.3333 0.3333 0.3333 Detection Rate 0.3333 0.2667 0.3333 Detection Prevalence 0.3333 0.2667 0.4000 Balanced Accuracy 1.0000 0.9000 0.9500 > Perhaps try running the example multiple times? sir, i want to learn r programing at vedio based tutorial which is the best tutorial to learn r programming quickly Sorry, I don’t have good advice on how to learn R, I focus on teaching how to learn machine learning for R. For learning R I strongly recommend the Coursera.org “R Programming” certification course, When I took it it was free, now is paid, something around USD 50. Thanks for the tip. Json, nice article. I left working code with minor fixes in this repo, please comment on, thanks, Carlos Thanks for sharing. what if the dataset is used EuStockMarkets, I error continue Sorry, I don’t know about that dataset. successfully done, and got the result.Thanks for the great tutorial. But now i wonder, what to do further, how to use it in a generic manner for any dataset. How to use the created pred.model anywhere. Yes, you can use this process on other datasets. ohk, but to use any dataset we need to make the dataset similar to that of the iris dataset, like 4 numberic columns, and one class. Also, accuracy output is similar over the traning dataset , and the validation dataset, but how does that help me to predict now what type of flower would be next if i provide it the similar parameters. Now, for example i have to create a model which predicts the cpu utilization of the servers in my Vcenter or complete DC, how can i create a model which will take my continious dataset and predict that when the CPU utilization will go high and i can take proactive measures. This process will help you work through your predictive modeling problem systematically: Hello Jason, Thanks for the clear and set by step instructions. But I just want to understand what I need to do after creating the model and calculating its accuracy ? Can you please explain to draw some conclusions/predictions on the iris data set we used ? You can finalize and start using it. See the whole process here: Hi Sir, For the confusionMatrix(predictions, validation$Species) command , I am getting an output as follows: [,1] [,2] [1,] 0 0 [2,] 0 10 I am not getting the same output as you got. Any suggestions on what I may be doing wrong.?The code worked exactly till this command. Perhaps double check that you copied all of the code exactly? And that your Python environment and libraries are up to date? Hello good day Jason. Thank you very much for the tutorial I have been very useful but I have a question, in the section of “print (fit.lda)” does not deploy “Accuracy SD Kappa SD”. What remains of the tutorial if you have given me exact, could you help me with this doubt ?. Greetings. The API may have changed slightly since I wrote the post nearly 2 years ago. Great article for a beginner like me Jason! Appreciate your work in sharing your knowledge and educating. Is there a model fit for ‘multinomial logistic regression’ algorithm? Thank you! There is, but I would not recommend it. Try LDA instead. Upon further reading of other articles written by you, I realize that I may not need to use ‘Regression’. My dataset has category variables as input and category attributes as output as well (having 7 levels). So, it is a classification problem and I’m assuming I can use one of the 5 models/fit you have given as examples here in this Iris project. Can you let me know if this is correct understanding? – Thank you This post may help clear you the difference between classification and regression: It works for me with the iris data. Thanks a lot Jason! But there are no “Accuracy SD Kappa SD ” from the output of the fit models. Should I change some settings to get them? I believe the API may have changed. Dear Jason Brownlee I have a dataset with 36 predictors and one for classes (“1”, “2”, “3”) that I got it through clustering in the previous step. My question is: how can I reduce all my predictors into five variables representing specific dimensions in my study? Should I run PCA separately to produce a new dataset with 5 predictors and one for classes or is there any other ways? Thank you in advance. Yes, you would run dimensionality reduction first to create a new input dataset with the same number of rows. Hi Jason, I am getting the error – Error: could not find function “trainControl” on typing tc<-trainControl(method="cv",number=10). What can be the solution for this? Perhaps caret is not installed or caret is not loaded? Maybe a very stupid question. But I read “Build 5 different models to predict species from flower measurements”. So now I am wondering what the predictions of the model tell me about this, how I can use it. For example I now go to the forest take some measurements, assume that the flower is one of those tested, and want to know which flower it is exactly. In a traditional regression formula it is straightforward as you can put in your measurements in the formula and the calculated estimates and get an outcome. But I don’t know how to use the outcomes in this case. Great question. Once we choose a model and config, we develop a final model trained on all data and save it. I write about this here: You can then load the model feed in an input (e.g. a set of measures) and use it to make predictions for those measures. Does that help? Your Tutorial is just awesome . Thanks its really helpful Thanks, I’m glad to hear that. This tutorial really helpful. Thanks Jason. Thanks, I’m glad to hear that. Hi Json how are ? I am new in machine learning. i want to invent a unique idea and prof about islami banking and conventional banking. how can i do that. if any suggestion please give me and i cant fund any islami banking data set like loan info or deposit bla bla bla. i want your valuable information Sorry, I don’t know about banking. Dear Sir, I am getting the following error Error in [.data.frame(out, , c(x$perfNames, “Resample”)) : undefined columns selected when i execute results <- resamples(list(lda=fit.lda,nb=fit.nb, cart=fit.cart, knn=fit.knn, svm=fit.svm, rf=fit.rf)) What can be the solution for this? Did you copy all of the code from the tutorial? Hi Jason, First of all great work. May God bless you for all your sincere efforts in sharing the knowledge. You are making a big difference to the lives of people. Thank you for that. I have a basic question. Now we have a best fit model – how to use it in day to day usage – is there a way I can measure the dimensions of a flower and “apply” them in some kind of equation which will give the predicted flower name? How to use the results? Kindly advise when you are free. Hi Jason – found another of your post: Thank you. Hi Jason – the post was good in telling what to do. However the how part is still missing. Hence still need help. Thank you. You can make predictions as follows: yhat = predict(Xnew) Where Xnew are new measurements of flowers. Thanks. Also see this post: Dear Jason, Thank you very much for your response. Yes – I was about to post that this link was indeed helpful in operationalizing the results. Thank you very much. Please keep up the great work. Hussain. Glad to hear it. Hi Jason, Thank you for sharing your methods and codes. It was very useful and easy to follow. Could you please share how to score a new dataset using one of the models? For example, in my training, random forest has the best accuracy. Now I want to apply that model on a new dataset that doesn’t have the outcome variables, and make prediction. Thank you Great question, I answer it in this post: Thanks Jason. I read through the link. I already finalized my model, now I need save the model and apply it for operational use. The dataset that I want to score doesn’t have the outcome variable. I am not sure which command I should use to make prediction after I have the final model. Can you suggest R codes to do so? You can use the predict() function to make a prediction with your finalized model. Hi Jason! Amazing post! I have the same doubt @TNguyen did. I Finalized the model and we know that LDA is the best model to apply in this case. How I predict the outcome variables (species) in a new dataframe without this variable? IN summary, how I deploy the model on a new dataset? Sorry, I´m new in this field and I´m learning new things all the time! Good question, I have an answer here that might help: Here is a tutorial for finalizing a model in R: Hey, Thanks for the great tutorial. I have a problem and don’t know what’s wrong in the section 3.1 Dimensions. When I execute dim(datset) I get the answer NULL. Do you know why R Studio doesn’t show me the dimensions of the “dataset”? Best regards Martin Perhaps confirm that you loaded the data? Very nice, Its given overall structure to write the ML in R. Thanks! Hey, I am working on the package called polisci and I am asked to build a multiple linear regression modal. My dependent variable is human development index and my independent variable is economic freedom. Could ou please tell me how can I perform multiple linear regression modal. How do I go about in steps and what is the syntax in R to get to the results and get a graph? Any help would be greatly appreciated. Please help me as I am an undergrad student and I am learning this for the first time Thanks in advance Sorry, I don’t have examples of time series forecasting in R. Here are some resources that you can use: Thanks Jason, Was able to execute the program in one go.. Excellent description Well done! Jason, Thank you very much for you above work. Its Ohsomesss, I am new to data science and want to make my carrier. I found so useful this superb…… You’re welcome, I’m glad it helped. Please suggest me a path to become data scientist step by step, and how to become champion in R and python ?? Sure, start right here: Thanks, Jason! This is a very helpful post. I did exactly as suggested, but when i print(fir.lda), I do not have the accuracy SD or kappa SD. How should I get them? Thanks Perhaps the API has changed. Amazing tutorial! I just need to install 2 packages: e1071 and Ellipse After that, i wrote every single line, and i really appreciate the big effoct you done to explain so clear!!! Thank you I’m glad it helped. Thanks for the great tutorial. I have a problem and don’t know what’s wrong in the section 6. Make predictions . When I execute predictions <- predict(fit.lda, validation) confusionMatrix(predictions, validation$Species) I get the error "error data and reference should be factors with the same levels."like this Do you know why R Studio doesn’t show me the Make predictions of the “dataset”? Perhaps try running the script from the command line? Please check above link ^ When I try to do the featurePlots I get NULL. I installed the ellipse package without error. featurePlot(x=x, y=y, plot=”ellipse”) NULL > # box and whisker plots for each attribute > featurePlot(x=x, y=y, plot=”box”) NULL > # density plots for each attribute by class value > scales featurePlot(x=x, y=y, plot=”density”, scales=scales) NULL everything up to this point worked fine I’m sorry to hear that. Perhaps there is another package that you must install? I was also getting same error. You would like to check below link for the solution: Thanks for sharing. Great tutorial Jason! Inspired me to look up and a learn a bit more about LDA and KNN etc. which is a bonus! Great self-learning experience. I have experience with analytics but am a relative R newbie but I could understand and follow with some googling about the underlying methods and R functions.. so, thanks! One thing… the final results comparison in Section 5.3 are different in my case and are different each time I run through it. Reason is likely that in Step 2.3 there is no set.seed() prior. So, when you create the validation dataset which is internally a random sample in createDataPartition().. results are different in the end? Thanks. Well done. Thanks. Yes, some minor differences should be expected. Jason Brownlee you the real MVP! hanks, I’m glad the tutorial helped. Hello this is very helpful, but i don’t get how i should read the Scatterplot Matrix Each plot compares one variable to another. It can help you get an idea of any obvious relationships between variables. I have problem in this…. #,] 1 2 3 4 5 6 #,] Please help me out What is the problem exactly? it can’t findout the objects….and function also..! what can i do? What objects? Jason, you’re indeed a MVP! Ran this in R 3.5. 1. install.packages(“caret”, dependencies = c(“Depends”, “Suggests”)) ran for almost an hour. May be connectivity to mirrors. 2. install.packages(“randomForest”) & library(“randomForest”) needed Would definitely recommend this to all ML aspirants as a “hello world!” Hearty Thanks! Well done! Thanks for the tips. First I’d like to say THANK YOU for making this available! It has given me the courage to pursue other ML endeavors. The only issue I have is that when summarizing the results of the LDA model using the print(fit.lda), my results do not show standard deviation. Do you know if this is due to a setting in R that needs to be changed? Any help is appreciated! Best, Giovanni Yes, I believe the API changed since I wrote the tutorial. Hi! First of all great tutorial, I followed and achieved the expected results Really helped me overcome ML jitters. Very very grateful to you. But I really wanted to know the mathematical side of these algorithms, what do these do and how? Also, it would be wonderful if you could explain things like “relaxation=free” (What does this mean?) That do not have a straight answer on Google Thanks Regards Thanks for the feedback Shane. We focus on the applied side of ML here. For the math, I recommend an academic textbook. Very nice tutorial. The caret package is a great invent. where can I find a rapid theory of the methods to understand it better? Right here: Thanks, Brownlee. You’re welcome.
https://machinelearningmastery.com/machine-learning-in-r-step-by-step/
CC-MAIN-2018-34
en
refinedweb
Funniest blog thread ever? Sam Ruby: Matrix reloaded The funniest thing I've read in blogland for some time. " /> « March 2003 | Main | May 2003 » Sam Ruby: Matrix reloaded The funniest thing I've read in blogland for some time. I saw the ad, Cog, for the first time this afternoon. I immediately thought - that's how a car made out of software would work. There's no CGI trickery. Apparently it took 607 takes. So let's be honest - a car made out of software would really work like the first 606... Counterpane: Crypto-Gram: April 15, 2003. Anthony Eden: Deployment is a PITA if you want to define a JMS queue you have to modify the file jbossmq-destinations-service.xml. Instead of this, keep your own jms descriptor file in source control and deploy it onto JBoss along with the jar/ear. JBoss will pick up any file that ends in '-service.xml' in the default/deploy directory. Example: <?xml version="1.0" encoding="UTF-8"?> <!-- $Id$ --> <server> <mbean code="org.jboss.mq.server.jmx.Queue" name="jboss.mq.destination:service=Queue,name=iams.reachenvelope.in"> <depends optional- jboss.mq:service=DestinationManager </depends> <depends optional- jboss.mq:service=SecurityManager </depends> <attribute name="SecurityConf"> <security> <role name="guest" read="true" write="true"/> </security> </attribute> </mbean> </server> Granted this is not exactly well-documented (I took a good guess and it worked), but nothing in JBoss is. Can we just expect JBoss to pick up our version of castor over the one that is in the lib directory? No. We must remove the one which is in the lib directory and copy our version into that directory. There are two ways around this, both of which involve Ant. The first is to identify what jar you want gone and write some Ant to delete it as part of deployment. The second is better, but more work. Rewrite the the Jboss startup script in Ant. Yes, I've done this, yes, it works, and no I don't care about any possible weirdness in using Java to start Java. I did this to solve a nasty classpath problem. I extracted the classpth to a .properties file - much easier to work than a .bat file. After that I started up Tomcat with an Ant script, to see if it would work (yes) and just for fun, but now I think people should consider using Ant as the default mechanism for starting and stopping servers - and that means shipping JBoss et al with Ant files, not shell scripts. It would be nice if there was some standard way of saying "execute this SQL code the first time this EAR is run". Again Ant is our friend. Use a SQL task to run against the DB at deployment time. Although if JBoss used Ant, it could provide a system entity hook to do exactly what Anthony wants. It doesn't have to part of a J2EE deployment standard, just easy to do. Later: from the comments, a good idea from Crowbar Tech: ...issuing arbitrary SQL when a bean is deployed. Simple create a trivial MBean that will execute any SQL passed via the configuring XML and include the according jboss-service.xml in the jar you deploy. First of a series on organizing projects and creating a good working environment for writing and managing Java code. Install a JDK to c:/java or /usr/local/java. these will not be the default paths. If you have a jdk somewere else on your machine you want to use, make a note of where it is, but it's better if you uninstall it and start over. Windows users: add these to your environment JAVA_ROOT=c:/java JAVA_HOME=%JAVA_ROOT%/jdk1.3.1_07 Linux users: add these to your .profile JAVA_ROOT=/usr/local/java; export $JAVA_ROOT JAVA_HOME=$JAVA_ROOT/jdk1.3.1_07; export $JAVA_HOME Remember to change JAVA_HOME if you installed the JDK somewhere else - don't change JAVA_ROOT. We'll talk about how to set things up for multiple JDKs in a future installment. Every Java project you work on will depend on these two - they are the most important java projects outside the JDK. Unpack Ant 1.5.2 underneath $JAVA_ROOT. In windows add these envars to your environment: ANT_HOME=%JAVA_ROOT%/apache-ant-1.5.2 PATH=%PATH%;%ANT_HOME%\bin; In Linux add these envars to your .profile: ANT_HOME=$JAVA_ROOT/apache-ant-1.5.2; export $ANT_HOME PATH=$PATH;$ANT_HOME/bin; export $PATH Fire up a command prompt and type 'ant -version': > ant -version Apache Ant version 1.5.2 compiled Copy the following jarfiles into $ANT_HOME/lib - jdepend.jar, junit.jar, catalina-ant.jar, jing.jar The jars are available at . You can test your ant setup by downloading them with this buildfile: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <project name="ibs-third-party" default="get-ant-ext" basedir="."> <property environment="env" /> <target name="get-ant-ext" description="" > <property name="ant.loc" value="${env.ANT_HOME}/lib"/> <property name="ant.href" value=""/> <get src="${ant.href}/jdepend.jar" dest="${ant.loc}/jdepend.jar"/> <get src="${ant.href}/jing.jar" dest="${ant.loc}/jing.jar"/> <get src="${ant.href}/junit.jar" dest="${ant.loc}/junit.jar"/> <get src="${ant.href}/catalina-ant.jar" dest="${ant.loc}/catalina-ant.jar"/> </target> </project> Notice already we are not dependent on the filesystem - this buildfile will work on ether Linux or Windows. Save the file somewhere as build.xml. Cd to that directory and type > ant Unpack Junit3.8 underneath $JAVA_ROOT. Sam Ruby: Experimental Gump RSS feed Here's some ant to GET the Ant task extensions I tend to rely on: <property name="gob.href" value=""/> <property name="gob.href.core" value="${gob.href}/core"/> <property name="ant.href" value=""/> <property name="ant.href.ext" value="${gob.href}/ext"/> <property name="ant.lib.home" value="env.ANT_HOME/lib"/> <property name="depend.where" value="${project.home}/${third.party.lib}"/> <target name="get-ant-extensions" description="" > <get src="${ant.href.ext}/jdepend.jar" dest="${ant.lib.home}/jdepend.jar"/> <get src="${ant.href.ext}/jing.jar" dest="${ant.lib.home}/jing.jar"/> <get src="${ant.href.ext}/junit.jar" dest="${ant.lib.home}/junit.jar"/> <get src="${ant.href.ext}/catalina-ant.jar" dest="${ant.lib.home}/catalina-ant.jar"/> <!-- add more ant extensions here --> </target> Great, I can get a jarfile. But seeing as people like Sam are pushing out RSS build feeds, why not push out release and version data as an RSS feed? Every OS project you depend on - you could examine its RSS feed to check for new versions, bugfixes, or just to find the link to download the jar you want. Even better each version could describe its dependencies in turn in the feed to create a dependency web (this seems handier than reegineering the Jar manifest file format). Then we could hack Jdepend to examine the graph. I wonder how hard it would be to add <getrss/> to Ant that would understand such a feed? Maven is a great code repository, but it has its own XML format you need Maven to understand it or roll your own. Exposing the Maven repository as RSS would be a good start as would getting to Ruper to fix on RSS as a data format. A popular view held by WS and MEST proponents is that for the purposes of message transmission (and typically SOAP messages), TCP and HTTP are equivalent. In other words layers 4 and 7 in the OSI model are no different in Webservices architecture (OSI layers 5 and 6, Session and Presentation we'll ignore, since the Internet does). This is argued as a good thing since we move the processing semantics into SOAP headers which will transcend the mere details of various transports. I should point out that at the W3C at least, the ws-architecture group haven't feel informed by the OSI stack or the Internet subset and have a working definition of transport that is roughly 'everything that delivers SOAP messages'. Which is convenient, though it may be somewhat confusing to those involved in networking or anyone that did distributed systems 101 at college. Before we start, some history. A mistake was made in the past with the once widely held view in distributed systems that location transparency was a good thing - you shouldn't care where an object is on the network. No number of systems and networking engineers waving copies Waldo and Emerson or protesting at the watercooler with the eight fallacies on placards was enough to presuade anyone who made decisions on this stuff that transparency was a bad idea. We had to build out systems and see them fail, just to be sure. We can argue that transport transparency is a similar error - highly desirable, but highly misleading. So *is* there a useful difference? First, we need to be aware that 'application protocol' and 'transport protocol'are not well-defined sets, and what is meant can change depending on who you're talking to. On the other hand this is computing and programming, not science and mathematics; we don't always need or expect precise definitions to make sense to each other. But let's try to be more precise and identify a distinction - Actors. The primary difference between application and transport protocols is that they differ in *intended effect*. The application protocols like FTP, SMTP and HTTP have the rudiments of what are known as "performatives". Now in computing terms, a performative is a highly formalized action word, or verb. 'Highly formalized' in turn means a computer program could make a decison on what to do based on the word along with some surrounding context (such as who said it and when it was said). You use a performative solely to influence another entity to do something on your behalf. The sent message in combination with the performative is designed to influence the sender. Compare that with the notion of a medium. A medium in this sense is that which carries the performative message. Transport protocols aren't like this - they're not messenger boys, they're delivery boys. Transport protocols don't make the same utilization of action words. For example TCP use terms such as SYN, ACK, FIN, which are more like grunting than conversation. Claiming they're the same class of animal as application protocols is like claiming black is the same colour as white by progressing through infinite shades of grey; an interesting but crashingly trivial rhetorical trick. So what makes application protocols distinct from transports (and much more useful and interesting) is that when you use an application protocol your are trying to get another system to do something for you by asking it. In English we do this with action words, or verbs. In application protocols we use action words too, but just a handful. HTTP has 8, but the vast majority of the web is getting things done with just three of them, 'get', 'post' and 'connect'. In the world of computer protocols, application protocols deal with performatives and transports are media. The protocol neutral school of though would have us treat Internet protocols not as protocols, but as media. Application protocols aren't the only things that use a controlled set of well defined action words. SQL is based around a few. Space or tuple based systems such as Javaspaces and Linda are based on action words. Most of the things you do with files and directories on your computer only require a few verbs (new, move, read, write, delete, copy). In Internet protocols you can't just make up new actions words; they're usually a controlled set, and extended carefully. The main reason to add a new verb to an Internet protocol is to avoid corruption of the meaning of an existing one, but that's no guarantee of adoption by implementations. Much of thinking around making the most out of a few verbs is that it induces useful properties in a system, such as scalability and cheap global coordination. This would be very much part of REST doctrine, known there as the 'uniform interface'. Allowing anyone to make up new verbs seems like a good idea initially and may work for local or 'gated' cases (like a software component, an API or a chunk of object middleware), but imposes costs on all the other entities in the system to learn the new verbs and cross reference them with which objects they can be applied to. As the number of possible conversations increases the cost of communications quickly gets out hand and ceases to become sustainable. The counter-argument to uniformity is that no-one ever has to talk to everyone else, so the N-squared problem is theoretical. To which one could counter, no-one ever has to use all the possible 'transports' so SOAP transport transparency is in turn solving a theoretical problem (some study in how actual communications networks emerge and coalesce into hub and spoke models can be revealing here). The point of course is that the problems are not theoretical and that you do not have to reach asymptotic limit in either case to feel a pinch. The main concerns of a WS architect, if we did accept a difference between transport and application, are two-fold. First that the architectural model for Web Services has a serious hole - SOAP does not run over all things equally. Second that the meaning of the SOAP message changes depending on the protocol it runs on, which is disturbing in a Heseinberg Uncertainty kind of way. If we have a payload with a FIPA-ACL 'inform' or getStockQuote stuffed into a SOAP envelope which is in turn stuffed into a HTTP envelope, we don't expect 'inform' or getStockQuote to mean different things depending on where it is. I don't think anyone wants this. Even hard core RESTafarians would probably agree there's value in being able to span systems with the meaning of SOAP headers intact. On the other hand it's not clear it's avoidable any more than probability is avoidable in subatomic physics. Most of the Internet application protocols that one might want to to run SOAP across involve actors in the roles of client and server who are communicating using performatives (or at least a version of perfomatives analogous to grunts and utterances). Webservice and MEST architectures seems intent on modelling systems solely in terms of SOAP actors. There is little chance of giving up on protocol independence - it's too desirable a property. Fair enough. An application protocol changing the meaning of a payload is not a desirable outcome. But that is not to say there is no room for dissonance between the word games played at the SOAP level and what SOAP is being transmitted over. It's engaging in bogosity akin to that of location transparency to pretend that two SOAP actors having conversation 'Play' over UDP will be the same language game as the same SOAP actors having conversation 'Play' over HTTP, because the client and server actors in HTTP are there too, having their /own/ conversation and acting as middlemen. Hamlet just isn't the same play without Rosencrantz and Guildernstern. Zero hassle software - I don't recall any of these crashing: Darren Hobbs: Stop dissing JUnit Well said Darren. Here's my to-die-for Junit reporting tool (these are mock-ups at the moment): all tests pass: failed tests: TheArchitect.co.uk - Jorgen Thelin's weblog: Distribution of RSS Version Usage - March 2003 De facto standards, I like it ;) I re-examined the distribution of RSS version usage for my weblog, following a query on why I see RSS 2.0 as the "natural" version to base any "real RSS standard" on Seems like Jorgen's also put a W3C XML Schema together for RSS2.0. I'm obliged to cook up a RELAX NG schema... Here: They're updating every hour. The scraper script: gpr.py It's pretty ugly. Greenpeace publishes XHTML, so there's room for improvement over regexen. What's being scraped: Greenpeace news I mentioned recently that it would be nice for Amnesty to have an RSS feed. Here we go: They're updating every hour. The scraper script: air.py What's being scraped: Amnesty news library Russell Beattie Notebook - Alex is Sick Russell's kid is sick and can't blow his nose. Here's a related child rearing war story. Don't try this at home :) When she was a baby, my youngest, Dáire, had a very bad cold. Like Alex, she kept panicking about her breathing. So at one point she could barely breath at all. Screaming. Not nice. I had to do *something*. I sucked the snot out of her nose. More than once. Orla (my partner) comes home. I tell her what a wonderful father I'd been. "That's nice, hon. You know you can buy snot-sucker things. They look like a turkey baster. " "You can buy them??" "Yes. I'll get one. Coffee?" "Um, sure." Weblogs Forum - Why is Poorly Documented Code the Industry Standard? Perhaps because industry standard is to barely give programmers enough time to actually write the code itself :) The first thing to do is figure out whether you have a lazy programmer or an overworked one. There will always be some jokers who don't/won't comment their work whether they have too much to do or not, but many programmers are up to their eyeballs in the quick and dirty. They're not burning out on overtime in order to write comments. If you're developing an API, one way to be able to make time to document it is to design a smaller API. You can do this by using Plugins and Service Decorators. One way to comment the code is not to use comments, but method names. A coment+statement can almost always be replaced with a method named after the comment containing the statement in its body. So instead of this idiom: /* what */ how; use this one: what(); ... def what(){how;} [ t e c h n o \ c u l t u r e ] Most of the problems Karlin Lillington seems to have had with Linux is in installing it; that's fair enough, but the real question remains; is it usable? It seems to be, if you're using Redhat or SuSe versions 8. And for what most people use computers for (email, browsing, office, messaging), the difference between OpenOffice+Mozilla+gaim and Explorer+MSOffice+AOLIM is about a year away from becoming irrelevant. Here's the thing; my non-technical partner can use a Linux machine, but I wouldn't ask her to install it or even configure a desktop. These are two different aspects altogether - like cars and engines. The single biggest problem with Linux isn't Linux at all - it's with web sites that won't work with Mozilla or Netscape, because the designers and engineers involved are only partially competent, or the business has decided to optimize the site for Explorer (and very probably Explorer 5+). This is fine in a work environment where you can dictate what runs on a desktop, it's somewhat shortsighted for a public site. In the long run, it's just a matter of a Linux distributor getting deadly serious about going after Windows market share. It's bound to happen, after all there's only so many servers out there, and at some point in the next decade, Linux will saturate that market. Linux on the desktop will start inside firewalls and work its way out - some big companies and nations are already asking whether Linux on clients are a better economic option. You'd have to think that the only way Windows can stop being commoditized, in the way Linux/OS has commoditized the server market, is to optimize the servers for Windows clients to provide superior user experiences. Which is where .NET fits in. And of course, the desktop is becoming a marginalized client - the future of high-volume low-cost software economics is in cheap laptops/wiretops and smaller mobile devices like phones. Microsoft still hasn't figured out a strategy for mobile devices, although it is now officially deadly serious about mobility - which means it's deadly serious about figuring out how not to have the Windows franchise lose out on the mobile market. O'Reilly Network: Daddy, Are We There Yet? A Discussion with Alan Kay Outtakes: On Lisp The greatest single programming language ever designed. [update, sunday: I was wondering when Ted would blog this ; ) ] On Java: Java has a difficult time of adding to itself. On polymorphism: There is an interface algebra that today might be called polymorphism On progress:. On computer literacy: There just aren't any twentieth and twenty-first century toys to play with. Seymore Pappert's LOGO pioneered this and lead kids to real mathematical learning. On our priorities: Simplicity and beauty aren't high on people's requirement lists. ["Daddy, Are We There Yet?" The Computer Revolution Hasn't Happened Yet] Frightening. Wait until The Man figures out how to game this medium. O'Reilly Network: REST vs. SOAP at Amazon [April 05, 2003]. I think a case can be made that it's SOA because it's distributed, loosely coupled, standards-based, and can be conceived of as a "service" providing business value. It's not SOA if you use that term (as we sometimes do in these discussions) as a shorthand for "CORBA-like distributed object systems." The Mountain of Worthless Information Ted Neward gets messaging religion. Brilliant, I want to to review the book. Anyway... A messaging-based system, by virtue of the fact that it intrinsically doesn't try to do request-response, tightly-coupled communication, allows for a tremendous flexibility in enterprise systems. RPC was created because messaging-oriented approaches were considered "too hard", owing to the loosely-typed nature of messages. But with Schema to provide strong-typing rules for XML payloads in SOAP, and with JMS using Java objects (which in turn must be strongly-typed) for Message payloads, this isn't as much of an issue anymore, and I'll argue removes much of the burden of working with messaging-driven systems. The inherent flexibility gained from a messaging-driven system, however, is something that no RPC-based system can ever match. In my experience schema provide little value over an agreed upon XML document. Schema are good to check for a structurally sound document, but there's a bunch of other business level quiddities which require code or a rules language to enforce. This difference is that of a spell checker and peer review. At the moment information sharing using even the basic SOAP/XSD types is something of a black art. It's all highly fragile, and would be a lot worse if it wasn't for some seriously committed people. Though if you are lucky enough to have Java across the enterprise, then using JMS and exposing SOAP doc/lit at the enterprise boundary is going to work well. But how many enterprises are truly homogenous like this? When people first tried to run RPC and protocol tunneling over the web years ago, it was swiflty kicked out into CGI, not built into HTTP. And it seems no-body working on web architecture back then was paying much attention to this. What finally cemented my relationship with messaging approaches, however, was the SOAP 1.2 spec. For those of you who haven't read the spec, you owe it to yourself to do so. Not because it's particularly fascinating reading, but because SOAP 1.2 is quite possibly the best thing to have hit the industry in a number of years. SOAP is at heart an RPC technology with messaging retrofitted and boosted in the last eighteen months (that's being generous, it's only in the last six months people have stopped using getStockQuote to explain or justify SOAP). Maybe this is my recent infatuation with workflow coming through (and workflow and messaging have such a deep relationship with one another that I suspect it's all one big love affair) Don't go there. Build workflow on top of messaging. Munging the two together screws with dataflow and makes things too complicated. Rule of thumb taught to us via banking fulfilment systems - if you can't figure out how to automate a dataflow after a day or so, kick the flow out to manual processing. Messaging should be simple, clean, elegant; workflow is hard, dirty, bloody. I can't imagine building an enterprise system of any large size without building on a messaging-based backbone. To do otherwise would just somehow seem to tie your hands too tightly. Open secret: nobody in their right mind does anything other than messaging or flat data transfer for systems that are meant to be scalable and available. Every attempt to do> otherwise has missed the target. Get your messaging on: Re: SUO: Program Semantics Bill Andersen: The WWW community has taken this view to the limit, proposing a dizzying array of semantically crippled quick-fix languages and claiming to be doing "ontology". John Sowa:. Meet the SUO. This comunity makes makes RDF look about as abstract as Perl. Amnesty International - Working To Protect Human Rights Worldwide Amnesty doesn't have an RSS feed. But the news page looks structured enough to scrape. ideas asylum - Jamie's Weblog Studies at MLE, worked on agentcities, interested in software agents. Tripleplusgood.. I follow a good bit of code on sourceforge. For some reason I can never remember sourceforge's repository structure. Here's a quick fix for the bash shell: dehora: 508$ cd projects/third-party/junit-addons/junit-addons dehora: 509$ Repo=`cat CVS/Root` dehora: 510$ cvs -d$Repo login dehora: 511$ cvs -d$Repo update -P dehora: 522$ cd projects/jas/project dehora: 523$ Repo=`cat CVS/Root` dehora: 524$ cvs z3 -d$Repo update -P Comma Separated markup Language Specification Should put an end to those permathreads as to whether CSV is equivalent to XML... Gerald Pfeifer - Building OpenOffice with GCJ? Seems RMS wants to be able to build OpenOffice with gjc, but some Sun stuff is in the way. This has been picked up on dev@openoffice.org ITworld.com - E-BUSINESS IN THE ENTERPRISE - A study in XML culture and evolution Sean caused a bit of a stir with RDFers recently. And while this article doesn't deal with RDF, the identity and naming critiria he's critical of is at at the heart of RDF naming, much more so that the relational model. The nearest thing to Malinke naming in RDF would be bNodes (or whatever they're called at the moment). Couldn't agree more about namespaces. What were they thinking?
http://dehora.net/journal/2003/04/
CC-MAIN-2017-09
en
refinedweb
About this series This series explores application architectures that use a Service Oriented Architecture (SOA) on the back end implemented with the Grails framework. Explore how much Grails simplifies creating Web applications in general and Web services in particular. This kind of back end can be easily hooked up to any pure client-side application. In Part 1, you will use Adobe Flex to create such an application that leverages the Flash Player. In Part 2, you will use the Google Web Toolkit to create the front end in pure JavaScript. Prerequisites In this article you will build a Web application using Grails and Flex. The Grails framework is built on the Groovy programming language, a dynamic language for the Java™ platform. Familiarity with Groovy is great, but not completely necessary. Knowledge of Java can be a good substitute, or even other dynamic languages like Ruby or Python. Grails 1.0.3 was used with developing this article (see Resources). Grails can work with numerous databases or application servers, but none is needed for this article—Grails comes with both. The front end is built using Flex, an application framework that uses the ActionScript programming language and runs on the Flash Player. Again, it is okay if you are not already familiar with Flex and ActionScript. Familiarity with Java and JavaScript will help in picking up Flex. Flex SDK 3.2 or higher is needed to compile the code in this article (see Resources). To run the application, you need the Flash Player Version 10.0 or higher.. It turns out, however, that these two things work well together. You can use an SOA design to deploy services to your application servers. You can move all of your presentation logic to the client and leverage a powerful front-end technology such as Flex to create an RIA. That is exactly what you will do in this series, and you will start by creating a Web service using Grails. Web services When many developers hear the term Web service, they think of one thing: SOAP (Simple Object Access Protocol). This has a negative connotation with many developers, as they think of SOAP as being a heavy and complicated technology. However, Web services do not have to be that way. REST (Representational State Transfer)-style Web services have gained popularity because of their simple semantics. They are easier to create and to consume. They can use XML, just like SOAP services, but this can be Plain Old XML (POX) with no fancy wrappers and headers like with SOAP. The Grails framework makes it very easy to create these kind of Web services, so let's get started with a Grails domain model. Grails domain model Grails is a general purpose Web development framework. Most Web applications use a relational database to store and retrieve the data used in an application, and thus Grails comes with a powerful Object Relational Modeling (ORM) technology known as GORM. With GORM, you can easily model your domain objects and have them persisted to a relational database of your choice, without ever dealing with any SQL. GORM uses the popular Hibernate library for generating database-specific and optimized SQL, and for managing the life cycles of domain objects. Before getting in to using GORM, let's quickly discuss the application you will create and what you need to model with GORM. In the sample application, you will create a Web application that mimics the functionality of the popular site Digg (see Resources). On Digg, users can submit links to stories (Web pages). Other users can then read these stories and vote for or against them. You will capture all of this basic functionality in your application. It will let people submit and vote for stories anonymously, so you will not need to model users, just stories. With that in mind, here is the GORM model for a story in the example application, shown in Listing 1. Listing 1. The Story model class Story { String link String title String description String tags String category int votesFor int votesAgainst } That is all the code needed to model the domain object. You declare its properties and the types of those properties. This will allow Grails to create the table for you and dynamically create methods for both reading and writing data from that table. This is one of the major benefits of Grails. You only put data modeling code in one place and never have to write any boilerplate code for simple reads and writes. Now that you have the domain model in place, you can create some business services that use this domain model. Business services One of the benefits of an SOA is that it allows you to model your system in a very natural way. What are some of the operations you would like to perform? That is how you define the business services of the application. For example, you will want to be able to browse and search for stories, so you should create a service for this, as shown in Listing 2. Listing 2. Search service class SearchService { boolean transactional = false def list() { Story.list() } def listCategory(catName){ Story.findAllWhere(category:catName) } def searchTag(tag){ Story.findAllByTagsIlike("%"+tag+"%") } } The first thing you might notice is the boolean flag transactional. Grails is built on top of numerous proven technologies, including Spring. It uses Spring's declarative transactions to decorate any service with transactions. If you were performing updates on data, or you wanted this service to be used by other transactional services, then you would want to enable this. You want this service to be a read-only service. It is not going to take the place of data access, since you have a domain model that already handles all of that. The first service operation retrieves all stories. The second retrieves those for a given category. You use GORM's where-style finder for this. This lets you pass in a map of name/value pairs to form a WHERE clause in an SQL query. The third operation allows you to search for stories with a given tag. Here you use GORM's dynamic finders. This lets you incorporate the query clause as part of the name of the finder method. So to query on the tags column, you use findAllByTags. Also notice the like suffix; this is a case-insensitive SQL LIKE clause. This lets you find all stories where the tag parameter occurs as a substring of the tags attribute. For example, if a story had been tagged "BarackObama" it would show up in a search for "obama." You have created three simple but powerful ways to search for stories. Notice how succinct the syntax for this is. If you are a Java programmer, just imagine how much you would normally write to do this. The search service is powerful, but there is nothing to search yet. You need another service for managing individual stories. I have called this the Story service, and it is shown in Listing 3. Listing 3. Story service class StoryService { boolean transactional = true def create(story) { story.votesFor = 0 story.votesAgainst = 0 log.info("Creating story="+story.title) if(!story.save(flush:true) ) { story.errors.each { log.error(it) } } log.info("Saved story="+story.title + " id=" + story.id) story } def voteFor(storyId){ log.info("Getting story for id="+storyId) def story = Story.get(storyId) log.info("Story found title="+story.title + " votesFor="+story.votesFor) story.votesFor += 1 if(!story.save(flush:true) ) { story.errors.each { log.error(it) } } story } def voteAgainst(storyId){ log.info("Getting story for id="+storyId) def story = Story.get(storyId) log.info("Story found title="+story.title + " votesAgainst="+story.votesAgainst) story.votesAgainst += 1 if(!story.save(flush:true) ) { story.errors.each { log.error(it) } } story } } The Story service has three operations. First, you can create a new story. Next, you have operations for voting for and voting against a given story, via the ID of the story. In each case, notice that you have done some logging. Grails makes it easy to log—just use the implicit log object and typical log4j-style methods: log.debug, log.info, log.error, and so on. The story is saved (either inserted or deleted) using the save method on the Story instance. Notice how you can check for errors by examining the errors property of the story instance. For example, if the story was missing a value for a required field, then this would show up as part of story.errors. Finally notice that this service is transactional. This will tell Grails (and thus Spring) to either use an existing transaction (if one is already present), or create a new one whenever any of these operations are invoked. This is particularly important for the voting operations, because in those you have to read the story from the database first and then update a column. Now that you have created your basic business services, expose them as an API by creating a Web service around them, as you'll see next. Exposing the API As developers, we often think of APIs as code that we write that will be called by other developers. In a Service Oriented Architecture, this is still true. However, the API is not a Java interface or something similar; it is a Web service signature. It is the Web service that exposes the API and allows it to be invoked by others. Grails makes it easy to create a Web service—it is just another controller in a Grails application. Take a look at Listing 4 to see the API for the application. Listing 4. The API controller import grails.converters.* class ApiController { // injected services def searchService def storyService // set the default action def defaultAction = "stories" def stories = { stories = searchService.list() render stories as XML } def submit = { def story = new Story(params) story = storyService.create(story) log.info("Story saved story="+story) render story as XML } def digg = { def story = storyService.voteFor(params.id) render story as XML } def bury = { def story = storyService.voteAgainst(params.id) render story as XML } } The first thing you will notice is the presence of the business services you created in the previous section. These services will be automatically injected by Grails. Actually, they were injected by the Spring framework, one of the technologies that Grails is built on. You simply follow the naming convention ( searchService for the instance variable that will be used to reference an instance of the SearchService class and so on), and Grails does everything for you. The API presents four operations (actions) that can be called: stories, submit, digg, and bury. In each case, you delegate to a business service, take the result of that call and serialize it to XML that you send to the client. Grails makes this rendering easy, just using the render function and an XML converter. Finally, notice that the submit, digg, and bury actions all use the params object. This is a hashtable of the HTTP request parameters and their values. For the digg and bury, you simply retrieve the id parameter. For the submit action, you passed the whole params object to the constructor of the Story class. This uses Grails data binding—as long as the parameter names match up to the property names of the class, Grails will set them for you. This is just another case of how Grails makes development easier. You have written a very minimal amount of code, but that was all you needed to create the services and expose them as Web services. Now you can create a rich presentation layer that uses these services. Presentation You have used an SOA for the back end of the application. This will allow you to create many different kinds of presentation layers on top of it. You will first build a Flash-based user interface using the Flex framework. However, you could just as easily use any other client-side presentation technology. You could even create a "thick" desktop client. However, there is no need, as you will be able to get a rich user experience from a Web client. Take a look at some simple use cases below, and build the user interface for them using Flex. Let's get started by listing all of the stories. Listing stories To list the stories, you know what API to call from the back end. Or do you? When you created the back end, did you ever map any particular URLs to your API controller and its methods? Obviously you did not, but once again Grails will make your life easier. It uses convention over configuration for this. So, to call the stories action on the API controller for the digg application, the URL will be http://<root>/digg/api/stories. Since this is a RESTful Web service, this should be an HTTP GET. You can do this in your browser directly and get XML that looks something like the XML shown in Listing 5. Listing 5. Sample XML response <?xml version="1.0" encoding="UTF-8"?><list> <story id="12"> <category>technology</category> <description>This session discusses approaches and techniques to implementing Data Visualization and Dashboards within Flex 3.</description> <link>- 2008-data-visualization-and.php</link> <tags>flash, flex</tags> <title>Data Visualization and Dashboards</title> <votesAgainst>0</votesAgainst> <votesFor>0</votesFor> </story> <story id="13"> <category>technology</category> <description>Make your code snippets like really nice when you blog about programming.</description> <link> syntax-highlighting-and-code-snippets.html</link> <tags>programming, blogging, javascript</tags> <title>Syntax Highlighting and Code Snippets in a blog</title> <votesAgainst>0</votesAgainst> <votesFor>0</votesFor> </story> <story id="14"> <category>miscellaneous</category> <description>You need a get notebook if you are going to take good notes.</description> <link> sweet_decay.html</link> <tags>notebooks</tags> <title>Sweet Decay</title> <votesAgainst>0</votesAgainst> <votesFor>0</votesFor> </story> <story id="16"> <category>technology</category> <description>If there was one thing I could teach every engineer, it would be how to market. </description> <link></link> <tags>programming, funny</tags> <title>The One Thing Every Software Engineer Should Know</title> <votesAgainst>0</votesAgainst> <votesFor>0</votesFor> </story> </list> Obviously, the exact contents depend on what data you have saved in your database. The important thing to note is the structure of how Grails serializes the Groovy objects as XML. Now you are ready to write some ActionScript code for working with the services. Data access One of the nice things about moving all of the presentation tier to the client is that it is much cleaner architecturally. It is easy to follow a traditional model-view-controller (MVC) paradigm now, since nothing gets spread out between the server and client. So you start with a model for representing the data structure and then encapsulate access to the data. This is shown in the Story class in Listing 6. Listing 6. Story class public class Story extends EventDispatcher { private static const LIST_URL:String = ""; [Bindable] public var id:Number; [Bindable] public var title:String; [Bindable] public var link:String; [Bindable] public var category:String; [Bindable] public var description:String; [Bindable] public var tags:String; [Bindable] public var votesFor:int; [Bindable] public var votesAgainst:int; private static var listStoriesLoader:URLLoader; private static var dispatcher:Story = new Story(); public function Story(data:XML=null) { if (data) { id = data.@id; title = data.title; link = data.link; category = data.category; description = data.description; tags = data.tags; votesFor = Number(data.votesFor); votesAgainst = Number(data.votesAgainst); } } } Listing 6 shows the basics of the Story class. It has several fields, corresponding to the data structure you will get from the service. The constructor takes an optional XML object and populates its fields from that object. Notice the succinct syntax for accessing the XML data. ActionScript implements the E4X standard as a simplified way of working with XML. It is similar to XPath, but uses a syntax that is more natural for object-oriented programming languages. Also notice that each property has been decorated with [Bindable]. This is an ActionScript annotation. It allows a UI component to be bound to the field, so that if the field changes, the UI is updated automatically. Finally, notice the static variable, listStoriesLoader. This is an instance of URLLoader, an ActionScript class used for sending HTTP requests. It is used by a static method in the Story class for loading all of the stories via the API. This is shown in Listing 7. Listing 7. List Stories method public class Story extends EventDispatcher { public static function list(loadHandler:Function, errHandler:Function=null):void { var req:URLRequest = new URLRequest(LIST_URL); listStoriesLoader = new URLLoader(req); dispatcher.addEventListener(DiggEvent.ON_LIST_SUCCESS, loadHandler); if (errHandler != null) { dispatcher.addEventListener(DiggEvent.ON_LIST_FAILURE, errHandler); } listStoriesLoader.addEventListener(Event.COMPLETE, listHandler); listStoriesLoader.addEventListener(IOErrorEvent.IO_ERROR, listErrorHandler); listStoriesLoader.load(req); } private static function listHandler(e:Event):void { var event:DiggEvent = new DiggEvent(DiggEvent.ON_LIST_SUCCESS); var data:Array = []; var storiesXml:XML = XML(listStoriesLoader.data); for (var i:int=0;i<storiesXml.children().length();i++) { var storyXml:XML = storiesXml.story[i]; var story:Story = new Story(storyXml); data[data.length] = story; } event.data = data; dispatcher.dispatchEvent(event); } } The list method is what the controller will be able to invoke. It sends off an HTTP request and registers event listeners for when that request completes. This is needed because any HTTP request in Flash is asynchronous. When the request completes, the listHandler method is invoked. This parses through the XML data from the service, once again using E4X. It creates an array of Story instances and attaches this array to a custom event that is then dispatched. Take a look at that custom event in Listing 8. Listing 8. DiggEvent public class DiggEvent extends Event { public static const ON_STORY_SUBMIT_SUCCESS:String = "onStorySubmitSuccess"; public static const ON_STORY_SUBMIT_FAILURE:String = "onStorySubmitFailure"; public static const ON_LIST_SUCCESS:String = "onListSuccess"; public static const ON_LIST_FAILURE:String = "onListFailure"; public static const ON_STORY_VOTE_SUCCESS:String = "onStoryVoteSuccess"; public static const ON_STORY_VOTE_FAILURE:String = "onStoryVoteFailure"; public var data:Object = {}; public function DiggEvent(type:String, bubbles:Boolean=false, cancelable:Boolean=false) { super(type, bubbles, cancelable); } } Using a custom event class is a common paradigm in ActionScript development, because any server interactions have to be asynchronous. The controller can call the method on your model class and register its own event handlers that look for the custom events. You can decorate the custom event with extra fields. In this case you just added on a data field that is generic and can be reused. Now that you have looked at the model for the presentation tier, look at how it gets used by the controller. Application controller The controller is responsible for coordinating calls to the model, providing the model to the view, and responding to events from the view. Take a look at the controller code in Listing 9. Listing 9. DiggController public class DiggController extends Application { [Bindable] public var stories:ArrayCollection; [Bindable] public var subBtnLabel:String = 'Submit a new Story'; public function DiggController() { super(); init(); } private function init():void { Story.list(function(e:DiggEvent):void{ stories = new ArrayCollection(e.data as Array);}); } } The controller extends the core Flex class Application. This is an idiomatic approach to Flex that lets you associate the controller to its view in a simple way, as you will see in the next section. The controller has a collection of stories that is bindable. This will let the collection be bound to UI controls. Once the application loads, it immediately calls the Story.list method. It passes an anonymous function or lambda, as a handler to the Story.list method. The lambda expression simply dumps the data from the custom event into the stories collection. Now take a look at how this is used in the view. The view Earlier I mentioned that the controller extended the Flex Application class. This is the base class of any Flex application. Flex allows you to use MXML (an XML dialect) to declaratively create the UI. The controller is able to leverage this syntax, as shown in Listing 10. Listing 10. User interface <?xml version="1.0" encoding="utf-8"?> <ctrl:DiggController xmlns: <mx:Script> <![CDATA[ import org.developerworks.digg.Story; private function digg():void { this.diggStory(results.selectedItem as Story); } private function bury():void { this.buryStory(results.selectedItem as Story); } ]]> </mx:Script> <ctrl:states> <mx:State <mx:AddChild <works:StoryEditor </mx:AddChild> </mx:State> </ctrl:states> <mx:DataGrid <mx:HBox <mx:Button <mx:Button <mx:Button </mx:HBox> </ctrl:DiggController> Notice that the root document uses the "ctrl" namespace. This is declared to point to the "controllers" folder, and that is where you put your controller class. Thus everything from the controller class is available inside the UI code. For example, you have a data grid whose dataProvider property is set stories. This is the stories variable from the controller class. It directly binds to the DataGrid component. Now you can compile the Flex code into a SWF file. You just need a static HTML file for embedding the SWF. Running it will look something like Figure 1. Figure 1. The Digg application This is the default look and feel of a Flex application. You can easily style the colors using standard CSS like you would use for an HTML application. You can also customize the DataGrid component, specifying the columns, their order, and so on. The "Digg the Story!" and "Bury the Story!" buttons both call functions on the controller class. Also, notice that you wire-up a controller function to the double-click event for the DataGrid. The controller methods all use the model, just as you would expect in an MVC architecture. The final button, "Submit a new Story", uses several key features of Flex. Take a look at how it works. Submit a story As you can see in Listing 10, clicking the "Submit a new Story" button invokes the toggleSubmitStory method on the controller class. The code is shown in Listing 11. Listing 11. The toggleSubmitStory method public class DiggController extends Application { public function toggleSubmitStory():void { if (this.currentState != 'SubmitStory') { this.currentState = 'SubmitStory'; subBtnLabel = 'Nevermind'; } else { this.currentState = ''; subBtnLabel = 'Submit a new Story'; } } } This function changes the currentState property of the application to SubmitStory. Looking back at Listing 10, you can see where this state is defined. States allow you to add or remove components, or to set properties on existing components. In this case you add a new component to the UI. That component happens to be a custom component for submitting a story. It is shown in Listing 12. Listing 12. The StoryEditor component <?xml version="1.0" encoding="utf-8"?> <mx:VBox xmlns: <mx:Form> <mx:FormHeading <mx:FormItem <mx:TextInput </mx:FormItem> <mx:FormItem <mx:TextInput </mx:FormItem> <mx:FormItem <mx:ComboBox </mx:FormItem> <mx:FormItem <mx:TextArea </mx:FormItem> <mx:FormItem <mx:TextInput </mx:FormItem> <mx:Button </mx:Form> <mx:Binding <mx:Binding <mx:Binding <mx:Binding <mx:Binding </mx:VBox> Listing 12 only shows the UI elements for the custom component. It is just a simple form, but notice the binding declarations in it. This allows you to directly bind the UI form elements to a Story instance, called story. That is declared in a script block, as shown in Listing 13. Listing 13. Script for StoryEditor <?xml version="1.0" encoding="utf-8"?> <mx:VBox xmlns: <mx:Script> <![CDATA[ import mx.controls.Alert; import org.developerworks.digg.*; [Bindable] private var story:Story = new Story(); public var successHandler:Function; private function submitStory():void { story.addEventListener(DiggEvent.ON_STORY_SUBMIT_SUCCESS, successHandler); story.addEventListener(DiggEvent.ON_STORY_SUBMIT_FAILURE, errorHandler); story.save(); // reset story = new Story(); } private function errorHandler(e:Event):void { Alert.show("Fail! : " + e.toString()); } ]]> </mx:Script> </mx:VBox> The script block acts as the controller code for the component. You could easily put both into a separate controller class for the component, since both styles are common in Flex. Going back to Figure 1, you can click on the "Submit new Story" button, and it will show the custom component, as shown in Figure 2. Figure 2. Custom component displayed You can add a new story using the component. It will call the back-end service, which will return XML for the story. That is, in turn, converted back to a Story instance and added to the stories collection that is bound to the DataGrid, making it automatically appear in the UI. With that, the presentation code is complete and hooked up to the back-end service. Summary In this article you have seen how easy it is to create a Web service with Grails. You never had to write any SQL to create a database or to read and write from it. Grails makes it easy to map URLs to Groovy code and to invoke services and create XML for the Web service. This XML is easily consumed by a Flex front end. You have seen how to create a clean MVC architecture on the front end and how to use many sophisticated features of Flex, like E4X, data-binding, states, and custom components. In Part 2 of this series, explore using a JavaScript-based UI for the service using the Google Web Toolkit. Downloads Resources Learn - "Apache Geronimo on Grails" (Michael Galpin, developerWorks, July 2008): The application created in this article can be deployed to any application server. See an example in this developerWorks article. - "Mastering Grails: Build your first Grails application" (Scott Davis, developerWorks, January 2008): Get an introduction to creating Grails applications. - "Mastering Grails: GORM: Funny name, serious technology" (Scott Davis, developerWorks, February 2008): See the power of GORM in action. - "RESTful Web services and their Ajax-based clients" (Shailesh K. Mishra, developerWorks, July 2007): Learn about combining RESTful Web services, like the ones developer here, with Ajax applications. - "Introducing Project Zero: RESTful applications in an SOA" (Roland Barcia and Steve Ims, developerWorks, January 2008): See how REST based services fit in perfectly in a SOA system. - Grails manual: Read this for Grails questions. - "Integrating Flex into Ajax applications" (Brice Mason, developerWorks, July 2008): Read about an example of Flex and Ajax working together. - ASDocs for Flex: Check out this language reference for the classes in ActionScript and Flex. - "Mastering Grails: Changing the view with Groovy Server Pages" (Scott Davis, developerWorks, March 2008): Don't like the UI of this app? Learn all the ways to change it. - "Practically Groovy: Reduce code noise with Groovy" (Scott Hickey, developerWorks, September 2006): Are you a fan of the succinctness that Groovy provides? Learn all about its concise syntax. - "Practically Groovy: Mark it up with Groovy Builders" (Andrew Glover, developerWorks, April 2005): Groovy is another promising language that compiles to Java bytecode. Read about creating XML with it in this developerWorks article. - Grails.org: The best place for Grails information is the project's site. - The Grails manual: Every Grails developer's best friend. - A series of rough benchmarks: See how Grails has significant advantages over Rails by advantages being on top of Java, Hibernate, and Spring. - "Understand Geronimo's deployment architecture" (Hemapani Srinath Perera, developerWorks, August 2005): Deploying an application on Geronimo is easy, but there is a lot that goes on. Learn all about what it takes to deploy an application on Geronimo. - "Remotely deploy Web applications on Apache Geronimo" (Michael Galpin, developerWorks, May 2006): Find out how to deploy your Grails applications to remote instances of Geronimo. - "Invoke dynamic languages dynamically, Part 1: Introducing the Java scripting API" (Tom McQueeney, developerWorks, September 2007): Learn about other alternative languages running on the JVM in this developerWorks article. - "Build an Ajax-enabled application using the Google Web Toolkit and Apache Geronimo" (Michael Galpin, developerWorks, May 2007): See how Geronimo can be used with the Google Web Toolkit. - Visit Digg.com. - developerWorks Web development zone: Expand your site development skills with articles and tutorials that specialize in Web technologies. - developerWorks Open source zone: Visit us for extensive how-to information, tools, and project updates to help you develop with open source technologies and use them with IBM products. - developerWorks podcasts: Listen to interesting interviews and discussions for software developers. - developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts. Get products and technologies - Grails: This article uses Grails version 1.0.3 - Get the Flex 3 SDK. - Get Adobe Flash Player Version 10 or later. - Java SDK: This article uses Java SE 1.6_05. -
http://www.ibm.com/developerworks/web/library/wa-riagrails1/
CC-MAIN-2017-09
en
refinedweb
When submitting sensitive data to a server over HTTP using HTTP Post Request it is highly recommended to use encryption or any sort of obfuscation of your own using xor and rotation etc... to encapsulate data inside your http key/value form data. A c++ client can achieve this thanks to the great C++ library Crypto++. There are various encryption type out there some more stronger than other... Which means some can be broken super fast some will require more CPU time. I like asymetric encryption in a sense that the client doesn't store a unique key which has to be hidden inside the client code but use a key pair instead and just the public key is stored inside the client whereas the server will store the private key... Although, I have not found any PHP implementation of RSA so far and I think mcrypt didn't support RSA for copyright issues I guess... I would love to see a complete support of crypto++ written in PHP or available as a PHP module. If you find anything out there let me know. As a great alternative, since mcrypt support ECB a DES_EDE3 encryption solution which is also supported by crypto++ here is a good way to encrypt data in C++ and decrypt data in PHP. Here we go ! Client Side using ECB - DES_EDE3 ================================== #include "cryptlib.h" #include "modes.h" #include "des.h" #include "base64.h" <-- any base64 encoder/decoder i found the usage of crypto++ base64 class a bit hard to use since you have to know how many byte are taken to encode a byte in base 64... #include "hex.h" // Encode the data using the handy Crypto++ base64 encoder. Base64 uses // 3 characters to store 2 characters. const int BUFFER_LENGTH = 255; byte plaintext[BUFFER_LENGTH]; byte ciphertext[BUFFER_LENGTH]; byte newciphertext[BUFFER_LENGTH]; byte decrypted[BUFFER_LENGTH]; CryptoPP::Base64Encoder base64Encoder; CBase64Coding base64Coder; CString MySensitiveDataUncrypted; CString MySensitiveData; // Set up the same key and IV const int KEY_LENGTH = 24; const int BLOCK_SIZE = CryptoPP::DES::BLOCKSIZE; byte key[KEY_LENGTH], iv[CryptoPP::DES::BLOCKSIZE]; memset( key, 0, KEY_LENGTH); memcpy( key, "012345678901234567890123", KEY_LENGTH ); memset( iv, 0, CryptoPP::DES::BLOCKSIZE); memcpy( iv, "01234567", CryptoPP::DES::BLOCKSIZE ); memset( plaintext, 0, BUFFER_LENGTH); memset( ciphertext, 0, BUFFER_LENGTH); memset( newciphertext, 0, BUFFER_LENGTH); strcpy((char*)plaintext,MySensitiveDataUncrypted.GetBuffer(0)); // now encrypt CryptoPP::ECB_Mode ecbEncryption.ProcessString(newciphertext, plaintext, BUFFER_LENGTH); // your own base64 encoder/decoder base64Coder.Encode((char *)newciphertext,BUFFER_LENGTH,(char *)ciphertext); MySensitiveData.Format(_T("%s"),ciphertext); // MySensitiveData can now be send over http Server Side in PHP using ECB - DES_EDE3 ========================================= // $MyBase64EncodedSecretString will receive/store the encrypted string which will also be base64Encoded for HTTP protocol convenience $key = "012345678901234567890123"; $iv = "01234567"; // Set up an "encryption" descriptor. This is basically just an object that // encapsulates the encryption algorithm. 'tripledes' is the name of the // algorithm, which is simply the DES algorithm done three times back to // back. 'ecb' describes how to encrypt different blocks. See, DES // actually only encrypts 8-byte blocks at a time. To encrypt more than 8 // bytes of data, you break the data up into 8-byte chunks (padding the // last chunk with NULL, if need be), and then encrypt each block // individually. Now, ECB (which stands for "Electronic Code Book", for // whatever that's worth) means that each 8-byte block is encrypted // independently. This has pros and cons that I don't care to discuss. // The other option is CBC ("Cipher Block Chaining") which links the blocks, // such as by XORing each block with the encrypted result of the previous // block. Security geeks probably really get excited about this, but for my // needs, I don't really care. $td = mcrypt_module_open( 'tripledes', '', 'ecb', '' ); mcrypt_generic_init( $td, $key, $iv ); // Grab some interesting data from the descriptor. // $maxKeySize = 24, meaning 24 bytes // $maxIVSize = 8, meaning 8 bytes $maxKeySize = mcrypt_enc_get_key_size( $td ); $maxIVSize = mcrypt_enc_get_iv_size( $td ); //echo "maxKeySize=$maxKeySize, maxIVSize=$maxIVSize\n"; // let's decrypt it and verify the result. Because DES pads // the end of the original block with NULL bytes, let's trim those off to // create the final result. $MyEncodedSecretString = base64_decode( $MyBase64EncodedSecretString ); $MyDecodedString = rtrim( mdecrypt_generic( $td, $MyEncodedSecretString ), "\0" ); // And finally, clean up the encryption object mcrypt_generic_deinit($td); mcrypt_module_close($td); Client Side Stronger Encryption using RSA ========================================= First you will need to generate a public/private key pair using crypto++ keygen console application then your client code should be something like // Client Side Using RSA #include "cryptlib.h" #include "rsa.h" #include "hex.h" #include "randpool.h" #include "filesource.h" CString MyNotverySecretStringInMemory; CString MySensitiveData; char pubFilename[128]; char seed[1024], message[1024]; // MAX = 19999991 strcpy(seed,"12345"); CString tmpPath; TCHAR appPath[MAX_PATH]; ::GetModuleFileName(NULL,appPath,MAX_PATH); tmpPath = appPath; tmpPath = tmpPath.Left(tmpPath.ReverseFind('\\')+1); tmpPath += "public.key"; // 1024 key length for higher security. strcpy(pubFilename,tmpPath.GetBuffer(0)); strcpy(message,MyNotverySecretStringInMemory.GetBuffer(0)); CryptoPP::FileSource pubFile(pubFilename, true, new CryptoPP::HexDecoder); CryptoPP::RSAES_OAEP_SHA_Encryptor pub(pubFile); CryptoPP::RandomPool randPool; randPool.IncorporateEntropy((byte *)seed, strlen(seed)); std::string result; CryptoPP::StringSource(message, true, new CryptoPP::PK_EncryptorFilter(randPool, pub, new CryptoPP::HexEncoder(new CryptoPP::StringSink(result)))); MySensitiveData.Format(_T("%s"),result.c_str()); // Server code will need to use private.key to decode the hexencode rsa string Didn't find any PHP implementation yet if anyone heard about one shoot me an email. Use it at your own certitude :)) to think your solution is going to be more secure. I would say one sentence about security. All is a matter of time given to put yourself or your solution in a "SAFE" zone until the code is broken. The longer time it takes to break the solution the more bullet proof your solution is. 2 comments: Hello, You can use the openssl set of functions. OpenSSL allows for certificate exchange and full RSA encryption. Cheers, Willo some of us aren't so lucky as to have a host that has mcrypt installed. here's a pure-php implementation of des that you should use instead: ...or better yet, AES:
http://younsi.blogspot.com/2008/11/crypto-to-php.html
CC-MAIN-2017-09
en
refinedweb
Hi everybody, I started with Visual C++ and stumbled upon a small problem. I began with the tutorial which can be found here: Link. This all works well. I'm getting my application with the text in the middle. Perfect. Now I want a menu with an option to close the application. I created a menu with the resource editor in Visual C++. After that I got a header file and a script file. The header file contains: And the most important parts in the script are:And the most important parts in the script are:Code://{{NO_DEPENDENCIES}} // Microsoft Developer Studio generated include file. // Used by recource.rc // #define IDR_MENU1 101 // Next default values for new objects // #ifdef APSTUDIO_INVOKED #ifndef APSTUDIO_READONLY_SYMBOLS #define _APS_NEXT_RESOURCE_VALUE 102 #define _APS_NEXT_COMMAND_VALUE 40001 #define _APS_NEXT_CONTROL_VALUE 1000 #define _APS_NEXT_SYMED_VALUE 101 #endif #endif I know, this doesn't close the application, but I wanted to see if the menu would show up at all.I know, this doesn't close the application, but I wanted to see if the menu would show up at all.Code:#include "resource.h" IDR_MENU1 MENU DISCARDABLE BEGIN MENUITEM "&File", 65535 END I was under the assumption that I can make the menu visible by simple stating: But..... This isn't the case.But..... This isn't the case.Code:windowClass.lpszMenuName = MAKEINTRESOURCE("IDR_MENU1"); What am I doing wrong here? Any help is much appreciated.
https://cboard.cprogramming.com/windows-programming/77159-menu-not-showing.html
CC-MAIN-2017-09
en
refinedweb
os-client-config¶ os - If you have a config file, you will get the clouds listed in it - If you have environment variables, you will get a cloud named envvars - If you have neither, you will get a cloud named defaults with base defaults Config Files¶' Site Specific File Locations¶. Auth Settings¶. Splitting Secrets¶ SSL Settings¶ Cache Settings¶¶. Per-region settings¶ Usage¶ The simplest and least useful thing you can do is: python -m os_client_config.config Which will print out whatever if finds for your config. If you want to use it from python, which is much more likely what you want to do, things like: Get a named cloud. import os_client_config cloud_config = os_client_config.OpenStackConfig().get_one_cloud( 'internap', region_name='ams01') print(cloud_config.name, cloud_config.region, cloud_config.config) Or, get all of the clouds. import os_client_config cloud_config = os_client_config.OpenStackConfig().get_all_clouds() for cloud in cloud_config: print(cloud.name, cloud.region, cloud.config) argparse¶ If you’re using os-client-config from a program that wants to process command line options, there is a registration function to register the arguments that both os-client-config and keystoneauth know how to deal with - as well as a consumption argument. import argparse import sys import os_client_config cloud_config = os_client_config.OpenStackConfig() parser = argparse.ArgumentParser() cloud_config.register_argparse_arguments(parser, sys.argv) options = parser.parse_args() cloud = cloud_config.get_one_cloud(argparse=options) Constructing OpenStack SDK object¶ If what you want to do is get an OpenStack SDK Connection and you want it to do all the normal things related to clouds.yaml, OS_ environment variables, a helper function is provided. The following will get you a fully configured openstacksdk instance. import os_client_config sdk = os_client_config.make_sdk() If you want to do the same thing but on a named cloud. import os_client_config sdk = os_client_config.make_sdk(cloud='mtvexx') If you want to do the same thing but also support command line parsing. import argparse import os_client_config sdk = os_client_config.make_sdk(options=argparse.ArgumentParser()) It should be noted that OpenStack SDK has ways to construct itself that allow for additional flexibility. If the helper function here does not meet your needs, you should see the from_config method of openstack.connection.Connection Constructing shade objects¶ If what you want to do is get a shade OpenStackCloud object, a helper function that honors clouds.yaml and OS_ environment variables is provided. The following will get you a fully configured OpenStackCloud instance. import os_client_config cloud = os_client_config.make_shade() If you want to do the same thing but on a named cloud. import os_client_config cloud = os_client_config.make_shade(cloud='mtvexx') If you want to do the same thing but also support command line parsing. import argparse import os_client_config cloud = os_client_config.make_shade(options=argparse.ArgumentParser()) Constructing REST API Clients¶ What if you want to make direct REST calls via a Session interface? You’re in luck. A similar interface is available as with openstacksdk and shade. The main difference is that you need to specify which service you want to talk to and make_rest_client will return you a keystoneauth Session object that is mounted on the endpoint for the service you’re looking for. import os_client_config session = os_client_config.make_rest_client('compute', cloud='vexxhost') response = session.get('/servers') server_list = response.json()['servers'] Constructing Legacy Client objects¶ If you want get an old-style Client object from a python-*client library, and you want it to do all the normal things related to clouds.yaml, OS_ environment variables, a helper function is also provided. The following will get you a fully configured novaclient instance. import os_client_config nova = os_client_config.make_client('compute') If you want to do the same thing but on a named cloud. import os_client_config nova = os_client_config.make_client('compute', cloud='mtvexx') If you want to do the same thing but also support command line parsing. import argparse import os_client_config nova = os_client_config.make_client( 'compute', options=argparse.ArgumentParser()) If you want to get fancier than that in your python, then the rest of the API is available to you. But often times, you just want to do the one thing. Source¶ - Free software: Apache license - Documentation: - Source: - Bugs:
https://docs.openstack.org/developer/os-client-config/
CC-MAIN-2017-09
en
refinedweb
This is your resource to discuss support topics with your peers, and learn from each other. 01-22-2013 10:34 AM in Flash Pro. I have library where i store items how this is achieved in "flash builder"? I really thought these simple things gonna be sraightforward. Im moving from Flash Pro so , Flash Builder is new to me you embed them (like in my code snippet) or you load them (like jtegen said). embedding is faster, with loading you can load from web etc. as well you don't have to use ANE for something this simple 01-22-2013 10:58 AM 01-22-2013 11:00 AM 01-22-2013 05:53 PM - edited 01-22-2013 05:55 PM Thanks for answers! OK so Im getting somewhere - i want to use pure ActionScript approach and try using embeding first. (maybe latter will try loading) But still things dont work... thats what I do: 1. Created folder assets, right mouse click --> Import-->General-->File System, import image.png, hit Refresh. 2. Write code: package { import flash.display.Bitmap; import flash.display.Sprite; [SWF(width="1024", height="600", backgroundColor="#404040", frameRate="60")] public class kkk extends Sprite { [Embed(source='assets/image.png')] public static const MyImage : Class; public function kkk() { initBGR(); } private function initBGR():void { var myImg:Bitmap = new MyImage(); var mySprite:Sprite = new Sprite(); mySprite.addChild(myImg); mySprite.cacheAsBitmap = true; addChild(mySprite); } } } the line [Embed(source='assets/image.png')] is underlined in red and i ger error: Could not find Embed source 'assets/image.png'. Searched 'C:\Documents and Settings\mmm\grybasssssssssss\kkk\src\assets\image What I am doing wrong? P.S. I can see image.png inside folder assets so its really there 01-22-2013 06:09 PM Yeees! I created new workspace, and created folder assets INSIDE folder src and it works perfectly! Thanks to all who helped me out, Cheers!
https://supportforums.blackberry.com/t5/Adobe-AIR-Development/Help-how-to-put-bitmap-on-screen/m-p/2113769
CC-MAIN-2017-09
en
refinedweb
Hi guys I got a generic question (looking for a generic answer): I have to load a DLL into C++, I've been playing around all day with it trying to access the methods (I have a big manual for the DLL), so far I have tried the next: 1.- #import "path\NameofDLL.dll" using namespace NameofDLLlib; 2.- turning the DLL to Lib (with DLL to Lib) and just calling a #include "NameofDLL.h" and adding the "NameofDLL.lib" to the linker paths Now I am doing a lot of things wrong but I got no one to go here except lots of internet info. The DLL has a class, when I declare the class in my program I just do: ClassName Class1; and then start calling the methods with Class1.method(); but that is not working So my questions are: What method do you recommend me to import he DLL? How do I call the methods of the DLL class? I tried googling my questions, but I have the excess information syndrome right now
https://www.daniweb.com/programming/software-development/threads/327779/dll-importing
CC-MAIN-2017-09
en
refinedweb
Spring 3.1 M1: Introducing @Profile Introduction In my earlier post announcing Spring 3.1 M1, I discussed the new bean definition profiles feature as applied when using Spring <beans/> XML to configure the container. Today we’ll introduce the new @Profile annotation and see how this same feature can be applied when using @Configuration classes instead of XML. Along the way we’ll cover some best practices for designing @Configuration classes. Recall @Configuration For those unfamiliar with @Configuration classes, you can think of them as a pure-Java equivalent to Spring <beans/> XML files. We’ve blogged about this featureset before, and the reference documentation covers it well. You may want to revisit those resources if you need an introduction or a refresher. As we’ll see in this and subsequent posts, much attention has been given to the @Configuration approach in Spring 3.1 in order to round it out and make it a truly first-class option for those who wish to configure their applications without XML. Today’s post will cover just one of these enhancements: the new @Profile annotation. As with the previous post, I’ve worked up a brief sample where you can follow along and try things out for yourself. You can find it at and all the details for getting set up are in the README. This sample contains both the XML-based configuration covered in the last post, as well as @Configuration classes, in the com.bank.config.xml and com.bank.config.code packages, respectively. The IntegrationTests JUnit test case has been duplicated for each package; this should help you compare and contrast the two styles of bootstrapping the container. From XML to @Configuration Let’s dive in! Our task is simple: take the XML-based application shown previously and port it to an @Configuration style. We started the last post with an XML configuration looking like the following: > And this is straightforward to port into a @Configuration class: src/main/com/bank/config/code/TransferServiceConfig.java @Configuration public class TransferServiceConfig { (); } @Bean public DataSource dataSource() { return new EmbeddedDatabaseBuilder() .setType(EmbeddedDatabaseType.HSQL) .addScript("classpath:com/bank/config/sql/schema.sql") .addScript("classpath:com/bank/config/sql/test-data.sql") .build(); } } Note: The EmbeddedDatabaseBuilder is the component that underlies the <jdbc:embedded-database/> element originally used in the XML. As you can see, it’s quite convenient for use within a @Bean method. At this point, our @Configuration-based unit test would pass with the green bar: src/test/com/bank/config/code/IntegrationTests.java public class IntegrationTests { @Test public void transferTenDollars() throws InsufficientFundsException { AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(); ctx.register(TransferServiceConfig.class); ctx.refresh();)); } } AnnotationConfigApplicationContext is used above, which allows for direct registration of @Configuration and other @Component-annotated classes. This leaves us with a string-free and type-safe way of configuring the container. There’s no XML, which is great, but at this point our application suffers from the same problem we saw in the first post: when the application is deployed into production, a standalone datasource won’t make sense. It will need to be looked up from JNDI. This is no problem. Let’s break the embedded- and JNDI-based datasources out into their own dedicated @Configuration classes: src/main/com/bank/config/code/StandaloneDataConfig.java (); } } src/main/com/bank/config/code/JndiDataConfig.java @Configuration @Profile("production") public class JndiDataConfig { @Bean public DataSource dataSource() throws Exception { Context ctx = new InitialContext(); return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource"); } } At this point we have declared the two different DataSource beans within their own @Profile-annotated @Configuration classes. Just as with XML, these classes and the @Bean methods within them will be skipped or processed based on which Spring profiles are currently active. However, before we can see that in action, we first need to finish our refactoring. We’ve split out the two possible DataSource beans but how can we reference them method from within TransferServiceConfig – specifically it’s accountRepository() method? We have a couple of options, and both begin with understanding that @Configuration classes are candidates for @Autowired injection. This is because, in the end, @Configuration objects are managed as “just another Spring bean” in the container. Let’s take a look: src/main/com/bank/config/code/TransferServiceConfig.java (); } } With the use of the @Autowired annotation above, we’ve asked the Spring container to inject the bean of type DataSource for us, regardless of where it was declared – in XML, in a @Configuration class, or otherwise. Then in the accountRepository() method, the injected dataSource field is simply referenced. This is one way of acheiving modularity between @Configuration classes, and is conceptually not unlike ref-style references between two <bean> elements declared in different XML files. The final step in our refactoring is be to update the unit test to bootstrap not only TransferServiceConfig, but also the JNDI and standalone @Configuration variants of our DataSource bean: src/test/com/bank/config/code/IntegrationTests.java public class IntegrationTests { @Test public void transferTenDollars() throws InsufficientFundsException { AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(); ctx.getEnvironment().setActiveProfiles("dev"); ctx.register(TransferServiceConfig.class, StandaloneDataConfig.class, JndiDataConfig.class); ctx.refresh(); // proceed with assertions as above ... } } Now all of our @Configuration classes are available to the container at bootstrap time, and based on the profiles active (“dev” in this case), @Profile-annotated classes and their beans will be processed or skipped. As a quick note, you could avoid listing out each @Configuration class above and instead tell AnnotationConfigApplicationContext to simply scan the entire .config package, detecting all of our classes in one fell swoop. This is the loose equivalent of loading Spring XML files based on a wildcard (e.g., **/*-config.xml): AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(); ctx.getEnvironment().setActiveProfiles("dev"); ctx.scan("com.bank.config.code"); // find and register all @Configuration classes within ctx.refresh(); However you choose to register your @Configuration classes, at this point our task is complete! We’ve ported our configuration from Spring <beans/> XML to @Configuration classes and bootstrapped the container directly from those classes using AnnotationConfigApplicationContext. Further improving @Configuration class structure Everything works in our application and the JUnit bar is green, but there’s still room for improvement. Recall how a DataSource bean was @Autowired into TransferServiceConfig? This works well, but it’s not terribly clear where the bean came from. As mentioned above, it could be from XML, or from any other @Configuration class. The technique I’ll describe below introduces object-oriented configuration and should further our goals of having a natural Java-based configuration – one that can take full advantage of the power of your IDE. If we think about StandaloneDataConfig and JndiDataConfig, they’re really two clases of the same kind, in that they both declare a method with the following signature: public DataSource dataSource(); All that’s missing, it seems, is an interface unifying the two. Let’s introduce one – we’ll see why shortly below: src/main/com/bank/config/code/DataConfig.java interface DataConfig { DataSource dataSource(); } And of course update the two @Configuration classes to implement this new interface: @Configuration public class StandaloneDataConfig implements DataConfig { ... } @Configuration public class JndiDataConfig implements DataConfig { ... } What does this buy us? Just like we @Autowired the DataSource bean directly into TransferServiceConfig, we an also inject @Configuration instances themselves. Let’s see this in action: src/main/com/bank/config/code/TransferServiceConfig.java @Configuration public class TransferServiceConfig { @Autowired DataConfig dataConfig; // ... @Bean public AccountRepository accountRepository() { return new JdbcAccountRepository(dataConfig.dataSource()); } // ... } This allows us full navigability through the codebase using the IDE. The screenshot below shows the result of pressing CTRL-T on the invocation of dataConfig.dataSource() to get a “Quick Hierarchy” hover: It’s now very easy to ask the question “where was the DataSource bean defined?” and have the answer constrained to a set of types implementing DataConfig. Not bad if we’re trying to do things in a way that is as familiar and useful to Java developers as possible. More advanced use of @Profile Worth a quick mention is that like many Spring annotations, @Profile may be used as a meta-annotation. This means that you may define your own custom annotations, mark them with @Profile, and Spring will still detect the presence of the @Profile annotation as if it had been declared directly. package com.bank.annotation; @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) @Profile("dev") pubilc @interface Dev { } This allows us to mark our @Component classes with the new custom @Dev annotation, rather than being required to use Spring’s @Profile: @Dev @Component public class MyDevService { ... } Or, from the examples above, marking our StandaloneDataConfig with @Dev would work too: @Dev @Configuration public class StandaloneDataConfig { ... } Summary Spring 3.1’s bean definition profiles feature is supported fully across the XML and @Configuration styles. Whichever style you prefer, we hope you’ll find profiles useful. Keep the feedback coming, as it’ll have direct impact on 3.1 M2 which is just around the corner. In the next post we’ll take a deeper look at Spring’s new Environment abstraction and how it helps with regard to managing configuration properties in your applications. Stay tuned!
http://spring.io/blog/2011/02/14/spring-3-1-m1-introducing-profile/
CC-MAIN-2017-09
en
refinedweb
Dive into this Quick Tip and discover how to change the Frame Rate of your movie, while it's running… Final Result Preview Let's take a look at the final result we will be working towards: Step 1: Brief Overview We'll make use of a Slider component to modify the stage framerate property and display a MovieClip to see the changes. Step 2: Set Up Your Flash File Launch Flash and create a new Flash Document, set the stage size to 400x200px and the frame rate to 25fps. Step 3: Interface This is the interface we'll be using, it includes a Slider Component and a MovieClip taken from my Apple Preloader tutorial. You'll also notice some static text below the slider indicating the minimum and maximum FPS. Step 4: Slider Open the Components Panel (Cmd+F7) and drag the Slider component from the User Interface folder, align it to the center in the stage and click the Properties Panel to edit its parameters. Use the data from the image above and prepare for some ActionScript 3… Step 5: ActionScript Create a new ActionScript Class (Cmd+N), save the file as Main.as and start writing: package { import flash.display.Sprite; import fl.events.SliderEvent; public class Main extends Sprite { public function Main():void { //Listen for slider movement slider.addEventListener(SliderEvent.CHANGE, changeFPS); } private function changeFPS(e:SliderEvent):void { //Change the frame rate using the slider value stage.frameRate = e.value; } } } Step 6: Document Class Remember to add the class name to the Class field in the Publish section of the Properties panel. Conclusion Try the demo and experiment with the uses of this feature! I hope you liked this Quick Tip, thank you for reading! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/quick-tip-change-the-frame-rate-at-runtime-using-actionscript-3--active-6967
CC-MAIN-2017-39
en
refinedweb
The set. There are a few ways to do this, but I ended up going with the way Stephen Kaufman showed here: select=”//cities/city[not(@country=preceding-sibling::city/@country)]/@country” /> In the above example, you end up with a variable containing all the unique countries that correspond to the many cities in the list. My challenge was that I was going to use this variable within a map, using Inline XSLT as part of the Scripting functoid. And, my inbound document has elements from multiple namespaces. Using a namespace prefix was not going to work, so I had to write my elements in the “//*[localname()=’node’ namespaceuri()=’namespace’]” way instead of the easier “//node” or (“//ns1:node”) way. Thank goodness for the Visual XPath tool which made testing my Xpath much easier. You can’t just swap the node names in the above Xpath with the “[localname()=” namespaceuri()=”]” equivalent, as there are subtle differences. So given that my XML looked like this: <Results> <QueryRecord> <DomainID></DomainID> <PageName></PageName> </QueryRecord> <QueryRecord> <DomainID></DomainID> <PageName></PageName> </QueryRecord> </Results> … Let’s say I have two nodes I need to use in the Xpath statement: - [local-name()=’QueryRecord’ and namespace-uri()=’′%5D - [local-name()=’DOMAINID’ and namespace-uri()=’′%5D My “converted” Xpath now looks like this: Got that? Yowza. So I’m getting all the unique “domain ids” by grabbing each “domain id” where it doesn’t match a previous instance in the node tree. The main syntax difference between this Xpath and the one at the top is the “*” peppered around. If not included, you’ll get all sorts of does not evaluate to node set errors. Any other “Xpath with namespaces in BizTalk” war stories or tips to share? Have you used XMLSpy to test your xpath? It worked for me and also with Biztalk. Are you aware of muenchian method? I don’t have problem with Biztalk Xpath at all. Sometimes I need to use ns0: in front of the node if Biztalk generate it that way. Usually this happens inside a multipart map. I generate the default map and then use it as a template to write xslt code. That is when ns0 or ns1 or ns2, etc.. come in. For other type of situation I used regular Xpath that is W3C standards and they all work. Can you tell me or even put a small project up somewhere about the pain you have with Biztalk Xpath and namespace. I’ve used to use XmlSpy all the time, but don’t have an active license anymore. I also have regularly used the muenchian method, but for some reason I can’t recall, I wanted to try a different way. But, now that I have it working this way, I may try and get alternative (read: higher performing) ways working as well. The xslt key structure in Muenchian method can do two things. 1. Do the Muenchian method. 2. Looking up predicate very fast using the key. so if you have some code like //Node1[attribute1=’something’] If the key is put on this Node1 and on attribute1, the concept works like an SQL indexed column look up WHERE = ‘something’. I have tested it, it is very fast , to look up some attribute over a 100,000 records took me 50 minutes, when I used the muenchian key to do the lookup(not the grouping, we are in use case #2), I cut this 50 minutes down to 5 minutes. I always use the /*[local-name()=’Node1′]/*[local-name()=’Node2′]/*[local-name()=’Node3′] I’ll avoid namespaces unless there is some ambiguity between the child nodes
https://seroter.wordpress.com/2007/03/19/fun-with-biztalk-xpath-and-namespaces/
CC-MAIN-2017-39
en
refinedweb
Overview The DeploymentManager is a client-side plugin for distributing and controlling deployments in a profile. Compared to the DeploymentManager in 5.0.x there are some differences: The DeploymentPhase has been removed. The usage is now symmetric independent of the boolean isCopyContent flag. loadProfile(ProfileKey key) is not required to be able use the DeploymentManager. Not loading a profile will copy the deployment to the default location specified in deploy/profileservice-jboss-beans.xml In general loadProfile specifies the target profile. A set of profiles which support deployment actions can be obtained using deployMgr.getProfiles(). Not loading a profile or deployMgr.loadProfile(new ProfileKey(ProfileKey.DEFAULT)); is recommended. Example Usage import java.net.URL; import javax.naming.InitialContext; import org.jboss.deployers.spi.management.deploy.DeploymentManager; import org.jboss.deployers.spi.management.deploy.DeploymentProgress; import org.jboss.deployers.spi.management.deploy.DeploymentStatus; import org.jboss.profileservice.spi.ProfileService; public class DeploymentTest { public void deployAndUndeploy(String deploymentName, URL deploymentURL) throws Exception { DeploymentManager deployMgr = getDeploymentManager(); if(deployMgr == null) throw new IllegalStateException("Null deployment manager."); String[] repositoryNames = null; // Distribute a zipped file DeploymentProgress distribute = deployMgr.distribute(deploymentName, deploymentURL, true); // Run distribute.run(); // Check if the deploy failed checkFailed(distribute); // Get the deployed names repositoryNames = distribute.getDeploymentID().getRepositoryNames(); // Start the deployment DeploymentProgress start = deployMgr.start(repositoryNames); // Run start.run(); // checkFailed(start); // Stop the deployment DeploymentProgress stop = deployMgr.stop(repositoryNames); // Run stop.run(); // checkFailed(stop); // Remove the deployment DeploymentProgress remove = deployMgr.remove(repositoryNames); // Run remove.run(); // checkFailed(remove); } void checkFailed(DeploymentProgress progress) throws Exception { DeploymentStatus status = progress.getDeploymentStatus(); if(status.isFailed()) throw new RuntimeException("Failed to deploy", status.getFailure()); } DeploymentManager getDeploymentManager() throws Exception { ProfileService ps = getProfileService(); return ps.getDeploymentManager(); } ProfileService getProfileService() throws Exception { InitialContext ctx = getInitialContext(); return (ProfileService) ctx.lookup("ProfileService"); } InitialContext getInitialContext() throws Exception { return new InitialContext(); } } The main operations basically are: - distribute - distribute a deployment. The boolean flag isCopyContent indicates if the content should be copied in the applications directory. copyContent=false will just deploy the absolute url without touching the deployment. Additionally the distribute action will 'lock' the deployment, so it won't get deployed by the hot-deployment scanner. Note: if isCopyContent is true, a zipped file is expected. Copying a directory is currently not supported. - start starts the application on the server and enables hot-deployment checking for this deployment. - redeploy stops and starts the application without modifying any of the deployment contents. - stop only stops the application on the server, without deleting the file. - remove removes the deployment from the file system if it was distributed with copyContent=true In case copyContent was false, it won't delete the deployment. For starting a deployment distribute() and start() need to be called, as distribute just adds content and start actually deploys the application. To fully remove a deployment stop() and remove() have to be called, as e.g. just calling remove() would only remove the actual deployment - but the deployment could still exist in the temp. Note that DeploymentProgress.run() does not throw server side Exceptions. Errors during the deployment progress are passed to the DeploymentStatus and should be checked after each operation.
https://community.jboss.org/wiki/ProfileServiceDeploymentManagerin5x
CC-MAIN-2015-35
en
refinedweb
NAME boot -- halt or reboot the system SYNOPSIS #include <sys/types.h> #include <sys/systm.h> #include <sys/reboot.h> void boot(int howto); DESCRIPTION The.)
http://manpages.ubuntu.com/manpages/precise/man9/boot.9freebsd.html
CC-MAIN-2015-35
en
refinedweb
You can subscribe to this list here. Showing 1 results of 1 Hi Jonathan, Le lundi 11 novembre 2013 à 09:30 +1030, Jonathan Woithe a écrit : > I think it would be difficult to come up with a reliable way to import > devices settings into a totally different interface. Even if both devices > have matrices one can't easily determine in an automated way whether they > are compatible. Some even have multiple matrix mixers, and choosing the > mapping would be hit and miss. And some use matrix controls for controls > which are not a traditional matrix fader-based mixer. > > My view is that at least initially we shouldn't permit the restoration of > mixer data from one device type to another. > Yes; as I tried to do something for RME, I realized that importing from a device to another one is not so evident. Probably, importing from an EAP device to another EAP one could be quite easily feasible because they are based on the same chip and so share a lot of settings. But for devices with different hardware (even a Dice II to a DICE EAP, for instance), it will be very difficult and unsure. > > This is an "extension"; not sure I will implement it soon :-) > > Understood - I think that it's definitely not something we need to concern > ourselves with at this stage. To support this we'd have to come up with > some way of tagging controls to help the system identify compatible > controls. > Yes, and it was part of my previous questions; the chosen names for the tags should be sufficiently clear (and a little bit "unique") when such import functions could be introduced. Typically, I introduced some tag names specific to the RME device, but it is of course just a draft: feel absolutely free to introduce different tag names. Note that I tried not to obviate import feature in the functions at lowest level; but of course, I cannot guarantee anything. By the way, care when you will test for RME and a have a detailed look to the file produced by ffado-mixer saving first :-). Possibly, since I disabled the Open/Save_as functionality by default, you won't have access to it at all before introducing your own corrections ! :-) > Regards > jonathan Regards, Phil -- Philippe Carriere <la-page-web-of-phil.contact@...>
http://sourceforge.net/p/ffado/mailman/ffado-devel/?viewmonth=201311&viewday=11
CC-MAIN-2015-35
en
refinedweb
Synchronizing processes means timing their work, not in an absolute reference system (giving a precise time in which the process should begin its operations) but in a relative one, where we can schedule which process should work first and which second. Using semaphores for this reveals itself as complex and limited: complex because every process should manage a semaphore for every other process that has to synchronize with it. Limited because it does not allow us the exchange parameters between the processes. Let's consider for example the creation of a new process: this event should be notified to every working process, but semaphores do not allow a process to send such information. The concurrency control of the access to shared resources through semaphores, moreover, can lead to continuous blocking of a process, when one of the other processes involved release the resource and lock it again before others can use it: as we saw, in the world of concurrency programming it is not possible to know in advance which process will be executed and when. These brief notes let us immediately understand that semaphores are an inadequate tool for managing complex synchronization problems. An elegant solution to this matter comes with the use of message queues: in this article we will study the theory of this interprocess communication facility and write a little program using SysV primitives. The use of queues is thus a simple implementation of a mail system between processes: every process has an address with which it can other processes. The process can then read the messages delivered to its box in a preferential order and act accorting to what has been notified. The synchronization of two processes can thus be performed simply using messages between the two: resources will still own semaphores to let the processes know their status, but timing between processes will be performed directly. Immediately we can understand that the use of message queues simplified very much what at the beginning was a extremely complex problem. Before we can implement in C language the message queues it is necessary to speak about another problem related to synchronzation: the need for a communication protocol. This is a simple example of a protocol based on message exchange: two processes A and B are executing concurrently and process different data; once they end their processing the have to merge the results. A simple protocol to rule their interaction could be the following PROCESS B: This protocol is simply extensible to the case of n processes: every process but A works with its own data and then sends a message to A. When A answers every process sends it its results: the structure of the individual processes (except A) has not been modified. The structure at the basis of the system describing a message is called msgbuf ;it is declared in linux/msg.h /* message buffer for msgsnd and msgrcv calls */ struct msgbuf { long mtype; /* type of message */ char mtext[1]; /* message text */ }; struct message { long mtype; /* message type */ long sender; /* sender id */ long receiver; /* receiver id */ struct info data; /* message content */ ... }; To create a new queue a process should call the msgget() function int msgget(key_t key, int msgflg)which receives as arguments an IPC key and some flags, which by now can be set to IPC_CREAT | 0660(create the queue if it does not exist and grant access to the owner and group users), and that returns the queue identifier. As in the previous articles we will assume that no errors will happen, so that we can simplify the code, even if in a future article we will speak about secure IPC code. To send a message to a queue of which we know the identifier we have to use the msgsnd() primitive int msgsnd(int msqid, struct msgbuf *msgp, int msgsz, int msgflg) length = sizeof(struct message) - sizeof(long); To read the messages contained in a queue we use the msgrcv() system call int msgrcv(int msqid, struct msgbuf *msgp, int msgsz, long mtype, int msgflg) Removing a queue can be performed through the use of the msgctl() primitive with the flag IPC_RMID msgctl(qid, IPC_RMID, 0) #include <stdio.h> #include <stdlib.h> #include <linux/ipc.h> #include <linux/msg.h> /* Redefines the struct msgbuf */ typedef struct mymsgbuf { long mtype; int int_num; float float_num; char ch; } mess_t; int main() { int qid; key_t msgkey; mess_t sent; mess_t received; int length; /* Initializes the seed of the pseudo-random number generator */ srand (time (0)); /* Length of the message */ length = sizeof(mess_t) - sizeof(long); msgkey = ftok(".",'m'); /* Creates the queue*/ qid = msgget(msgkey, IPC_CREAT | 0660); printf("QID = %d\n", qid); /* Builds a message */ sent.mtype = 1; sent.int_num = rand(); sent.float_num = (float)(rand())/3; sent.ch = 'f'; /* Sends the message */ msgsnd(qid, &sent, length, 0); printf("MESSAGE SENT...\n"); /* Receives the message */ msgrcv(qid, &received, length, sent.mtype, 0); printf("MESSAGE RECEIVED...\n"); /* Controls that received and sent messages are equal */ printf("Integer number = %d (sent %d) -- ", received.int_num, sent.int_num); if(received.int_num == sent.int_num) printf(" OK\n"); else printf("ERROR\n"); printf("Float numero = %f (sent %f) -- ", received.float_num, sent.float_num); if(received.float_num == sent.float_num) printf(" OK\n"); else printf("ERROR\n"); printf("Char = %c (sent %c) -- ", received.ch, sent.ch); if(received.ch == sent.ch) printf(" OK\n"); else printf("ERROR\n"); /* Destroys the queue */ msgctl(qid, IPC_RMID, 0); } The code I wrote creates a queue used by the son process to send its data to the father: the son generates random numbers, sends them to the father and both print them on the standard output. #include <stdio.h> #include <stdlib.h> #include <linux/ipc.h> #include <linux/msg.h> #include <sys/types.h> /* Redefines the message structure */ typedef struct mymsgbuf { long mtype; int num; } mess_t; int main() { int qid; key_t msgkey; pid_t pid; mess_t buf; int length; int cont; length = sizeof(mess_t) - sizeof(long); msgkey = ftok(".",'m'); qid = msgget(msgkey, IPC_CREAT | 0660); if(!(pid = fork())){ printf("SON - QID = %d\n", qid); srand (time (0)); for(cont = 0; cont < 10; cont++){ sleep (rand()%4); buf.mtype = 1; buf.num = rand()%100; msgsnd(qid, &buf, length, 0); printf("SON - MESSAGE NUMBER %d: %d\n", cont+1, buf.num); } return 0; } printf("FATHER - QID = %d\n", qid); for(cont = 0; cont < 10; cont++){ sleep (rand()%4); msgrcv(qid, &buf, length, 1, 0); printf("FATHER - MESSAGE NUMBER %d: %d\n", cont+1, buf.num); } msgctl(qid, IPC_RMID, 0); return 0; } 2003-03-05, generated by lfparser version 2.35
http://www.linuxfocus.org/English/March2003/article287.shtml
CC-MAIN-2015-35
en
refinedweb
Home -> Community -> Mailing Lists -> Oracle-L -> RE: Privileges Dave, By "relevant section" I assume you mean under GRANT in "Oracle8i SQL Reference". Table 11-1 "System Privileges" is self-explanatory while tables 11-3 and 11-4 cover object privileges. For a full explanation of what any particular privilege implies, you'll have to dig into the docs and find the sections dealing with the "thing" on which the privilege is based. For example, you'll have to read about Application Context to understand what "CREATE ANY CONTEXT" (Create any context namespace) implies - see "Oracle8i Application Developer's Guide - Fundamentals" for details on Application Context and Fine-Grained Access Control. I'm not sure that a single list could explain enough without becoming a lengthy tome, though Oracle certainly could have done better (at least some links to relvant sections) in the GRANT tables of privileges. Jack -----Original Message----- Sent: Friday, October 19, 2001 7:05 AM To: Multiple recipients of list ORACLE-L Hi, Can anyone point me in the direction of a list of privileges (version 8.1.7) and their meanings. Obviously most are self explanatory but there are a few I'm not too sure about. I tried using the Java Search facility in the Oracle documentation but this only points me to the relevant section and does not highlight where in the section the search words are. I have also tried Metalink and could not find a definitive list. Any help would be appreciated. Dave Leach -- - 08:04:36 CDT (or the name of mailing list you want to be removed from). You may also send the HELP command for other information (like subscribing). Original text of this message
http://www.orafaq.com/maillist/oracle-l/2001/10/19/0918.htm
CC-MAIN-2015-35
en
refinedweb
Strange syntax Discussion in 'C Programming' started by seenutn@g: - 701 - August Derleth - Oct 25, 2003 Syntax highligth with textile: Syntax+RedCloth ?gabriele renzi, Dec 30, 2005, in forum: Ruby - Replies: - 2 - Views: - 305 - gabriele renzi - Dec 31, 2005 [ANN] SqlStatement 1.0.0 - hide the syntax of SQL behind familiarruby syntaxKen Bloom, Oct 9, 2006, in forum: Ruby - Replies: - 3 - Views: - 343 Syntax bug, in 1.8.5? return not (some expr) <-- syntax error vsreturn (not (some expr)) <-- fineGood Night Moon, Jul 22, 2007, in forum: Ruby - Replies: - 9 - Views: - 412 - Rick DeNatale - Jul 25, 2007 Syntax Checker that's better than the normal syntax checkerJacob Grover, Jul 15, 2008, in forum: Ruby - Replies: - 5 - Views: - 442 - Jacob Grover - Jul 18, 2008
http://www.thecodingforums.com/threads/strange-syntax.557446/
CC-MAIN-2015-35
en
refinedweb
Originally posted by vini singh: its showing compiler error as protected variable cant be accessed through the parent reference in subclass. A subclass can access the superclass's protected members only through a reference of the subclass's type The reason for this rule is simple. Imagine that parent class P has subclasses C1 and C2. There's no reason to think that the classes C1 and C2 have any special security relationship. But without this rule, an instance of C1 would have access to some otherwise-private variables in instances of C2. class C1 extends P { } class C2 extends P { private int c2variable=12; } class C2 { protected int c2variable=12; } public class TestC2 { C2 c2instance=null; public int createInstance(C2 c2instance) { this.c2instance=c2instance; this.c2instance.c2variable=20; return this.c2instance.c2variable; } public static void main(String[] args) { TestC2 test=new TestC2(); C2 c2=new C2(); System.out.println("Original c2 var value is "+c2.c2variable); System.out.println("Value :"+test.createInstance(c2)); } } C:\sai>java TestC2 Original c2 var value is 12 Value :20 C:\sai>javac TestC2.java TestC2.java:8: c2variable has private access in C2 this.c2instance.c2variable=20; ^ TestC2.java:9: c2variable has private access in C2 return this.c2instance.c2variable; ^ TestC2.java:16: c2variable has private access in C2 System.out.println("Original c2 var value is "+c2.c2variable); ^ 3 errors public class TestC3 extends C2 { public static void main(String[] args) { TestC3 test=new Testc3(); test.modifyVariable(); } public void modifyVariable() { super.c2variable=20; System.out.println("In method c2variable modified to :"+super.c2variable); } } C:\sai>javac TestC3.java TestC3.java:5: cannot find symbol symbol : class Testc3 location: class TestC3 TestC3 test=new Testc3(); ^ TestC3.java:11: c2variable has private access in C2 super.c2variable=20; ^ TestC3.java:12: c2variable has private access in C2 System.out.println("In method c2variable modified to :"+super.c2variable); ^ 3 errors package n; class superc { protected int x=6; } package n; class subc extends superc { public static void main(String s[]) { superc c=new superc(); System.out.println(c.x); } } C:\sai>javac -d . subc.java C:\sai>java n/subc 6 When we can access the protected property inside the same package through a superclass reference, then why have they enforced the restrictions on accessing the protected property in other packages. Regards.
http://www.coderanch.com/t/417086/java/java/protected-member
CC-MAIN-2015-35
en
refinedweb
Term Definition .bak The file name extension of an auxiliary file, created either automatically or upon command, that contains the second-most-recent version of a file and that bears the same file name. .dat The generic file name extension of a data file. . 2PC A protocol that ensures that transactions that apply to more than one server are completed on all servers or none at all. Two-phase commit is coordinated by the transaction manager and supported by resource managers. access control list In Windows-based systems, a list of access control entries (ACE) that apply to an entire object, a set of the object's properties, or an individual property of an object, and that define the access granted to one or more security principals. accessibility The quality of a system incorporating hardware or software to engage a flexible, customizable user interface, alternative input and output methods, and greater exposure of screen elements to make the computer usable by people with cognitive, hearing, physical, or visual disabilities. accessor A data structure or group of structures created by the consumer that describes how row or parameter data from the data store is to be laid out in the consumer's data buffer, enabling providers to optimize access. An accessor is a collection of bindings. action An end-user-initiated operation on a selected cube or portion of a cube. activation The process of starting a service program in response to a Service Broker message. active statement A SQL statement that has been run but whose result set has not yet been canceled or fully processed. ActiveX Data Objects A data access interface that communicates with OLE DB-compliant data sources to connect to, retrieve, manipulate, and update data. ActiveX Data Objects (Multidimensional) A high-level, language-independent set of object-based data access interfaces optimized for multidimensional data applications. ActiveX Data Objects MultiDimensional.NET A .NET managed data provider that provides access to multidimensional data sources, such as Microsoft SQL Server Analysis Services. activity data Data generated as part of a business transaction by executing an activity within an application. It is characterized by an exclusive write access pattern. ad hoc reporting A reporting system that enables end users to run queries and create custom reports without having to know the technicalities of the underlying database schema and query syntax. adapter host The root abstract class Adapter, which defines the handshake between the adapter and the StreamInsight server in the ENQUEUE interaction point. It provides all the required adapter services such as memory management, and exception handling. add-in A supplemental program that can extend the capabilities of an application program. ADF An XML file that fully describes a single Notification Services application. The ADF file contains the schemas for the events, subscriptions, and notifications; the rules for matching events with subscriptions; and may provide the name of the XSLT file used to format generated notifications. adjective phrasing A way of expressing a relationship in English in which an entity is described by an adjective. ADO ADO MD ADOMD.NET adornment A control or status area that is attached to the edge of a pane or window, such as a toolbar or ruler. Agent XP An option to enable extended stored procedures. aggregate A single value that is composed of multiple values. To combine multiple values. Pertaining to a combination of multiple values. aggregate function A function that performs a calculation on multiple values and returns a single value. aggregate query A query (SQL statement) that summarizes information from multiple rows by including an aggregate function such as Sum or Avg. aggregate. aggregation A table or structure containing pre-calculated data for an online analytical processing (OLAP) cube. Aggregations support the rapid and efficient querying of a multidimensional database. aggregation prefix A string that is combined with a system-defined ID to create a unique name for a partition's aggregation table. aggregation wrapper A wrapper that encapsulates a COM object within another COM object. alert An audible or visual warning signal, generated by a computer, indicating that a threshold has been or is about to be breached. alias An alternative label for some object, such as a file or data collection. alias type A user-defined data type based on one of the SQL Server system data types that can specify a certain data type, length, and nullability. alignment A condition whereby an index is built on the same partition scheme as that of its corresponding table. allocation unit A set of pages that can be operated on as a whole. Pages belonging to an allocation unit are tracked by Index Allocation Map (IAM) pages. An allocation unit consists of the IAM page chain and all pages marked allocated in that IAM page chain. An allocation unit can contain at most a single IAM chain, and an IAM chain must belong to one, and only one, allocation unit. AMO A collection of .NET namespaces included with Analysis Services, used to provide administrative functionality for client applications. Analysis Management Objects Analysis Services A feature of Microsoft SQL Server that supports online analytical processing (OLAP) and data mining for business intelligence applications. Analysis Services organizes data from a data warehouse into cubes with precalculated aggregation data to provide rapid answers to complex analytical queries. analytical data Data that provides the values that are associated with spatial data. For example, spatial data defines the locations of cities in an area whereas analytical data provides the population for each city. ancestor In a tree structure, the element of which a given element is a child. Equivalent to a parent element. ancestor element anchor cap A line cap where the width of the cap is bigger than the width of the line. anchor member The first invocation of a recursive CTE consists of one or more CTE_query_definition joined by UNION ALL, UNION, EXCEPT or INTERSECT operators. Because these query definitions form the base result set of the CTE structure, they are referred to as anchor members. animation manager A core component of an animation application and the central programmatic interface for managing (creating, scheduling, and controlling) animations.. ANSI to OEM conversion The conversion of characters that must occur when data is transferred from a database that stores character data using a specific code page to a client application on a computer that uses a different code page. Typically, Windows-based client computers use ANSI/ISO code pages, and some databases (for compatibility reasons) might use OEM code pages. antialiasing A software technique for smoothing the jagged appearance of curved or diagonal lines caused by poor resolution on a display screen. Methods of anti-aliasing include surrounding pixels with intermediate shades and manipulating the size and horizontal alignment of the pixels. anti-aliasing API A set of routines that an application uses to request and carry out lower-level services performed by a computer's operating system. These routines usually carry out maintenance tasks such as managing files and displaying information. API server cursor A server cursor that is built to support the cursor functions of an API, such as ODBC, OLE DB, ADO, and DB-Library. API support application program interface application programming interface application role A SQL Server role created to support the security needs of an application. application time The clock time supplied by applications which must communicate their application time to the StreamInsight server so that all temporal operators refer to the timestamp of the events and never to the system clock of the host machine. apply branch The set of operations applied to an event group. arbitration port A TCP/IP port used by the cache hosts to determine whether a cache host in the cluster has become unavailable. The port number that is used for arbitration can be different on each cache host. ARIMA A method for determining dependencies in observations taken sequentially in time, that also supports multiplicative seasonality. arithmetic overflow A condition that results from calculating a column value that exceeds the column's specified size. article A component in a publication. For example, a table, a column, or a row. assembly A managed application module containing class metadata and managed code as an object in SQL Server, against which CLR functions, stored procedures, triggers, user-defined aggregates, and user-defined types can be created in SQL Server. associative array An array composed of a collection of keys and a collection of values, where each key is associated with one value. The keys and values can be of any type. atom feed An XML structure that contains metadata about content, such as the language version and the date when the content was last modified, and is sent to subscribers by using the Atom Publishing Protocol (AtomPub). atomic Pertaining to an operation where all the transaction data modifications must be performed; either all of the transaction data modifications are performed or none are performed. attribute A single characteristic or additional piece of information (financial or non-financial) that exists in a database. attribute hierarchy A flat hierarchy (typically having an All level and a member level) containing a single attribute. It is created from one column in a dimension table, if supported by the cube. attribute relationship The hierarchy associated with an attribute containing a single level based on the corresponding column in a dimension table. attribute type The type of information contained by an attribute, such as quarters or months in a time dimension, which may enable specific treatment by the server and client applications. the proper clearance.. In the Kerberos authentication protocol, authenticators include timestamps, to prevent replay attacks, and are encrypted with the session key issued by the Key Distribution Center (KDC). authorization The process of granting a person, computer process, or device access to certain information, services or functionality. Authorization is derived from the identity of the person, computer process, or device requesting access, which is verified through authentication. autocommit mode The default transaction management mode for the Database Engine. The Database Engine automatically starts a transaction for each individual Transact-SQL statement. When the statement completes, the transaction is committed or rolled back based on the success or failure of the statement. auto-consistency check A feature that automatically runs a consistency check on protected data sources when it detects an inconsistent replica. automatic failover In a database mirroring session, a failover initiated by the witness and mirror upon the failure of the principal server (if the database is in a synchronized state). automatic recovery Recovery that occurs every time SQL Server is restarted. auto-protection In DPM, a feature that automatically identifies and adds new data sources for protection. autoregressive integrated moving average axis A set of tuples. Each tuple is a vector of members. A set of axes defines the coordinates of a multidimensional data set. back up To make a duplicate copy of a program, a disk, or data. backing stream The existing stream, that the new stream will be based on. backup A duplicate of a program, a disk, or data, made either for archiving purposes or for safeguarding files. backup copy backup device A tape or disk drive containing a backup medium. backup file backup medium Disk file or tape used to hold one or more backups. backup set A collection of files, folders, and other data that have been backed up and stored in a file or on one or more tapes. balanced hierarchy A dimension hierarchy in which all leaf nodes are the same distance from the root node. base backup A data backup of a database or files upon which a differential backup is fully or partially based. The base backup is the most recent full or file backup of the database or files. base data type Any system-supplied data type, for example, char , varchar , binary , and varbinary . User-defined data types are derived from base data types. base object The object that a synonym references. base table A table stored permanently in a database. Base tables are referenced by views, cursors, SQL statements, and stored procedures. basic marker map A map that displays a marker at each location (for example, cities) and varies marker color, size, and type. batch A set of requests or transactions that have been grouped together. batch job A set of computer processes that can be run without user interaction. batch processing The execution of a batch file. batching The process of sending changes in small groups instead of in a one-shot transfer of the data in its entirety. BI Development Studio A project development and management tool for business intelligence solution developers. It can be used to design end-to-end business intelligence solutions that integrate projects from Microsoft SQL Server Analysis Services (SSAS), Microsoft SQL Server Integration Services (SSIS), and Microsoft SQL Server Reporting Services (SSRS). billion In American usage (as is usual with microcomputers), a thousand million, or 109. Computer terminology uses the prefixes giga-for 1 billion and nano-for 1 billionth. binary large object. binder A tool/module that creates a binding/bindery. binding In Analysis Services, a defined relationship between an attribute or a measure and one or more underlying columns in a dimension or fact table. bit pattern A combination of bits, often used to indicate the possible unique combinations of a specific number of bits. For example, a 3-bit pattern allows 8 possible combinations and an 8-bit pattern allows 256 combinations. bitwise operation An operation that manipulates a single bit, or tests whether a bit is on or off. blittable type A data type that has a unique characteristic and an identical presentation in memory for both managed and unmanaged environments. It can be directly shared. blob BLOb BLOB block A Transact-SQL statement enclosed by BEGIN and END. block cursor A cursor with a rowset size greater than 1. blocking transaction A transaction that causes another transaction to fail. Boolean Of, pertaining to, or characteristic of logical (true, false) values. Boolean expression An expression that yields a Boolean value (true or false). Such expressions can involve comparisons (testing values for equality or, for non-Boolean values, the < [less than] or > [greater than] relation) and logical combination (using Boolean operators such as AND, OR, and XOR) of Boolean expressions. Boolean operator An operator designed to work with Boolean values. The four most common Boolean operators in programming use are AND (logical conjunction), OR (logical inclusion), XOR (exclusive OR), and NOT (logical negation). bound stream An event stream that contains all the information needed to produce events. Either the information is an already instantiated data source, or the information is sufficient for the StreamInsight server to start the data source. bounding box The smallest rectangular area that will surround a path, shape, or group of objects. box plot chart A statistical type of chart that uses boxes to indicate statistical distribution and easily identify outlier points. There are five values: upper quartile, lower quartile, High Box, Low Box, and Median. breakpoint A location in a program at which execution is halted so that a programmer can examine the program's status, the contents of variables, and so on. browse button A button that displays a dialog box to help users select a valid value. browse mode A function that lets you scan database rows and update their values one row at a time. B-tree A tree structure for storing database indexes. bubble map A geographical map that displays a circle over specific locations, where the radius of the circle is proportional to a numeric value. buffer pool A block of memory reserved for index and table data pages. buffer size The size of the area of memory reserved for temporary storage of data. built-in functions A group of predefined functions provided as part of the Transact-SQL and Multidimensional Expressions languages. BUILTIN\Administrators User account (local administrators) bulk copy An action of copying a large set of data. bulk export To copy a large set of data rows out of a SQL Server table into a data file. bulk import To load a large amount of data, usually in batches, from a data file or repository to another data repository. bulk load An action of inserting a large set of rows into a table. bulk log backup A backup that includes log and data pages changed by bulk operations. Point-in-time recovery is not allowed. bulk rowset provider A provider used for the OPENROWSET instruction to read data from a file. In SQL Server 2005, OPENROWSET can read from a data file without loading the data into a target table. This enables you to use OPENROWSET with a simple SELECT statement. Bulk Smart Card Issuance Tool A software program running on a client computer that a certificate manager can use to simultaneously issue multiple certificates. bulk-logged recovery model A database recovery mode that minimally logs bulk operations, such as index creation and bulk imports, while fully logging other transactions. Bulk-logged recovery increases performance for bulk operations, and is intended to be used an adjunct to the full recovery model. Business Intelligence Development Studio business logic. business logic handler A merge replication feature that allows you to run custom code during the synchronization process. business logic handler framework The business logic handler framework allows you to write a managed code assembly that is called during the merge synchronization process. business rules The logical rules that are used to run a business cache aging The mechanism of caching that determines when a cache row is outdated and must be refreshed. cache client A .NET application that uses the Windows Server AppFabric client APIs to communicate with and store data to a Windows Server AppFabric distributed cache system. cache cluster The instantiation of the distributed cache service, made up of one or more instances of the cache host service working together to store and distribute data. Data is stored in memory to minimize response times for data requests. This clustering technology differs from Windows Clustering. cache invalidation The process of flagging an object in the cache so that it will no longer be used by any cache clients. This occurs when an object remains in cache longer than the cache time-out value (when it expires). cache item An object that is stored in the cache and additional information associated with that object, such as tags and version. It can be extracted from the cache cluster using the GetCacheItem client API. cache notification An asynchronous notification that can be triggered by a variety of cache operations on the cache cluster. Cache notifications can be used to invoke application methods or automatically invalidate locally cached objects. cache operation An event that occurs on regions or cached items that can trigger a cache notification. cache port A TCP/IP port used by cache hosts to transmit data to and from the cache clients. The port number used for the cache port can be different on each cache host. These settings are maintained in the cluster configuration settings. cache region A container of data, within a cache, that co-locates all cached objects on a single cache host. Cache Regions enable the ability to search all cached objects in the region by using descriptive strings, called tags. cache service A distributed, in-memory, application cache service that accelerates the performance of Windows Azure and SQL Azure applications by allowing data to be kept in-memory. cache tag One or more optional string-based identifiers that can be associated with each cached object stored in a region. Regions allow you to retrieve cached objects based on one or more tags. cache-aside programming pattern A programming pattern in which if the data is not present in the cache, the application, not the distributed cache system, must reload data into the cache from the original data source. cache-enabled application An application that uses the Windows Server AppFabric cache client to store data in cache on the cache cluster. calculated column A type of column that displays the results of mathematical or logical operations or expressions instead of stored data. A type of column that displays the results of mathematical or logical operations or expressions instead of stored data. calculated field A field defined in a query that displays the result of an expression rather than displaying stored data. The value is recalculated each time a value in the expression changes. calculated member A member of a dimension whose value is calculated at run time by using an expression. Calculated member values can be derived from the values of other members. pass number An ordinal position used to refer to a calculation pass. calculation subcube The set of multidimensional cube cells that is used to create a calculated cells definition. The set of cells is defined by a combination of MDX set expressions. callback The process used to authenticate users calling in to a network. During callback, the network validates the caller's username and password, hangs up, and then returns the call, usually to a preauthorized number. This process prevents unauthorized access to an account even if an individual's logon ID and password have been stolen. call-level interface The interface supported by ODBC for use by an application. candidate key A column or set of columns that have a unique value for each row in a table. cap For paths that contain unconnected ends, such as lines, the end of a stroke. You can change the way the stroke looks at each end by applying one of four end cap styles: flat cap, round cap, square cap, and triangle cap. cardinality The number of entities that can exist on each side of a relationship. carousel view In PowerPivot Gallery, a specialized view where the preview area is centered and the thumbnails that immediately precede and follow the current thumbnail are adjacent to the preview area. cascading delete For relationships that enforce referential integrity between tables, the deletion of all related records in the related table or tables when a record in the primary table is deleted. cascading update For relationships that enforce referential integrity between tables, the updating of all related records in the related table or tables when a record in the primary table is changed. case An abstract view of data characterized by attributes and relations to other cases. case key The element of a case by which the case is referenced within a case set. catalog views Built-in views that form the system catalog for SQL Server. catastrophic error An error that causes the system or a program to fail abruptly with no hope of recovery. An example of a fatal error is an uncaught exception that cannot be handled. CD sleeve A case for holding CDs. CD-ROM A form of storage characterized by high capacity (roughly 650 MB) and the use of laser optics instead of magnetic means for reading data. cell In a cube, the set of properties, including a value, specified by the intersection when one member is selected from each dimension. cellset In ADO MD, an object that contains a collection of cells selected from cubes or other cellsets by a multidimensional query. centralized registration model A registration model that removes all certificate subscriber participation from the management policy. For the workflow, a user designated as the originator will initiate the request and an enrollment agent will execute the request. CEP The continuous and incremental processing of event streams from multiple sources based on declarative query and pattern specifications with near-zero latency. CEP engine The core engine and adapter framework components of Microsoft StreamInsight. The StreamInsight server can be used to process and analyze the event streams associated with a complex event processing application. CERN A physics research center located in Geneva, Switzerland, where the original development of the World Wide Web took place under the leadership of Tim Berners-Lee in 1989 as a method to facilitate communication among members of the scientific community. enrollment The process of requesting, receiving, and installing a certificate. certificate issuer The certification authority which issued the certificate to the subject. Certificate Lifecycle Manager Client A suite of Certificate Lifecycle Manager (CLM) client tools that assist end users with managing their smart cards. The tools include the Smart Card Self Service Control, the Smart Card Personalization Control, and the Certificate Profile Update Control. See Smart Card Self-Service Control, Smart Card Personalization Control, Certificate Profile Update Control. certificate manager A Certificate Lifecycle Manager (CLM) user that has the appropriate CLM permissions to either administer other CLM users or to administer the CLM application itself. certificate manager Web portal A Web application running on the Certificate Lifecycle Manager (CLM) server. This portal allows certificate administrators to administer other users’ certificates and smart cards. The certificate subscriber and certificate manager Web portals are both accessed through the same universal resource locator (URL); however, the content displayed is based on a user's roles and permissions. Certificate Profile Update Control An ActiveX control that automates the update of Certificate Lifecycle Manager (CLM) profiles on client computers. certificate revocation The process of revoking a digital certificate. certificate subscriber A user that needs certificates with our without smart cards. Certificate subscribers can access a small number of functions that can only be performed for the user’s own certificates. certificate subscriber Web portal A Web application running on the Certificate Lifecycle Manager (CLM) server. This component of the CLM server interacts directly with users in a self-service mode. The specific functionality is based upon Active Directory group memberships and permissions. The certificate subscriber and certificate manager Web portals are both accessed through the same universal resource locator (URL); however, the content displayed is based on a user's roles and permissions. certificate template A Windows construct that specifies the format and content of certificates based on their intended usage. When requesting a certificate from a Windows enterprise certification authority (CA), certificate requestors can select from a variety of certificate types that are based on certificate templates. change applier An object that performs conflict detection, conflict handling, and change application for a batch of changes. change propagation The process of applying changes from one replica to another. change script A text file that contains SQL statements for all changes made to a database, in the order in which they were made, during an editing session. change unit The minimal unit of change tracking in a store. In change propagation, only the units that are changed must be sent; whereas, in conflict detection, independent changes to the same unit are considered a conflict. changing dimension A dimension that has a flexible member structure, and is designed to support frequent changes to structure and data. character encoding A one-to-one mapping between a set of characters and a set of numbers. character set A grouping of alphabetic, numeric, and other characters that have some relationship in common. For example, the standard ASCII character set includes letters, numbers, symbols, and control codes that make up the ASCII coding scheme.. The checksum is calculated for a given chunk. child In a tree structure, the relationship of a node to its immediate predecessor. chronicle A table that stores state information for a single application. An example is an event chronicle, which can store event data for use with scheduled subscriptions. chunk A specified amount of data. claims identity A unique identifier that represents a specific user, application, computer, or other entity, enabling it to gain access to multiple resources, such as applications and network resources, without entering credentials multiple times. It also enables resources to validate requests from an entity. clause In Transact-SQL, a subunit of an SQL statement. A clause begins with a keyword. clean shutdown A system shutdown that occurs without errors. clear text Data in its unencrypted or decrypted form. cleartext CLI clickstream analysis Clickstream data are information that users generate as they move from page to page and click on items within a Web site, usually stored in log files. Web site designers can use clickstream data to improve users' experiences with a site. clickthrough report A report that displays related report model data when you click data within a rendered Report Builder report. client A service, application, or device that wants to integrate into the Microsoft Sync Framework architecture. A computer or program that connects to or requests the services of another computer or program. client code generation The action of generating code for the client project based on operations and entities exposed in the middle tier. A RIA Services link must exist between the client and server projects.. client type Information that determines how a cache client functions and impacts the performance of your application. There are two client types: a simple client type and a routing client type. CLM Audit A Certificate Lifecycle Manager (CLM) extended permission in Active Directory that allows the generation and display of CLM policy templates, defining management policies within a profile template, and generating CLM reports. CLM credentials User account information that can be used to authenticate a user to Certificate Lifecycle Manager (CLM). These credentials can be in the form of domain credentials or one-time passwords. CLM Enroll A Certificate Lifecycle Manager (CLM) extended permission in Active Directory that allows the user to specify the workflow and the data to be collected while issuing certificates using a template. This extended permission only applies to profile templates. CLM Enrollment Agent A Certificate Lifecycle Manager (CLM) extended permission in Active Directory that allows a user or group to perform certificate requests on behalf of another user. The issued certificate’s subject will contain the target user’s name, rather than the requestor’s name. CLM Recover A Certificate Lifecycle Manager (CLM) extended permission in Active Directory that allows the initiation of encryption key recovery from the certification authority database. CLM Renew A Certificate Lifecycle Manager (CLM) extended permission in Active Directory that allows the initiation, running, or completion of an enrollment request. The renew request replaces a user’s certificate that is near its expiration date with a new certificate that has a new validity period. CLM reports Audit information pertaining to credential management activities within Certificate Lifecycle Manager (CLM). CLM Request Enroll A Certificate Lifecycle Manager (CLM) extended permission in Active Directory that allows the initiation, running, or completion of an enrollment request. CLM Request Recover CLM Request Renew CLM Request Revoke A Certificate Lifecycle Manager (CLM) extended permission in Active Directory that allows the revocation of a certificate before the expiration of the certificate’s validity period. An example of when this is necessary is if a user’s computer or smart card is compromised (stolen). CLM Request Unblock Smart Card A Certificate Lifecycle Manager (CLM) extended permission in Active Directory that enables a smart card’s User Personal Identification Number (PIN) to be reset, allowing access to the key material on a smart card and for that material to be re-established. CLM Revoke clock vector A collection of clock vector elements that represents updates to a replica. Any change that occurs between 0 and the tick count is contained in the vector. clock vector element A pair of values, consisting of a replica key and a tick count, that represents a change to a replica. CLR function A function created against a SQL Server assembly whose implementation is defined in an assembly created in the .NET Framework common language runtime (CLR). CLR stored procedure A stored procedure created against a SQL Server assembly whose implementation is defined in an assembly created in the .NET Framework common language runtime (CLR). CLR trigger A trigger created against a SQL Server assembly whose implementation is defined in an assembly created in the .NET Framework common language runtime (CLR). CLR user-defined type A user-defined data type created against a SQL Server assembly whose implementation is defined in an assembly created in the .NET Framework common language runtime (CLR). cluster configuration storage location The shared location (or shared storage location) where cluster configuration information is persisted. It can be a shared file or a database. cluster disk resource A disk on a cluster storage device. cluster node An individual computer in a server cluster. cluster port A TCP/IP port used by the cache hosts to manage the cache cluster. The port number used for the cluster ports can be different on each cache host. These settings are maintained in the cluster configuration settings. cluster repair A repair operation in which all missing or corrupt files are replaced, all missing or corrupt registry keys are replaced and all missing or invalid configuration values are set to default values. clustered index An index in which the logical order of the key values determines the physical order of the corresponding rows in a table. clustered server A server that belongs to a server cluster. element The minimum bit combination that can represent a unit of encoded text for processing or exchange. code page A table that relates the character codes (code point values) used by a program to keys on the keyboard or to characters on the display. This provides support for character sets and keyboard layouts for different countries or regions. code point cold standby A second data center that can provide availability within hours or days. collation A set of rules that determines how data is compared, ordered, and presented. collection An object that contains a set of related objects. An object's position in the collection can change whenever a change occurs in the collection; therefore, the position of any specific object in a collection may vary. collection item An instance of a collector type that is created with a specific set of input properties and collection frequency, and that is used to gather specific types of data. collection mode The frequency at which data is collected and uploaded to the management data warehouse. collection set A group of collection items with which a user can interact through the user interface. collector type A logical wrapper around the SQL Server Integration Services packages that provide the actual mechanism for collecting data and uploading it to the management data warehouse. collocate To select a partitioned table that contains related data and join with this table on the partitioning column. collocation A condition whereby partitioned tables and indexes are partitioned according to equivalent partition functions. color range The range of colors available to a display device. color rule A rule that applies to fill colors for polygons, lines, and markers that represent points or polygon center points. color scale A scale that displays the results of color rules only. column The area in each row of a database table that stores the data value for some attribute of the object modeled by the table. Pattern Profile A report containing a set of regular expressions that cover the specified percentage of values in a string column. column set An untyped XML representation that combines all the sparse columns of a table into a structured output. column-level collation Supporting multiple collations in a single instance. column-level constraint A constraint definition that is specified within a column definition when a table is created or altered. columnstore index Stores each column in a separate set of disk pages rather than storing multiple rows per page.. command buffer An area in memory in which commands entered by the user are kept. A command buffer can enable the user to repeat commands without retyping them completely, edit past commands to change some argument or correct a mistake, undo commands, or obtain a list of past commands. command prompt An interface between the operating system and the user in which the user types command language strings of text that are passed to the command interpreter for execution. command relationship Provides instructions to hardware based on natural-language questions or commands. commit An operation that saves all changes to databases, cubes, or dimensions made since the start of a transaction. committed Characteristic of a transaction that is logged and cannot be rolled back. commodity channel index formula A formula that calculates the mean deviation of the daily average price of a commodity from the moving average. A value above 100 indicates that the commodity is overbought, and a value below -100 indicates that the commodity is oversold. comparator A device for comparing two items to determine whether they are equal. In electronics, for example, a comparator is a circuit that compares two input voltages and indicates which is higher. compilation error An error which occurs while compiling an application. These compilation errors typically occur because syntax was entered incorrectly. compile time The amount of time required to perform a compilation of a program. Compile time can range from a fraction of a second to many hours, depending on the size and complexity of the program, the speed of the compiler, and the performance of the hardware. complete database restore A restore of a full database backup, the most recent differential database backup (if any), and the log backups (if any) taken since the full database backup. complex event processing Component Object Model composable Pertaining to the ability to form complex queries by using query components (objects or operators) as reusable building blocks. This is done by linking query components together or encapsulating query components within each other. composed environment A virtual environment that was created from virtual machines. Those virtual machines were created outside of Microsoft Test Manager and are already deployed on a host group. composite index An index that uses more than one column in a table to index data. composite key A key whose definition consists of two or more fields in a file, columns in a table, or attributes in a relation. compositional hierarchy A set of entities that are conceptually part of a hierarchy, such as a parent entity and a child entity. Data operations require that the entities be treated as a single unit. computed column A virtual column in a table whose value is computed at run time. computed field A value in a formatted notification that has been computed by using a Transact-SQL expression. COM-structured storage file A component object model (COM) compound file used by Data Transformation Services (DTS) to store the version history of a saved DTS package. concatenation The process of combining two or more character strings or expressions into a single character string or expression, or combining two or more binary strings or expressions into a single binary string or expression. concurrency A process that allows multiple users to access and change shared data at the same time. The Entity Framework implements an optimistic concurrency model. concurrency conflict A conflict that occurs when the same item or change unit is changed on two different replicas that are later synchronized. concurrency model A way in which an application can be designed to account for concurrent operations that use the same cached data. Windows Server AppFabric supports optimistic and pessimistic concurrency models. concurrent operation A computer operation in which two or more processes (programs) have access to the microprocessor's time and are therefore carried out nearly simultaneously. Because a microprocessor can work with much smaller units of time than people can perceive, concurrent processes appear to be occurring simultaneously but in reality are not. conditional expression conditional split A restore of a full database backup, the most recent differential database backup (if any), and the log backups (if any) taken since the full database backup. config file A file that contains machine-readable operating specifications for a piece of hardware or software or that contains information on another file or on a specific user, such as the user's logon ID. configuration In reference to a single microcomputer, the sum of a system's internal and external components, including memory, disk drives, keyboard, video, and generally less critical add-on hardware, such as a mouse, modem, or printer.. configuration file Configuration Tools In SQL Server, a menu item which allows the user to enable, disable, start, or stop the features, services, and remote connectivity of the SQL Server installations. conflict detection The process of determining which operations were made by one replica without knowledge of the other, such as when two replicas make local updates to the same item. conflict resolution method The method that is used to determine which change is written to the store in the event of a conflict. Typical conflict resolution methods are as follows: last writer wins, source wins, destination wins, custom, or deferred. For custom resolution, the resolving application reads the conflict from the conflict log and selects a resolution. For deferred resolution, the conflict is logged together with the conflicting change data and the made-with knowledge of the change. conflict resolver A special mechanism which handles resolving of conflict situations. Connection Director A connectivity technology where applications based on different data access technologies (.NET or native Win32) can share the same connection information. Connection information can be centrally managed for such client applications. connection manager A logical representation of a run-time connection to a data source. connection string A series of semicolon-delimited arguments that define the location of a database and how to connect to it. consistency unit The minimal unit of data synchronization. Because all changes that have the same consistency unit are sent together, synchronization can never be interrupted with part of a consistency unit applied. constant A numeric or string value that is not calculated and, therefore, does not change. constraint conflict A conflict that violates constraints that are put on items or change units, such as the relationship of folders or the location of identically-named data within a file system. constraint violation A violation that occurs when the restriction criteria are not satisfied. contained database A SQL Server database that includes all of the user authentication, database settings, and metadata required to define and access the database, and has no configuration dependencies on the instance of the SQL Server Database Engine where the database is installed. container A control flow element that provides package structure. content formatter The part of the distributor that turns raw notification data into readable messages. content key The cryptographic key used to both encrypt and decrypt protected content during publishing and consumption. contention On a network, competition among stations for the opportunity to use a communications line or network resource. context switch The changing of the identity against which permissions to execute statements or perform actions are checked. continuation media The series of removable backup media used after the initial medium becomes full, allowing continuation of the backup operation. continuation tape A tape that is used after the initial tape in a media family fills, allowing continuation of a media family. contract A Service Broker object that defines the message types that can be exchanged within a given conversation. control flow A group of connected control flow elements that perform endpoint The object which represents a party participating in the conversation. conversation group A group of related Service Broker conversations. Messages in the same conversation group can only be processed by one service program at a time. conversation handle An handle which uniquely defines a conversation.). correlated subquery A subquery that references a column in the outer statement. The inner query is run for each candidate row in the outer statement. count window A window with a variable window size that moves along a timeline with each distinct event start time. countersign To sign a document already signed by the other party. CPU busy A SQL Server statistic that reports the time, in milliseconds, that the central processing unit (CPU) spent on SQL Server work. crawl The process of scanning content to compile and maintain an index. credentials Information that includes identification and proof of identification that is used to gain access to local and network resources. Examples of credentials are user names and passwords, smart cards, and certificates. cross-database ownership chaining An ownership chain that spans more than one database. cross-validation A method for evaluating the accuracy of a data mining model. CTI event A special punctuation event that indicates the completeness of the existing events. cube A set of data that is organized and summarized into a multidimensional structure that is defined by a set of dimensions and measures. cube role A collection of users and groups with the same access to a cube. A cube role is created when you assign a database role to a cube, and it applies only to that cube. current time increment event cursor An entity that maps over a result set and establishes a position on a single row within the result set. cursor degradation The return of a different type of cursor than the user had declared. cursor library A part of the ODBC and DB-Library application programming interfaces A variable provided by package developers. custom volume A volume that is not in the DPM storage pool and is specified to store the replica and recovery points for a protection group member. cyclic protection A type of protection between two DPM servers where each server protects the data on the other. DAC A application that captures the SQL Server database and instance objects used by a client-server or 3-tier application. DAC instance A copy of a DAC deployed on an instance of the Database Engine. There can be multiple DAC instances on the same instance of the Database Engine. DAC package An XML manifest that contains all of the objects defined for the DAC; the package gets created when a developer builds a DAC project. DAC package file The XML file that is the container of a DAC package. DAC placement policy A PBM policy that comprises a set of conditions, which serve as prerequisites on the target instance of SQL Server where the DAC can be deployed. DAC project A Visual Studio project used by database developers to create and develop a DAC. DAC projects get full support from Visual Studio and VSTS source code control, versioning, and development project management. data adapter An object used to submit data to and retrieve data from databases, Web services, and Extensible Markup Language (XML) files. data backup Any backup that includes the full image of one or more data files. data binning The process of grouping data into specific bins or groups according to defined criteria. data block In text, ntext, and image data, a data block is the unit of data transferred at one time between an application and an instance of SQL Server. The term is also applied to the units of storage for these data types. data co-location In DPM, a feature that enables protection of multiple data sources on a single volume or on the same tape. This allows you to store more data on each volume or tape. data connection A connection that specifies the name, type, location, and, optionally, other information about a database file or server. Data Control Language The subset of SQL statements used to control permissions on database objects. data convergence Data at the Publisher and the Subscriber that matches. data corruption A process wherein data in memory or on disk is unintentionally changed, with its meaning thereby altered or obliterated. data definition The attributes, properties, and objects in a database. data definition language A language that defines all attributes and properties of a database, especially record layouts, field definitions, key fields, file locations, and storage strategy. data description language data dictionary A database containing data about all the databases in a database system. Data dictionaries store all the various schema and file specifications and their locations. They also contain information about which programs use which data and which users are interested in which reports. data element A single unit of data. data explosion The exponential growth in size of a multidimensional structure, such as a cube, due to the storage of aggregated data. data extension A plug-in that processes data for a specific kind of data source. For example, Microsoft OLE DB Provider for DB2. data feed An XML data stream in Atom 1.0 format. data flow The movement of data through a group of connected elements that extract, transform, and load data. data flow component A component of SQL Server 2005 Integration Services that manipulates data. data flow engine An engine that executes the data flow in a package. data flow task The task that encapsulates the data flow engine that moves data between sources and destinations, providing the facility to transform, clean, and modify data as it is moved. data integrity The accuracy of data and its conformity to its expected value, especially after being transmitted or processed. data manipulation language The subset of SQL statements that is used to retrieve and manipulate data. DML statements typically start with SELECT INSERT UPDATE or DELETE. data mart A subset of the contents of a data warehouse that mining The process of identifying commercially useful patterns or relationships in databases or other computer repositories through the use of advanced statistical tools. data mining extension In Analysis Services, a statement that performs mining tasks programmatically. data mining model training The process a data mining model uses to estimate model parameters by evaluating a set of known and predictable data. data processing extension A plug-in that processes data for a specific kind of data source (similar to a database driver). Data Processor component A component of the report server engine that processes data. Data Profile Viewer A stand-alone utility that displays the profile output in both summary and detail format with optional drilldown capability. data provider A known data source specific to a target type that provides data to a collector type. data pump A component used in SQL Server 2000 Transformation Services (DTS) to import, export, and transform data between heterogeneous data stores. data region A report item that provides data manipulation and display functionality for iterative data from an underlying dataset. data scrubbing The process of building a data warehouse out of data coming from multiple online transaction processing (OLTP) systems. data segment The portion of memory or auxiliary storage that contains the data used by a program. data set A collection of related information made up of separate elements that can be treated as a unit in data handling. data source view A named selection of database objects--such as tables, views, relationships, and stored procedures, based on one or more data sources--that defines the schema referenced by OLAP and data mining objects in an Analysis Services databases. It can also be used to define sources, destinations, and lookup tables for DTS tasks, transformations, and data adapters. data steward The person responsible for maintaining a data element in a metadata registry. data viewer A graphical tool that displays data as it moves between two data flow components at run time. data warehouse The database that stores operations data for long periods of time. This data is then used by the Operations Manager reporting server to build reports. By default, this database is named OperationsManagerDW. Data Warehouse database administrator The person who manages a database. The administrator determines the content, internal structure, and access strategy for a database, defines security and integrity, and monitors performance. database catalog The part of a database that contains the definition of all the objects in the database, as well as the definition of the database. database diagram A graphical representation of any portion of a database schema. It can be either a whole or partial picture of the structure of the database. It includes tables, the columns they contain, and the relationships between the tables. database engine The program module or modules that provide access to a database management system (DBMS). Database Engine Tuning Advisor A tool for tuning the physical database design that helps users to select and create an optimal set of indexes, indexed views, and partitioning. Database Explorer A simple database administration tool that lets the user perform database operations such as creating new tables, querying and modifying existing data, and other database development functions. database file One of the physical files that make up a database. database language The language used for accessing, querying, updating, and managing data in relational database systems. database management system A layer of software between the physical database and the user. The DBMS manages all access to the database. database manager database mirroring Immediately reproducing every update to a read-write database (the principal database) onto a read-only mirror of that database (the mirror database) residing on a separate instance of the database engine (the mirror server). In production environments, the mirror server is on another machine. The mirror database is created by restoring a full backup of the principal database (without recovery). Database Mirroring Monitor A tool used to monitor any subset of the mirrored databases on a server instance. database mirroring partner One in a pair of server instances that act as role-switching partners for a mirrored database. database mirroring partners A pair of server instances that act as role-switching partners for a mirrored database. database project A collection of one or more data connections (a database and the information needed to access that database). database reference A path, expression or filename that resolves to a database. database role A collection of users and groups with the same access to an Analysis Services database. database schema The names of tables, fields, data types, and primary and foreign keys of a database. database script A collection of statements used to create database objects. database snapshot A read-only, static view of a database at the moment of snapshot creation. database structure Database view A read-only, static snapshot of a source database at the moment of the view's creation. data-definition query An SQL-specific query that contains data definition language (DDL) statements. These statements allow you to create or alter objects in the database. data-driven subscription A subscription that takes generated output for subscription values (for example, a list of employee e-mail addresses). datareader A stream of data that is returned by an ADO.NET query. data-tier application data-tier application instance data-tier application package date A SQL Server system data type that stores a date value from January 1, 1 A.D., through December 31, 9999 DB A collection of data formatted/arranged to allow for easy search and retrieval. DBCS A character set that can use more than one byte to represent a single character. A DBCS includes some characters that consist of 1 byte and some characters that consist of 2 bytes. Languages such as Chinese, Japanese, and Korean use DBCS. DBMS DDL DDL trigger A special kind of trigger that executes. Decision trees can be used for prediction. declaration A binding of an identifier to the information that relates to it. For example, to make a declaration of a constant means to bind the name of the constant with its value. Declaration usually occurs in a program's source code; the actual binding can take place at compile time or run time. Declarative Management Framework A policy based system of SQL Server management. Declarative Management Framework Facet A set of logical pre-defined properties that model the behavior or characteristics for certain types of managed targets (such as a database, table, login, view,etc) in policy-based management. declarative referential integrity FOREIGN KEY constraints defined as part of a table definition that enforce proper relationships between tables. dedicated administrator connection A dedicated connection that allows an administrator to connect to a server when the Database Engine will not respond to regular connections. default A value that is automatically used by a program when the user does not specify an alternative. Defaults are built into a program when a value or option must be assumed for the program to function. default database The database the user is connected to immediately after logging in to SQL Server. default instance The instance of SQL Server that uses the same name as the computer name on which it is installed. default language The human language that SQL Server uses for errors and messages if a user does not specify a language. default member The dimension member used in a query when no member is specified for the dimension. default result set The default mode that SQL Server uses to return a result set back to a client. defection The removal of a server from multiserver operations. deferred transaction A transaction that is not committed when the roll forward phase of recovery completes and that cannot be rolled back during database startup because data needed by roll back is offline. This data can reside in either a page or a file. degenerate dimension A relationship between a dimension and a measure group in which the dimension main table is the same as the measure group table. DEK A bit string that is used in conjunction with an encryption algorithm to encrypt and decrypt data. delegated registration model A registration model in which a person other than the certificate subscriber initiates the certificate transaction. The certificate subscriber then completes the transaction by providing a supplied one-time password. DELETE clause A part of a DML Statement that contains the DELETE keyword and associated parameters. delete level In Data Transformation Services, the amount and kind of data to remove from a data warehouse. delimited identifier An object in a database that requires the use of special characters (delimiters) because the object name does not comply with the formatting rules of regular identifiers. delivery channel A pipeline between a distributor and a delivery service. delivery channel type The protocol for a delivery channel, such as Simple Mail Transfer Protocol (SMTP) or File. delivery extension A plug-in that delivers reports to a specific target (for example, e-mail delivery). delivery protocol The set of communication rules used to route notification messages to external delivery systems. denormalize To introduce redundancy into a table to incorporate data from a related table. deploy To build a DAC instance, either directly from a DAC package or from a DAC previously imported to the SQL Server Utility. deployed environment A group of virtual machines located on a team project host group and controlled by Microsoft Test Manager. A deployed environment can be running or stopped. dequeue To remove from a queue. derived column A transformation that creates new column values by applying expressions to transformation input columns. deserialization The process of converting an object from a serial storage format to binary format in the form of an object that applications can use. This happens when the object is retrieved from the cache cluster with the Get client APIs. destination The SSIS data flow component that loads data into data stores or creates in-memory datasets. A synchronization provider that provide its current knowledge, accept a list of changes from the source provider, detect any conflicts between that list and its own items, and apply changes to its data store. destination adapter A data flow component that loads data into a data store. destination provider detect To find something. device type A value from a developer-defined list that specifies the types of devices that a given application will support. diacritic A mark placed over, under, or through a character, usually to indicate a change in phonetic value from the unmarked state. diacritical mark dialect The syntax and general rules used to parse a string or a query statement. diamond-shape relationship A chain of attribute relationships that splits and rejoins but that contains no redundant relationships. For example, Day->Month->Year and Day->Quarter->Year have the same start and end points, but do not have any common relationships. differencer An interface to a tool that creates a DifferencingService. digest delivery A method of sending notifications that combines multiple notifications within a batch and sends the resulting message to a subscriber. digital certificate dimension A structural attribute of a cube that organizes data into levels. For example, a Geography dimension might include the members Country, Region, State or Province, and City. dimension expression A valid Multidimensional Expressions (MDX) expression that returns a dimension. dimension granularity The lowest level available to a particular dimension in relation to a particular measure group. The "natural" or physical grain is always that of the key that joins the main dimension table to the primary fact table. dimension hierarchy A logical tree structure that organizes the members of a dimension such that each member has one parent member and zero or more child members. dimension level The name of a set of members in a dimension hierarchy such that all members of the set are at the same distance from the root of the hierarchy. For example, a time hierarchy may contain the levels Year, Month, and Day. dimension member A single position or item in a dimension. Dimension members can be user-defined or predefined and can have properties associated with them. dimension member property A characteristic of a dimension member. Dimension member properties can be alphanumeric, Boolean, or Date/Time data types, and can be user-defined or predefined. dimension table A table in a data warehouse whose entries describe data in a fact table. A read that contains uncommitted data. disabled index Any index that has been marked as disabled. A disabled index is unavailable for use by the database engine. The index definition of a disabled index remains in the system catalog with no underlying index data. discrete signal A time series consisting of a sequence of quantities, that is a time series that is a function over a domain of discrete integers. discretize To put values of a continuous set of data into groups so that there are a discrete number of possible states. discretized column A column that represents finite, counted data distinct count measure A measure commonly used to determine for each member of a dimension how many distinct, lowest-level members of another dimension share rows in the fact table. distributed partitioned view A view that joins horizontally partitioned data from a set of member tables across more than one server, making the data appear as if from one table. distributed query A single query that accesses data from multiple data sources. distributed transaction A transaction that spans multiple data sources. distribution cleanup agent A scheduled job that runs under SQL Server Agent. After all Subscribers have received a transaction, the agent removes the transaction from the distribution database. It also cleans up snapshot files from the file system after entries corresponding to those files have been removed from the distribution database. distribution database A database on the Distributor that stores data for replication including transactions, snapshot jobs, synchronization status, and replication history information. distribution retention period In transactional replication, the amount of time transactions are stored in the distribution database. distributor A database instance that acts as a store for replication-specific data associated with one or more Publishers. DMF One of a set of built-in functions that returns server state information about values, objects, and settings in SQL Server. DML DML trigger A stored procedure that executes when data in a specified table is modified. DMV A set of built-in views that return server state information about values, objects, and settings in SQL Server. DMX domain The set of possible values that you can specify for an independent variable in a function, or for a database attribute. A collection of computers in a networked environment that share a common database, directory database, or tree. A domain is administered as a unit with common rules and procedures, which can include security policies, and each domain has a unique name. domain context A client-side representation of a domain service. domain integrity The validity of entries for a specific column of data. domain operation A method on a domain service that is exposed to a client application. It enables client applications to perform an action on the entity such as, query, update, insert, or delete records. domain service A service that encapsulates the business logic of an application. It exposes a set of related domain operations in a service layer. dormant session Session in pre-login state. Sessions can be initiated or ended to modify their state, but they generally remain in either a "sleep/idle" state, such as when the session has been initiated and is open at the server for client use; or a "dormant" state, such as when the session has been ended and the session is not currently available at the server for client use. double byte character set double-byte character set down Not functioning, in reference to computers, printers, communications lines on networks, and other such hardware. download-only article An article in a merge publication that can be updated only at the Publisher or at a Subscriber that uses a server subscription. DPM Client The Data Protection Manager (DPM) Client enables the user to protect and recover their data as per the company protection policy configured by their backup administrator. DPM engine A policy-driven engine that DPM uses to protect and recover data. DPM Management Shell The command shell, based on Windows PowerShell (Powershell.exe), that makes available the cmdlets that perform functions in Data Protection Manager. DPM Online A feature that provides an online remote backup, both for securely storing data offsite for long durations and for disaster recovery (DR). DPM Online account A user account that DPM uses to start the DPM Online service. DPM Online cache A cache volume required by DPM Online on which the DPM server stores information for faster backup and recovery from DPM Online. DPM Online protection The process of using DPM online to protect data from loss or corruption by creating and maintaining replicas and recovery points of the data online. DPM Online protection group A collection of data sources that share the same DPM Online protection configuration. DPM Online recovery The process by which an administrator recovers previous versions of protected data from online recovery points on an internet-connected DPM server. DPM Online recovery point The date and time of a previous version of a data source that is available from DPM Online. DPM Online replica A complete copy of an online-protected data source on DPM Online. Each member of an online protection group on the DPM server is associated with a DPM Online replica. DPM role The grouping of users, objects, and permissions that is used by DPM administrators to manage DPM features that are used by end users. DPM Self-Service Recovery Configuration Tool A tool that enables DPM administrators to authorize end users to perform self-service recovery of data by creating and managing DPM roles (grouping of users, objects, and permissions). DPM Self-Service Recovery Tool A tool that is used by end users to recover backups from DPM, without any action required from the DPM administrator. DPM Self-Service Tool for SQL Server A tool for SQL Server that enables backup administrators to authorize end users to recover backups of SQL Server databases from DPM, without further action from the backup administrator. DPM SRT Software provided with DPM to facilitate a Windows Server 2003 bare metal recovery for the DPM server and the computers that DPM protects. DPM SST for SQL Server DPM System Recovery Tool DQS knowledge base A repository of metadata that is used by Data Quality Services to improve the quality of data. This metadata is created either by the user interactively or by the Data Quality Services platform in an automated knowledge discovery process. drillthrough In Analysis Services, a technique to retrieve the detailed data from which the data in a cube cell was summarized. drill-through report A secondary report that is displayed when a user clicks an item in a report. Detailed data is displayed in the same report. drop-down list A list that can be opened to reveal all choices for a given field. DSN The collection of information used to connect an application to a particular ODBC database. DSN-less connection A type of data connection that is created based on information in a data source name (DSN), but is stored as part of a project or application. dump dump device dynamic cursor A cursor that can reflect data modifications made to the underlying data while the cursor is open. dynamic filter A row filter available with merge replication that allows you to restrict the data replicated to a Subscriber based on a system function or user-defined function (for example: SUSER_SNAME()). dynamic locking The process used by SQL Server to determine the most cost-effective locks to use at any one time. dynamic management function dynamic management view dynamic recovery The process that detects and/or attempts to correct software failure or loss of data integrity within a relational database management system (RDBMS). dynamic routing Routing that adjusts automatically to the current conditions of a network. Dynamic routing typically uses one of several dynamic-routing protocols such as Routing Information Protocol (RIP) and Border Gateway Protocol (BGP). Compare static routing. dynamic snapshot In merge replication, a snapshot that includes only the data from a single partition. eager loading A pattern of loading where a specific set of related objects are loaded along with the objects that were explicitly requested in the query. edge event An event whose event payload is valid for a given interval; however, only the start time is known upon arrival to the CEP server. The valid end time of the event is provided later in a separate edge event. effective policy The set of enabled policies for a target. The exchange of text messages and computer files over a communications network, such as a local area network or the Internet. encryption The process of converting readable data (plaintext) into a coded form (ciphertext) to prevent it from being read by an unauthorized party. encryption key end cap endpoint A synchronization provider and its associated replica. endpoint mapper A service on a remote procedure call (RPC) server that maintains a database of dynamic endpoints and allows clients to map an interface/object UUID pair to a local dynamic endpoint. enqueue To place (an item) in a queue. enroll To add an instance of SQL Server to the set of SQL Server instances managed by a utility control point. enrollment Enrollment Agent A user account used to request smart card certificates on behalf of another user account. A specific certificate template is applied to an Enrollment Agent. enterprise license A license that authorizes protection of both file and application resources on a single computer. entity In Reporting Services, a logical collection of model items, including source fields, roles, folders, and expressions, presented in familiar business terms. entity integrity A state in which every row of every table can be uniquely identified. envelopes formula A financial formula that calculates "envelopes" above and below a moving average using a specified percentage as the shift. The envelopes indicator is used to create signals for buying and selling. You can specify the percentage the formula uses to calculate the envelopes. equijoin A join in which the values in the columns being joined are compared for equality, and all columns are included in the results. equirectangular projection In a map report item, a very simple equidistant cylindrical projection in which the horizontal coordinate is the longitude and the vertical coordinate is the latitude. error handling The process of dealing with errors (or exceptions) as they arise during the running of a program. Some programming languages, such as C++, Ada, and Eiffel, have features that aid in error handling. error log A file that lists errors that were encountered during an operation. error state number A number associated with SQL Server messages that helps Microsoft support engineers find the specific code location that issued the message. ETL The act of extracting data from various sources, transforming data to consistent types, and loading the transformed data for use by applications. ETW-based log sink A means of capturing trace events on the cache client or cache host with the Event Tracing for Windows (ETW) framework inside Windows.. Any significant occurrence in the system or an application that requires a user to be notified or an entry to be added to a log. event category In SQL Trace, a grouping of similar and logically related event classes. event chronicle A table that stores event state information. event chronicle rule One or more Transact-SQL statements that manage the data in the event chronicle. event class In SQL Trace, a collection of properties that define an event. event classification A means of differentiating types of events that occur on the cache client and cache host. The Windows Server AppFabric log sinks follow the classification established with the System.Diagnostics.TraceLevel enumeration. event collection stored procedures System-generated stored procedures that an application can call to submit events to the event table in the application database. event handler A software routine that executes in response to an event. event header The portion of an event that defines the temporal properties of the event and the event kind. Temporal properties include a valid start time and end time associated with the event. event kind Event metadata that defines the event type. event model The event metadata that defines the temporal characteristics (shape) of the event. event notification A special kind of trigger that sends information about database events to a service broker. event payload The data portion of an event in which the data fields are defined as CLR (common language runtime) types. An event payload is a typed structure. event provider A provider that monitors a source of events and notifies the event table when events occur. event source The point of origin of an event. event table A table in the application database that stores event data. Event Tracing for Windows (ETW)-based log sink Everyone A type of user account. eviction The physical removal of a cached object from the memory of the cache host or hosts that it is stored on. This is typically done to keep the memory usage of the cache host service in check. exclusive lock A lock that prevents any other transaction from acquiring a lock on a resource until the original lock on the resource is released at the end of the transaction. execute To perform an instruction. execution tree The path of data in the data flow of a SQL Server 2005 Integration Services package from sources through transformations to destinations. exit module A Certificate Services component that performs post-processing after a certificate is issued, such as the publication of an issued certificate to Active Directory. expiration The point at which an object has exceeded the cache time-out value. When an object expires, it is evicted. explicit cap An explicit hierarchy used as the top level of a derived hierarchy structure. explicit hierarchy In Master Data Services, a hierarchy that uses consolidated members to group other consolidated and leaf members. explicit loading A pattern of loading where related objects are not loaded until explicitly requested by using the Load method on a navigation property. explicit transaction A group of SQL statements enclosed within transaction delimiters that define both the start and end of the transaction. exploded pie A pie chart that displays the contribution of each value to a total while emphasizing individual values, by showing each slice of the pie as "pulled out," or separate, from the whole. exploded pie chart exponential moving average A moving average of data that gives more weight to the more recent data in the period and less weight to the older data in the period. The formula applies weighting factors which decrease exponentially. The weighting for each older data point decreases exponentially, giving much more importance to recent observations while still not discarding older observations entirely. export format UI text for subscriptions and HTML viewer. Corresponds to rendering extensions. expression Any combination of operators, constants, literal values, functions, and names of fields (columns), controls, and properties that evaluates to a single value. expression host assembly All expressions found within a report are that are compiled into an assembly. The expression host assembly is stored as a part of the compiled report. extended permission A permission that is specific to an object added to the standard Active Directory object schema. The permission associated with the new object extends the existing default permission set. Extended property User-defined text (descriptive or instructional including input masks and formatting rules) specific to a database or database object. The text is stored in the database as a property of the database or object. Extended Protection for Authentication A security feature that helps protect against man-in-the-middle (MITM) attacks. Transformation A declarative, XML-based language that is used to present or transform XML data. Extensible Stylesheet Language Transformations extent On a disk or other direct-access storage device, a continuous block of storage space reserved by the operating system for a particular file or program. external delivery system A system, such as Microsoft Exchange Server, that delivers formatted notifications to destination devices. extract To build a DAC package file that contains the definitions of all the objects in an existing database, as well as instance objects that are associated with the database. extraction, transformation, and loading facet Facet facet property A predefined property that applies to a specific facet in Policy-Based Management. fact A row in a fact table in a data warehouse. A fact contains values that define a data event such as a sales transaction. fact dimension fact table A central table in a data warehouse schema that contains numerical measures and keys relating facts to dimension tables. factory method A method, usually defined as static, whose purpose is to return an instance of a class. fail over To switch processing from a failed component to its backup component. failed transaction A transaction that encountered an error and was not able to complete. failover cluster A group of servers that are in one location and that are networked together for the purpose of providing live backup in case one of the servers fails. failover clustering A high availability process in which an instance of an application or a service, running over one machine, can fail-over onto another machine in the failover cluster in the case the first one fails. failover partner The server used if the connection to the partner server fails. fail-safe operator A user who receives the alert if the designated operator cannot be reached. failure notification A type of cache notification triggered when the cache client misses one or more cache notifications. fatal error federated database servers A set of linked servers that shares the processing load of data by hosting partitions of a distributed partitioned view. feed consumer A software component that extracts items from a FeedSync feed and applies them to a destination replica by using a synchronization provider. fiber A Windows NT lightweight thread scheduled within a single OS thread. fiber mode A situation where an instance of SQL Server allocates one Windows thread per SQL scheduler, and then allocates one fiber per worker thread, up to the value set in the max worker threads option. field An area in a window or record that stores a single data value. field length In bulk copy, the maximum number of characters needed to represent a data item in a bulk copy character format data file. field marshaller A SQL Server feature that handles marshaling for fields. field terminator In bulk copy, one or more characters marking the end of a field or row, separating one field or row in the data file from the next. file backup A backup of all the data in one or more files or filegroups. file differential backup A backup of one or more files containing only changes made to each file since its most recent file backup. A file differential backup requires a full file backup as a base. The process when a program closes a file, based on a certain event, and creates a new file. filegroup forgotten knowledge The knowledge that is used as the starting point for filter tracking. A filter-tracking replica can save storage space by removing ghosts and advancing the filter forgotten knowledge to contain the highest version of the ghosts that have been removed. filter key A 4-byte value that maps to a filter in a filter key map. filtered replica A replica that stores item data only for items that are in a filter, such as a media storage replica that stores only songs that are rated as three stars or better. filter-tracking replica A replica that can identify which items are in a filter and which have moved in or out of the filter recently. fine-grained lock A lock that applies to a small amount of code or data. fit One of the criteria used for evaluating the success of a data mining algorithm. Fit is typically represented as a value between 0 and 1, and is calculated by taking the covariance between the predicted and actual values of evaluated cases and dividing by the standard deviations of the same predicted and actual values.. FK A key in a database table that comes from another table (also know as the "referenced table") and whose values match the primary key (PK) or unique key in the referenced table. flat file A file consisting of records of a single record type in which there is no embedded structure information that governs relationships between records. flatten To convert a nested structure into a flat structure. flattened interface An interface created to combine members of multiple interfaces. flattened rowset A multidimensional data set presented as a two-dimensional rowset in which unique combinations of elements of multiple dimensions are combined on an axis. flexible ID An identifier that is assigned to various synchronization entities, such as replicas. The identifier can be of fixed or variable length. flexible identifier fold count A value that represents the number of partitions that will be created within the original data set. folder hierarchy A bounded namespace that uniquely identifies all reports, folders, shared data source items, and resources that are stored in and managed by a report server. forced service In a database mirroring session, a failover initiated by the database owner upon the failure of the principal server that transfers service to the mirror database while it is in an unknown state. Data may be lost. foreign key foreign key association An association between entities that is managed through foreign key properties.. free-form language A language whose syntax is not constrained by the position of characters on a line. C and Pascal are free-form languages; FORTRAN is not. full backup A backup of an entire database. full differential backup A backup of all files in the database, containing only changes made to the database since the most recent full backup. A full differential backup requires a full backup as a base. full outer join A type of outer join in which all rows in all joined tables are included, whether they are matched or not. For example, a full outer join between titles and publishers shows all titles and all publishers, even those that have no match.. full-text catalog A collection of full-text index components and other files that are organized in a specific directory structure and contain the data that is needed to perform queries. full-text enabling The process of allowing full-text querying to occur on the current database. In a double-byte character set, a character that is represented by 2 bytes and typically has a half-width variant. function A piece of code that operates as a single logical unit. A function is called by name, accepts optional input parameters, and returns a status and optional output parameters. Many programming languages support functions.. GAC A computer-wide code cache that stores assemblies specifically installed to be shared by many applications on the computer. gap depth A measure that specifies the distance between data series that are displayed along distinct rows, as a result of clustering. garbage collection A process for automatic recovery of heap memory. Blocks of memory that had been allocated but are no longer in use are freed, and blocks of memory still in use may be moved to consolidate the free memory into larger blocks. garbage collector The part of the operating system that performs garbage collection. gated link A protected link between two or more objects. During execution, permissions are not checked across the object relationship once it has been established and credentials don't have to be checked several times. This type of link is useful when it is not appropriate or manageable to give permissions to many dependent objects. gather-write operation A performance optimization where the Database Engine collects multiple modified data pages into a single write operation. GC generated code Code that is automatically generated for the client project based on operations and entities exposed in the middle tier when a RIA Services link exists between the server and client projects. generator The component of Notification Services that matches events to subscriptions and produces notifications. geographic data A type of spatial data that stores ellipsoidal (round-earth) data, such as GPS latitude and longitude coordinates. geometric data A type of spatial data that supports planar, or Euclidean (flat-earth), data. ghost An item or change unit in a filtered replica that was in the filter and has moved out. ghost record Row in the leaf level of an index that has been marked for deletion, but has not yet been deleted by the database engine. ghost row global assembly cache global default A default that is defined for a specific database and is shared by columns of different tables. global ID A unique identifier that is assigned to a data item. The identifier must be unique across all clients. A global identifier is a flexible identifier and so can be any format, but it is typically a GUID and an 8-byte prefix. global identifier global rule A rule that is defined for a specific database and is shared by columns of different tables. global subscription A subscription to a merge publication with an assigned priority value used for conflict detection and resolution. globalization The process of designing and developing a software product to function in multiple locales. Globalization involves identifying the locales that must be supported, designing features that support those locales, and writing code that functions equally well in any of the supported locales. granularity A description, from "coarse" to "fine", of a computer activity or feature (such as screen resolution, searching and sorting, or time slice allocation) in terms of the size of the units it handles (pixels, sets of data, or time slices). The larger the pieces, the coarser the granularity. granularity attribute The single attribute is used to specify the level of granularity for a given dimension in relation to a given measure group. graphical query designer A query designer provided by the Reporting Services that allows the user to interactively build a query and view the results for data source types SQL Server, Oracle, OLE DB, and ODBC. graphics primitive A basic shape (a point, a line, circle, curve, or polygon) that a graphics adapter can manipulate as a discrete entity. group A collection of users, computers, contacts, and other groups that is used as security or as e-mail distribution collections. Distribution groups are used only for e-mail. Security groups are used both to grant access to resources and as e-mail distribution lists. grouping A set of data that is grouped together in a report. half-width character In a double-byte character set, a character that is represented by one byte and typically has a full-width variant. hard disk An inflexible platter coated with material in which data can be recorded magnetically with read/write heads. hard page-break renderer A rendering extension that maintains the report layout and formatting so that the resulting file is optimized for a consistent printing experience, or to view the report online in a book format. hard-coding Basing numeric constants on the assumed length of a string; assumptions about language or culture-specific matters fixed in the code - e.g., string length, date formats, etc. The process of putting string or character literals in the main body of code, instead of in external resource files. hardware security module A secure device that provides cryptographic capabilities, typically by providing private keys used in Public-key cryptography. hardware token hash partitioning A way of partitioning a table or index by allowing SQL Server to apply an internal hash algorithm to spread rows across partitions based on the number of partitions specified and the values of one or more partitioning columns. heat map A type of map presentation where the intensity of color for each polygon corresponds to the related analytical data. For example, low values in a range appear as blue (cold) and high values as red (hot). helpdesk An individual or team of support professionals that provide technical assistance for an organization's network, hardware devices, and software. heterogeneous data Data stored in multiple formats. hierarchy tree A structure in which elements are related to each other hierarchically. high availability A Windows Server AppFabric feature that supports continuous availability of cached data by storing copies of that data on multiple cache hosts. The ability of a system or device to be usable when it is needed. When expressed as a percentage, high availability is the actual service time divided by the required service time. Although high availability does not guarantee that a system will have no downtime, a network often is considered highly available if it achieves 99.999 percent network uptime. high watermark A memory consumption threshold on each cache host that specifies when objects are evicted out of memory, regardless of whether they have expired or not, until memory consumption goes back down to the low watermark. high whisker The highest value that is not an outlier on a box plot chart. hint An option or strategy specified for enforcement by the SQL Server query processor on SELECT, INSERT, UPDATE, or DELETE statements. The hint overrides any execution plan the query optimizer might select for a query. history A list of the user's actions within a program, such as commands entered in an operating system shell, menus passed through using Gopher, or links followed using a Web browser. holdability Refers to the possibility of leaving result sets open ("on hold") that have been processed and are normally closed after this. For instance: "SQL Server supports holdability at the connection level only." holdout A percentage of training data that is reserved for use in measuring the accuracy of the structure of the data mining model. holdout data holdout store The data mining structure that is used to cache the holdout data. It contains references to the holdout data. Root folder in report server folder namespace. A document that serves as a starting point in a hypertext system. On the World Wide Web, an entry page for a set of Web pages and other files in a Web site. The home page is displayed by default when a visitor navigates to the site using a Web browser. homogeneous data Data that comes from multiple data sources that are all managed by the same software. hop In data communications, one segment of the path between routers on a geographically dispersed network. hopping window A type of window in which consecutive windows "hop" forward in time by a fixed period. The window is defined by two time spans: the period P and the window length L. For every P time unit a new window of size L is created. horizontal partitioning To segment a single table into multiple tables based on selected rows. hot standby A standby server that can support rapid failover without a loss of data from committed transactions. hot standby server HSM HTML An application of the Standard Generalized Markup Language that uses tags to mark elements, such as text and graphics, in a document to indicate how Web browsers should display these elements to the user and should respond to user actions. HTML Viewer UI element consisting of a report toolbar and other navigation elements used to work with a report. hybrid OLAP A storage mode that uses a combination of multidimensional data structures and relational database tables to store multidimensional data. Hypertext Markup Language identifying field A field or group of fields that identify an entity as a unique object. identifying relationship A relationship where the primary key of the principal entity is part of the primary key of the dependent entity. In this kind of relationship, the dependent entity cannot exist without the principal entity. identity column A column in a table that has been assigned the identity property. identity property A property that generates values that uniquely identify each row in a table. ideograph A character in an Asian writing system that represents a concept or an idea, but not a particular word or pronunciation. ideographic character subscription A subscription to a transactional publication for which the user is able to make data modifications at the Subscriber. The data modifications are then immediately propagated to the Publisher using the two-phase commit protocol (2PC). implicit cursor conversion implicit transaction A connection option in which each SQL statement executed by the connection is considered a separate transaction. implied permission Permission to perform an activity specific to a role. inactive data source A data source that has been backed up on the DPM server but is no longer being actively protected. included column index A nonclustered index containing both key and nonkey columns. incoming message A message that has been sent across one or more messaging systems. It may have been sent only to you or to many other recipients. Incoming messages are placed in a receive folder designated to hold messages of a particular class. You can set up a different receive folder for each message class that you handle or use one folder for all of the classes. incremental update The set of operations that either adds new members to an existing cube or dimension, or adds new data to a partition. independent association An association between entities that is represented and tracked by an independent object. Index Allocation Map A page that maps the extents in a 4-GB part of a database file that is used by an allocation unit. index page A database page containing index rows. indexed view A view. information technology The formal name for a company's data processing department.. in-person authentication Physical authentication to complete a certificate request transaction. For example, an end user requesting his/her personal identification number (PIN) be unblocked will visit a certificate manager in person to provide in-person authentication with identification, such as an employee badge or drivers license. InProc A circumstance where the COM object’s code is loaded from a DLL file and is located in the same process as the client. input adapter An adapter that accepts incoming event streams from external sources such as databases, files, ticker feeds, network ports, manufacturing devices and so on.. input stream A flow of information used in a program as a sequence of bytes that are associated with a particular task or destination. Input streams include series of characters read from the keyboard to memory and blocks of data read from disk files. insensitive cursor A cursor that does not reflect data modification made to the underlying data by other users while the cursor is open. insert event The event kind used to signify the arrival of an event into the stream. The insert event type consists of metadata that defines the valid lifetime of the event and the payload (data) fields of the event. Insert Into query A query that copies specific columns and rows from one table to another or to the same table. Insert Values query A query (SQL statement) that creates a new row and inserts values into specified columns. instance A copy of SQL Server running on a computer. instance control provider A provider that allows you to issue control commands against workflow instances in an instance store. For example, a SQL control provider lets you suspend, resume, or terminate instances stored in a SQL Server database. When you execute a cmdlet that controls a workflow instance in an instance store, the cmdlet internally uses the control provider for that instance store to send commands to the instance. instance query provider A provider that allows you to issue queries against an instance store. For example, a SQL query provider lets you query for workflow instances stored in a SQL Server database. When you execute a cmdlet that queries for instances against an instance store, the cmdlet internally uses the query provider to retrieve instances from that store. instance store A set of database tables that store workflow instance state and workflow instance metadata. instance store provider In Windows AppFabric, a provider that allows you to create instance store objects. For example, a SQL store provider allows clients to create SQL workflow instance store objects, which in turn allows clients to save and retrieve workflow instances to and from a persistence store. integration In computing, the combining of different activities, programs, or hardware components into a functional unit. integrity integrity constraint A property defined on a table that prevents data modifications that would create invalid data. intent lock A lock that is placed on one level of a resource hierarchy to protect shared or exclusive locks on lower-level resources. intent share. intermediate language A computer language used as an intermediate step between the original source language, usually a high-level language, and the target language, usually machine code. Some high-level compilers use assembly language as an intermediate language. International Electrotechnical Commission Internet Protocol security A set of industry-standard, cryptography-based services and protocols that help to protect data over a network. interprocess communication The ability of one task or process to communicate with another in a multitasking operating system. Common methods include pipes, semaphores, shared memory, queues, signals, and mailboxes. interval event An event whose payload is valid for a given period of time. The metadata of the interval event requires that both the start and end time of the interval be provided in the event metadata. Interval events are valid only for this specific interval. interval event model The event model of an interval event. invoke operation A domain operation that is executed without tracking or deferred execution. IP address A binary number that uniquely identifies a host (computer) connected to the Internet to other Internet hosts, for the purposes of communication through the transfer of packets. IPC IPsec isolation level The property of a transaction that controls the degree to which data is isolated for use by one process, and is guarded against interference from other processes. ISQL item A unit of data or metadata that is being synchronized. A typical item of data might be a file or record, whereas a typical item of metadata might be a knowledge item. item-level role assignment A security policy that applies to an item in the report server folder namespace. item-level role definition A security template that defines a role used to control access to or interaction with an item in the report server folder namespace. iterate To execute one or more statements or instructions repeatedly. Statements or instructions so executed are said to be in a loop. job A specified series of operations, called steps, performed sequentially by a program to complete an action. job history Log that keeps a historical record of jobs. join To combine the contents of two or more tables and produce a result set that incorporates rows and columns from each table. Tables are typically joined using data that they have in common.. Kagi chart A chart, mostly independent of time, used to track price movements and to make decisions on purchasing stock. key A string that identifies an object in the cache. This string must be unique within a region. Objects are associated with a key when they are added and then retrieved with the same key. In encryption, authentication, and digital signatures, a value used in combination with an algorithm to encrypt or decrypt information. In an array, the field by which stored data is organized and accessed. A column or group of columns that uniquely identifies a row (primary key), defines the relationship between two tables (foreign key), or is used to build an index. key attribute The attribute of a dimension that links the non-key attributes in the dimension to related measures. key column A column whose contents uniquely identify every row in a table. key generator A hardware or software component that is used to generate encryption key material. key performance indicator. key range lock A lock that is used to lock ranges between records in a table to prevent phantom additions to, or deletions from, a set of records. Ensures serializable transactions. key recovery The process of recovering a user's private key. Key Recovery Agent A designated user that works with a certificate administrator to recover a user’s private key. A specific certificate template is applied to a Key Recovery Agent. keyset-driven cursor A cursor that shows the effects of updates made to its member rows by other users while the cursor is open, but does not show the effects of inserts or deletes. knowledge The metadata about all the changes that a participant has seen and maintains. KPI KRA Language for non-Unicode. language service parser A component that is used to describe the functions and scope of the tokens in source code. language service scanner A component that is used to identify types of tokens in source code. This information is used for syntax highlighting and for quickly identifying token types that can trigger other operations, for example, brace matching. latch A short-term synchronization object protecting actions that need not be locked for the life of a transaction. A latch is primarily used to protect a row that the storage engine is actively transferring from a base table or index to the relational engine. latency The delay that occurs while data is processed or delivered. lazy loading A pattern of data loading where related objects are not loaded until a navigation property is accessed. lazy schema validation An option that delays checking the remote schema to validate its metadata against a query until execution in order to increase performance. lead byte The byte value that is the first half of a double-byte character. lead host A cache host that has been designated to work with other lead hosts and to keep the cluster running at all times. leaf A node with no child objects represented in the tree. leaf level The bottom level of a clustered or nonclustered index, or the bottom level of a hierarchy. leaf member A member that has no descendents. leaf node learned knowledge The current knowledge of a source replica about a specific set of changes, and the logged conflicts of that replica. least recently used The type of eviction used by the cache cluster, where least recently used objects are evicted before the most recently used objects. lift chart In Analysis Services, a chart that compares the accuracy of the predictions of each data mining model in the comparison set. lightweight pooling An option that provides a means of reducing the system overhead associated with the excessive context switching sometimes seen in symmetric multiprocessing (SMP) environments by performing the context switching inline, thus helping to reduce user/kernel ring transitions. line layer The layer in a map report that displays spatial data as lines, for example, lines that indicate paths or routes. linked dimension A reference in a cube to a dimension in a different cube (that is, a cube with a different data source view that exists in the same or a different Analysis Services database). A linked dimension can only be related to measure groups in the source cube, and can only be edited in the source database. linked measure group A reference in a cube to a measure group in a different cube (that is, a cube with a different data source view that exists in the same or a different Analysis Services database). A linked measure group can only be edited in the source database. linked server A definition of an OLE DB data source used by SQL Server distributed queries. The linked server definition specifies the OLE DB provider required to access the data, and includes enough addressing information for the OLE DB provider to connect to the data.. LINQ A query syntax that defines a set of query operators that allow traversal, filter, and projection operations to be expressed in a direct, declarative way in any .NET-based programming language. little endian Pertaining to a processor memory architecture that stores numbers so that the least significant byte is placed first. local cache A feature that enables deserialized copies of cached objects to be saved in the memory of the same process that runs the cache-enabled application. local Distributor A server that is configured as both a Publisher and a Distributor for SQL Server Replication. local partitioned view A view that joins horizontally partitioned data from a set of member tables across a single server, making the data appear as if from one table. local subscription locale A collection of rules and data specific to a language and a geographic area. Locales include information on sorting rules, date and time formatting, numeric and monetary conventions, and character classification. localization The process of adapting a product and/or content (including text and non-text elements) to meet the language, cultural, and political expectations and/or requirements of a specific local market (locale). lock A restriction on access to a resource in a multiuser environment. lock escalation The process of converting many fine-grain locks into fewer coarse-grain locks, thereby reducing system overhead. log backup A backup of transaction logs that includes all log records not backed up in previous log backups. Log backups are required under the full and bulk-logged recovery models and are unavailable under the simple recovery model. log chain A continuous sequence of transaction logs for a database. A new log chain begins with the first backup taken after the database is created, or when the database is switched from the simple to the full or bulk-logged recovery model. A log chain forks after a restore followed by a recovery, creating a new recovery branch. log provider A provider that logs package information at run time. Integration Services includes a variety of log providers that make it possible to capture events during package execution. Logs are created and stored in formats such as XML, text, database, or in the Windows event log. Log Reader Agent In Replication, the executable that monitors the transaction of each database configured for transactional replication, and copies the transactions marked for replication from the transaction into the distribution database. log sequence number A unique number assigned to each entry in a transaction log. LSNs are assigned sequentially according to the order in which entries are created. log shipping Copying, at regular intervals, log backup from a read-write database (the primary database) to one or more remote server instances (secondary servers). Each secondary server has a read-only database, called a secondary database, that was created by restoring a full backup of the primary database without recovery. The secondary server restores each copied log backup to the secondary database. The secondary servers are warm standbys for the primary server. log shipping configuration A single primary server, one or more secondary servers (each with a secondary database), and a monitor server. log shipping job A job performing. See also: primary database, primary server, secondary database, secondary server. log sink A tracing function of the cache client and cache host. Log sinks capture trace events from the cache client or cache host and can display them in a console, write them to a log file, or report them to the Event Tracing for Windows (ETW) framework inside Windows. logic error An error, such as a faulty algorithm, that causes a program to produce incorrect results but does not prevent the program from running. Consequently, a logic error is often very difficult to find. logical name A name used by SQL Server to identify a file. logical record A merge replication feature that allows you to define a relationship between related rows in different tables so that the rows are processed as a unit. login ID A string that is used to identify a user or entity to an operating system, directory service, or distributed system. For example, in Windows® integrated authentication, a login name uses the form "DOMAIN\username." login security mode A security mode that determines the manner in which an instance of SQL Server validates a login request. long parsing In the SQL system there are two typical types of threads: - short thread: it is a process that use the resources for a short time and - long thread: it is a process that use the resources for a long time. long parsing: is the analysis of the threads that lived for a long time Note: the definition of short/long is based on the system calculation/statistic for each process. Low Box The lowest value of a box on a Box Plot chart. low watermark A memory consumption threshold on each cache host that specifies when expired objects are evicted out of memory. low whisker The lowest value that is not an outlier on a box plot chart. LRU LSN luring attack An attack in which the client is lured to voluntarily connect to the attacker. made-with knowledge In synchronization processes, the current knowledge of the source replica, to be used in conflict detection. Manage Relationships A UI element that enables a user to view, delete or create new relationships in a model.. managed instance An instance of SQL Server monitored by a utility control point. management data warehouse A relational database that is used to store data that is collected. management policy A definition of the workflows used for managing certificates within a Certificate Lifecyle Manager (CLM) profile template. A management policy defines who performs specific management tasks within the workflows, and provides management details for the entire lifecycle of the certificates within the profile template. Management Studio A suite of management tools included with Microsoft SQL Server for configuring, managing, and administering all components within Microsoft SQL Server.. To define this relationship between the dimension and the fact table, the dimension is joined to an intermediate fact table and the intermediate fact table is joined, in turn, to an intermediate dimension table that is joined to the fact table. many-to-one relationship A relationship between two tables in which one row in one table can relate to many rows in another table. map To associate data with a specified location in memory. map control A JavaScript control that contains the objects, methods, and events that you need to display maps powered by Bing Maps™ on your Web site. map gallery A gallery that contains maps from reports that are located in the map gallery folder for the report authoring environment. map layer A child element of the map, each map layer including elements for their map members and map member attributes. map resolution The accuracy at which the location and shape of map features can be depicted for a given map scale. In a large scale map (e.g. a map scale of 1:1) there is less reduction of features than those shown on a small scale map (e.g. 1:1,000,000). map tile One of a number of 256 x 256 pixel images that are combined to create a Bing map. A map tile contains a segment of a view of the earth in Mercator projection, with possible road and text overlays depending on the style of the Bing map. map viewport The area of the map to display in the map report item. For example, a map for the entire United States might be embedded in a report, but only the area for the northwestern states are displayed. MAPI A messaging architecture that enables multiple applications to interact with multiple messaging systems across a variety of hardware platforms. MAPI is built on the Component Object Model (COM) foundation. mapper A component that maps objects. marker A visual indicator that identifies a data point. In a map report, a marker is the visual indicator that identifies the location of each point on the point layer. marker map market basket analysis A standard data mining algorithm that analyzes a list of transactions to make predictions about which items are most frequently purchased together. master data The critical data of a business, such as customer, product, location, employee, and asset. Master data fall generally into four groupings: people, things, places, and concepts and can be further categorized. For example, within people, there are customer, employee, and salesperson. Within things, there are product, part, store, and asset. Within concepts, there are things like contract, warrantee, and licenses. Finally, within places, there are office locations and geographic divisions. master data management The technology, tools, and processes required to create and maintain consistent and accurate lists of master data of an organization. Master Data Manager A component of the Master Data Services application for managing and accessing master data. Master Data Services A master data management application to consistently define and manage the critical data entities of an organization. merge The process of combining shadow indexes with the current master index to form a new master index. master server A server that distributes jobs and receives events from multiple servers. materialized view A view in which the query result is cached as a concrete table that may be updated from the original base tables from time to time.. MDS MDX A language for querying and manipulating data in multidimensional objects (OLAP cubes). measure In a cube, a set of values that are usually numeric and are based on a column in the fact table of the cube. Measures are the central values that are aggregated and analyzed. measure group A collection of related measures in an Analysis Services cube. The measures are generally from the same fact table. media family Data written by a backup operation to a backup device used by a media set. In a media set with only a single device, only one media family exists. In a striped media set, multiple media families exist. If the striped media set is unmirrored, each device corresponds to a family. A mirrored media set contains from two to four identical copies of each media family (called mirrors). Appending backups to a media set extends its media families. media header A label that provides information about the backup media. media set An ordered collection of backup media written to by one or more backup operations using a constant number of backup devices. median The middle value in a set of ordered numbers. The median value is determined by choosing the smallest value such that at least half of the values in the set are no greater than the chosen value. If the number of values within the set is odd, the median value corresponds to a single value. If the number of values within the set is even, the median value corresponds to the sum of the two middle values divided by two. median price formula A formula that calculates the average of the high and low prices. median value member member delegation A modeling concept that describes how interface members are mapped from one interface to another. member expression A valid Multidimensional Expressions (MDX) expression that returns a member. member property memo A type of column that contains long strings, typically more than 255 characters. memory broker A software component that manages the distribution of memory resources in SQL Server. memory clerk A memory management component that allocates memory. merge replication A type of replication that allows sites to make autonomous changes to replicated data, and at a later time, merge changes and resolve conflicts when necessary. merge tombstone A marker created when a constraint conflict is resolved by merging the two items in conflict. message number A number that identifies a SQL Server error message. Message Queuing A Microsoft technology that enables applications running at different times to communicate across heterogeneous networks and systems that may be temporarily offline. message type A definition of a Service Broker message. The message type specifies the name of the message and the type of validation Service Broker performs on incoming messages of that type. Messaging Application Programming Interface metadata Information about the properties or structure of data that is not part of the values the data contains. method In object-oriented programming, a named code block that performs a task when called. Microsoft Message Queuing Microsoft Sequence Clustering algorithm Algorithm that is a combination of sequence analysis and clustering, which identifies clusters of similarly ordered events in a sequence. The clusters can be used to predict the likely ordering of events in a sequence based on known characteristics. Microsoft SQL Server A family of Microsoft relational database management and analysis systems for e-commerce, line-of-business, and data warehousing solutions. Microsoft SQL Server 2008 Express A lightweight and embeddable version of Microsoft SQL Server 2008. Microsoft SQL Server 2008 Express with Advanced Services A Microsoft relational database design and management system for e-commerce, line-of-business, and data warehousing solutions. Microsoft SQL Server 2008 Express with Tools A free, easy-to-use version of the SQL Server Express data platform that includes the graphical management tool: SQL Server Management Studio (SMSS) Express. Microsoft SQL Server Compact A Microsoft relational database management and analysis system for e-commerce, line-of-business, and data warehousing solutions. Microsoft SQL Server Compact 3.5 for Devices A file that installs the SQL Server Compact 3.5 devices runtime components. Microsoft SQL Server Notification Services A Microsoft SQL Server add-in that provides a development framework and hosting server for building and deploying notification applications. Microsoft SQL Server PowerPivot for Microsoft Excel A SQL Server add-in for Excel. Microsoft SQL Server PowerPivot for Microsoft SharePoint A Microsoft SQL Server technology that provides query processing and management control for PowerPivot workbooks published to SharePoint. Microsoft SQL Server Reporting Services Report Builder A report authoring tool that features a Microsoft Office-like authoring environment and features such as new sparkline, data bar, and indicator data visualizations, the ability to save report items as report parts, a wizard for creating maps, aggregates of aggregates, and enhanced support for expressions. Microsoft SQL Server Service Broker A technology that helps developers build scalable, secure database applications. Microsoft SQL Server System CLR Types A stand-alone package, part of SQL Server 2008 R2 Feature Pack, that contains the components implementing the geometry, geography, and hierarchy id types in SQL Server. Microsoft Time Series algorithm Algorithm that uses a linear regression decision tree approach to analyze time-related data, such as monthly sales data or yearly profits. The patterns it discovers can be used to predict values for future time steps. middleware Software that sits between two or more types of software and translates information between them. Middleware can cover a broad spectrum of software and generally sits between an application and an operating system, a network operating system, or a database management system. structure A data mining object that defines the data domain from which the mining models are built. minor tick mark A tick mark that corresponds to a minor scaling unit on an axis. mirror database In a database mirroring session, the copy of the database that is normally fully synchronized with the principal database. mirror server In a database mirroring configuration, the server instance where the mirror database resides. The mirror server is the mirroring partner whose copy of the database is currently the mirror database. The mirror server is a hot standby server. mirrored media set A media set that contains two to four identical copies (mirrors) of each media family. Restore operations require only one mirror per family, allowing a damaged media volume to be replaced by the corresponding volume from a mirror. mirroring model database A database that is installed with Microsoft SQL Server and that provides the template for new user databases. SQL Server creates a database by copying in the contents of the model database and then expanding the new database to the size requested. model dependency A relationship between two or more models in which one model is dependent on the information of another model. modulo An arithmetic operation whose result is the remainder of a division operation. For example, 17 modulo 3 = 2 because 17 divided by 3 yields a remainder of 2. Modulo operations are used in programming. monitor server In a log shipping configuration, a server instance on which every log shipping job in the configuration records its history and status. Each log shipping configuration has its own dedicated monitor server. msdb A database that stores scheduled jobs, alerts, and backup/restore history information. MSMQ multibase differential A differential backup that includes files that were last backed up in distinct base backups. multibyte character set multicast delivery A method for delivering notifications that formats a notification once and sends the resulting message to multiple subscribers. multidimensional expression multidimensional OLAP A storage mode that uses a proprietary multidimensional structure to store a partition's facts and aggregations or a dimension. multidimensional structure A database paradigm that treats data as cubes that contain dimensions and measures in cells. multiplier In arithmetic, the number that indicates how many times another number (the multiplicand) is multiplied. multiserver administration The process of automating administration across multiple instances of SQL Server. multithreaded server application An application that creates multiple threads within a single process to service multiple user requests at the same time. multiuser Pertaining to any computer system that can be used by more than one person. Although a microcomputer shared by several people can be considered a multiuser system, the term is generally reserved for machines that can be accessed simultaneously by several people through communications facilities or via network terminals. mutex A programming technique that ensures that only one program or routine at a time can access some resource, such as a memory location, an I/O port, or a file, often through the use of semaphores, which are flags used in programs to coordinate the activities of more than one program or routine. My Reports A personalized workspace. My Subscriptions A page that lists all subscriptions that a user owns. named cache A configurable unit of in-memory storage that has policies associated with it and that is available across all cache hosts in a cache cluster. named instance An installation of SQL Server that is given a name to differentiate it from other named instances and from the default instance on the same computer. named pipe A portion of memory that can be used by one process to pass information to another process, so that the output of one is the input of the other. The second process can be local (on the same computer as the first) or remote (on a networked computer). named update A custom service operation that performs an action which is different than a simple query, update, insert, or delete operation. named update method naming convention Any standard used more or less universally in the naming of objects, etc. national language support API Set of system functions in 32-bit Windows containing information that is based on language and cultural conventions. native compiler A compiler that produces machine code for the computer on which it is running, as opposed to a cross-compiler, which produces code for another type of computer. Most compilers are native compilers. native format A data format that maintains the native data types of a database. Native format is recommended when you bulk transfer data between multiple instances of Microsoft SQL Server using a data file that does not contain any extended/double-byte character set (DBCS) characters. natural hierarchy A hiearchy in which at every level there is a one-to-many relationship between members in that level and members in the next lower level. needle cap One of the two appearance properties that can be applied to a radial gauge. nested query A SELECT statement that contains one or more subqueries. Net-Library A SQL Server communications component that isolates the SQL Server client software and the Database Engine from the network APIs. network software Software that enables groups of computers to communicate, including a component that facilitates connection to or participation in a network. new line character A control character that causes the cursor on a display or the printing mechanism on a printer to move to the beginning of the next line. nickname When used with merge replication system tables, a name for another Subscriber that is known to already have a specified generation of updated data. niladic functions Functions that do not have any input parameters. NLS API node noise word A word such as 'the' or 'an' that is not useful for searches, or that a crawler should ignore when creating an index. nonclustered index An index in which the logical order of the index key values is different than the physical order of the corresponding rows in a table. The index contains row locators that point to the storage location of the table data. non-contained database A SQL Server database that stores database settings and metadata with the instance of SQL Server Database Engine where the database is installed, and requires logins in the master database for authentication. Nonkey index column Column in a nonclustered index that does not participate as a key column. Rather, the column is stored in the leaf-level of the index and is used in conjunction with the key columns to cover one or more queries.. normalization rule A design rule that minimizes data redundancy and results in a database in which the Database Engine and application software can easily enforce integrity. notification A message or announcement sent to the user or administrator of a system. The recipient may be a human or an automated notification manager. NSControl The command prompt utility for administering Notification Services instances and applications. NUL A "device," recognized by the operating system, that can be addressed like a physical output device (such as a printer) but that discards any information sent to it. null Pertaining to a value that indicates missing or unknown data. null key A null value that is encountered in a key column. null pointer A pointer to nothing: usually a standardized memory address, such as 0. A null pointer usually marks the last of a linear sequence of pointers or indicates that a data search operation has come up empty. nullability The attribute of a column, parameter, or variable that specifies whether it allows null data values. nullable property A property which controls if a field can have a NULL value. NUMA node A group of processors with its own memory and possibly its own I/O channels. numeric array An array composed of a collection of keys and a collection of values, where each key is associated with one value. The values can be of any type, but the keys must be numeric. numeric expression Any expression that evaluates to a number. The expression can be any combination of variables, constants, functions, and operators.). object variable A variable that contains a reference to an object. ODS library A set of C functions that makes an application a server. ODS library calls respond to requests from a client in a client/server network. Also manages the communication and data between the client and the server. ODS library follows the tabular data stream (TDS) protocol. Office File Validation A security feature that validates files before allowing them to be loaded by the application, in order to protect against file format vulnerabilities. offline restore A restore during which the database is offline. OLAP A technology that uses multidimensional structures to provide rapid access to data for analysis. The source data for OLAP is commonly stored in data warehouses in a relational database. OLAP cube OLAP database A relational database system capable of handling queries more complex than those handled by standard relational databases, through multidimensional access to data (viewing the data by several different criteria), intensive calculation capability, and specialized indexing techniques. one-time password Passwords produced by special password generating software or by a hardware token and that can be used only once. online analytical processing online restore A restore in which one or more secondary filegroups, files belonging to secondary filegroups, or pages are restored while the database remains online. Online restore is available only in the SQL Server 2005 Enterprise Edition (including the Evaluation and Developer Editions). Open Data Services library operation code The portion of a machine language or assembly language instruction that specifies the type of instruction and the structure of the data on which it operates.. A sign or symbol that specifies the type of calculation to perform within an expression. There are mathematical, comparison, logical, and reference operators. optimistic concurrency A method of managing concurrency by using a cached object's version information. Because every update to an object changes its version number, using version information prevents the update from overwriting someone else's changes. optimize synchronization An option in merge replication that allows you to minimize network traffic when determining whether recent changes have caused a row to move into or out of a partition that is published to a Subscriber. ordered set A set of members returned in a specific order. origin object An object in a repository that is the origin in a directional relationship. OTP outer join A join that includes all the rows from the joined tables that meet the search conditions, even rows from one table for which there is no matching row in the other join table. output adapter An adapter that receives events processed by the server, transforms the events into a format expected by the output device (a database, text file, PDA, or other device), and emits the data to that device. output column In SQL Server Integration Services, a column that the source adds to the data flow and that is available as input column to the next data flow component in the data flow. output stream A flow of information that leaves a computer system and is associated with a particular task or destination. overfitting A problem in data mining when random variations in data are misclassified as important patterns. Overfitting often occurs when the data set is too small to represent the real world. ownership chain When an object references other objects and the calling and the called objects are owned by the same user. SQL Server uses the ownership chain to determine how to check permissions. package A collection of control flow and data flow elements that runs as a unit. packet A unit of information transmitted from one computer or device to another on a network. Pad index An option that specifies the space to leave open on each page in the intermediate levels of the index. padding an embedded command page To return the results of a query in smaller subsets of data, thus making it possible for the user to navigate through the result set by viewing 'pages' of data. In a virtual storage system, a fixed-length block of contiguous virtual addresses copied as a unit from memory to disk and back during paging operations. page fault The interrupt that occurs when software attempts to read from or write to a virtual memory location that is marked "not present." page restore An operation that restores one or more data pages. Page restore is intended for repairing isolated damaged pages. pager A pocket-sized wireless electronic device that uses radio signals to record incoming phone numbers or short text messages. Some pagers allow users to send messages as well. paging system A system that allows users to send and receive messages when they are out of range. PAL The primary mechanism for securing the Publisher. It contains a list of logins, accounts, and groups that are granted access to the publication. parallel execution The apparently simultaneous execution of two or more routines or programs. Concurrent execution can be accomplished on a single process or by using time-sharing techniques, such as dividing programs into different tasks or threads of execution, or by using multiple processors. parallel processing A method of processing that can run only on a computer that contains two or more processors running simultaneously. Parallel processing differs from multiprocessing in the way a task is distributed over the available processors. In multiprocessing, a process might be divided up into sequential blocks, with one processor managing access to a database, another analyzing the data, and a third handling graphical output to the screen. Programmers working with systems that perform parallel processing must find ways to divide a task so that it is more or less evenly distributed among the processors available. parameterized query A query that accepts input values through parameters. parameterized report A published report that accepts input values through parameters. parameterized row filter partial backup A backup of all the data in the primary filegroup, every read-write filegroup, and any optionally specified files. A partial backup of a read-only database contains only the primary filegroup. partial database restore A restore of only a portion of a database consisting of its primary filegroup and one or more secondary filegroups. The other filegroups remain permanently offline, though they can be restored later. partial differential backup A partial backup that is differential relative to a single, previous partial backup (the base backup). For a read-only database, a partial differential backup contains only the primary filegroup. participant particle A very small piece or part; an indivisible object. partitioned table A table built on a partition scheme, and whose data is horizontally divided into units which may be spread across more than one filegroup in a database. partitioned table parallelism The parallel execution strategy for queries that select from partitioned objects. As part of the execution strategy, the query processor determines the table partitions that are required for the query and the proportion of threads to allocate to each partition. In most cases, the query processor allocates an equal or almost equal number of threads to each partition, and then executes the query in parallel across the partitions. phrase A sequence of words or other text used to gain access to a network, program, or data. A passphrase is generally longer for added security. pass-through query An SQL-specific query you use to send commands directly to an ODBC database server. pass-through statement A SELECT statement that is passed directly to the source database without modification or delay. password authentication password policy A collection of policy settings that define the password requirements for a Group Policy object (GPO). password provider A one-time-password generation and validation component for user authentication. path A data flow element that connects the output of one data flow component to the input of another data flow component. PBM A set of built-in functions that return server state information about values, objects, and settings in SQL Server. Policy Based Management allows a database administrator to declare the desired state of the system and checks the system for compliance with that state.. performance tools Tools that you can use to evaluate the performance of a solution. Performance tools can have different purposes; some are designed to evaluate end-to-end performance while others focus on evaluating performance of a particular aspect of a solution. persisted computed column A computed column of a table that is physically stored, and whose values are updated when any other columns that are part of its computation change. Applying the persisted property to a computed column allows for indexes to be created on it when the column is deterministic, but not precise. persistence database A type of persistence store that stores workflow instance state and workflow instance metadata in a SQL Server database. perspective A user-defined subset of a cube, whereas a view is a user-defined subset of tables and columns in a relational database. pessimistic concurrency A method of managing concurrency by using a lock technique to prevent other clients from updating the same object at the same time. phantom Pertaining to the insertion of a new row or the deletion of an existing row in a range of rows that were previously read by another task, where that task has not yet committed its transaction. PHP An open source scripting language that can be embedded in HTML documents to execute interactive functions on a Web server. It is generally used for Web development. PHP:Hypertext Preprocessor physical name The name of the path where a file or mirrored file is located. The default is the path of the Master.dat file followed by the first eight characters of the file's logical name. physical storage The amount of RAM memory in a system, as distinguished from virtual memory. Pickup folder The directory from which messages are picked up. piecemeal restore A composite restore in which a database is restored in stages, with each stage corresponding to a restore sequence. The initial sequence restores the files in the primary filegroup, and, optionally, other files, to any point in time supported by the recovery model and brings the database online. Subsequent restore sequences bring remaining files to the point consistent with the database and bring them online. PIN A unique and secret identification code similar to a password that is assigned to an authorized user and used to gain access to personal information or assets via an electronic device. PInvoke The functionality provided by the common language runtime to enable managed code to call unmanaged native DLL entry points. pivot To rotate a table-valued expression by turning the unique values from one column in the expression into multiple columns in the output, and perform aggregations where they are required on any remaining column values that are wanted in the final output. pivot currency The currency against which exchange rates are entered in the rate measure group. PivotTable An interactive technology in Microsoft Excel or Access that can show dynamic views of the same data from a list or a database. PL/SQL Oracle's data manipulation language that allows sequenced or grouped execution of SQL statements and is commonly used to manipulate data in an Oracle database. The syntax is similar to the Ada programming language. placeholder item An artificial address, instruction, or other datum fed into a computer only to fulfill prescribed conditions and not affecting operations for solving problems. plaintext plan guide A SQL Server module that attaches query hints to queries in deployed applications, without directly modifying the query. planar In computer graphics, lying within a plane. platform invoke platform layer The layer that includes the physical servers and services that support the services layer. The platform layer consists of many instances of SQL Server, each of which is managed by the SQL Azure fabric. PMML An XML-based language that enables sharing of defined predictive models between compliant vendor applications. Point and Figure chart A chart that plots day-to-day price movements without taking into consideration the passage of time. point depth The depth of data points displayed in a 3D chart area. point event An event occurrence as of a single point in time. Only the start time is required for the event. The CEP server infers the valid end time by adding a tick (the smallest unit of time in the underlying time data type) to the start time to set the valid time interval for the event. Point events are valid only for this single instant of time. point event model The event model of a point event. point layer The layer in a map report that displays spatial data as points, for examples, points that indicate cities or points of interest. pointer A needle, marker, or bar that indicates a single value displayed against a scale on a report. point-in-time recovery The process of recovering only the transactions within a log backup that were committed before a specific point in time, instead of recovering the whole backup. poison message A message containing information that an application cannot successfully process. A poison message is not a corrupt message, and may not be an invalid request. policy module A Certificate Services component that determines whether a certificate request should be automatically approved, denied, or marked as pending. policy-based management poller A component or interface that monitors the status of other components. It performs this function by repeatedly polling the other component to provide its current status. polling The process of periodically determining the status of each device in a set so that the active program can process the events generated by each device, such as whether a mouse button was pressed or whether new data is available at a serial port. This can be contrasted with event-driven processing, in which the operating system alerts a program or routine to the occurrence of an event by means of an interrupt or message rather than having to check each device in turn. polling query A singleton query that returns a value Analysis Services can use to determine if changes have been made to a table or other relational object. polygon layer The layer in a map report that displays spatial data as areas, for example, areas that indicate geographical regions such as counties. population positioned update An update, insert, or delete operation performed on a row at the current position of the cursor. PowerPivot PowerPivot data A SQL Server Analysis Services cube that is created and embedded through Microsoft SQL Server PowerPivot for Microsoft Excel. PowerPivot for SharePoint PowerPivot service A middle tier service of the Analysis Services SharePoint integration feature that allocates requests, monitors server availability and health, and communicates with other services in the farm. PowerPivot service application A specific configuration of the PowerPivot service. PowerPivot Web service A Web service that performs request redirection for processing requests that are directed to a PowerPivot Engine service instance that is outside the farm. PowerPivot workbook An Excel 2010 workbook that contains PowerPivot data. PowerShell-based cache administration tool The exclusive management tool for Windows Server AppFabric. With more than 130 standard command-line tools, this new administration-focused scripting language helps you achieve more control and productivity. precedence constraint A control flow element that connects tasks and containers into a sequenced workflow. precomputed partition A performance optimization that can be used with filtered merge publications. predictable column A data mining column that the algorithm will build a model around based on values of the input columns. Besides serving as an output column, a predictable column can also be used as input for other predictable columns within the same mining structure. prediction A data mining technique that analyzes existing data and uses the results to predict values of attributes for new records or missing attributes in existing records. For example, existing credit application data can be used to predict the credit risk for a new application. Prediction Calculator A new report that is based on logistic regression analysis and that presents each contributing factor together with a score calculated by the algorithm. The report is presented both as a worksheet that helps you enter data and make calculations of the probable outcomes, and as a printed report that does the same thing. prefix characters A set of 1 to 4 bytes that prefix each data field in a native-format bulk-copy data file. prefix length The number of prefix characters preceding each noncharacter field in a bcp native format data file. prerequisite knowledge The minimum knowledge that a destination provider is required to have to process a change or change batch. presentation model A data model that aggregates data from multiple entities in the data access layer. It is used to avoid directly exposing an entity to the client project. DPM server A DPM server that protects file or application data sources. primary key One or more fields that uniquely identify each record in a table. In the same way that a license plate number identifies a car, the primary key uniquely identifies a record. primary protection A type of protection in which data on the protected server is directly protected by a primary DPM Server. primary server In a log shipping configuration, the server instance where the primary database resides. primary table The "one" side of two related tables in a one-to-many relationship. A primary table should have a primary key and each record should be unique. principal database In database mirroring, a read-write database whose transaction log is continuously sent to the mirror server, which restores the log to the mirror database. principal server In database mirroring, the partner whose database is currently the principal database. priority boost An advanced option that specifies whether Microsoft® SQL Server™ should run at a higher Microsoft Windows NT® scheduling priority than other processes on the same computer. private key The secret half of a cryptographic key pair that is used with a public key algorithm. Private keys are typically used to decrypt a symmetric session key, digitally sign data, or decrypt data that has been encrypted with the corresponding public key.. profile template The core of all Certificate Lifecycle Manager (CLM) management activities. The profile template provides a single administrative unit that includes all information necessary to manage the multiple certificates that might be required by a user community throughout the certificate’s lifecycle. A profile template also includes information regarding the final location of those certificates, which can be software-based (that is, stored on the local computer) or hardware-based (stored on a smart card). A profile template cannot include both software-based and smart card-based certificates. profit chart A diagram that displays the theoretical increase in profit that is associated with using various data models. programmable Capable of accepting instructions for performing a task or an operation. Being programmable is a characteristic of computers. properties page A dialog box that displays information about an object in the interface. property Attribute or characteristic of an object that is used to define its state, appearance, or value. property mapping A mapping between a variable and a property of a package element. property page A grouping of properties presented as a tabbed page of a property sheet. protected computer A computer that contains data sources that are protection group members. protected member A data source within a protection group. protocol A standard set of formats and procedures that enable computers to exchange information. provider An in-process dynamic link library (DLL) that provides access to a database. A software component that allows a replica to synchronize its data with other replicas. provider object An object that's part of a data provider such as Oracle Provider for SQL Server. proximity search Full-text query searching for those occurrences where the specified words are close to one another. proxy account An account that is used to provide additional permissions for certain actions to users which do not have these permissions but have to execute these actions. public key The nonsecret half of a cryptographic key pair that is used with a public key algorithm. Public keys are typically used when encrypting a session key, verifying a digital signature, or encrypting data that can be decrypted with the corresponding private key. publication A collection of one or more articles from one database. publication access list. A Publisher also detects changed data and maintains information about all publications at the site. publisher database publishing server A server running an instance of Analysis Services that stores the source cube for one or more linked cubes. publishing table The table at the Publisher in which data has been marked for replication and is part of a publication. pull The process of retrieving data from a network server. pull replication Replication that is invoked at the target. pull subscription A subscription created and administered at the Subscriber. The Distribution Agent or Merge Agent for the subscription runs at the Subscriber. pure log backup A backup containing only the transaction log covering an interval without any bulk changes. push The process of sending data to a network server. push replication Replication that is invoked at the source. push subscription A subscription created and administered at the Publisher. qualified component A method of identifying a component in an MSI database indirectly by a pair "component category GUID, string qualifier" instead of identifying it directly by the component identifier. qualifier A modifier containing information that describes a class, instance, property, method, or parameter. Qualifiers are defined by the Common Information Model (CIM), by the CIM Object Manager, and by developers. qualifier flavor A flag that provides additional information about a qualifier, such as whether a derived class or instance can override the qualifier's original value. quantum A brief period of time when a given thread executes in a multitasking operating system. It performs the multitasking before it is rescheduled against other threads with the same priority. Previously known as a "time slice." query An instance of a query template that runs continuously in the StreamInsight server processing events received from instances of input adapters to which the query is bound and sending processed events to instances of output adapters to which it is bound. query binder An object that binds an existing StreamInsight query template to specific input and output adapters. query binding The process of binding instances of input adapters and instances of output adapters to an instance of a query template. query designer A tool that helps a user create the query command that specifies the data the user wants in a report dataset. query governor A configuration option that can be used to prevent system resources from being consumed by long-running queries. query hint A hint that specifies that the indicated hints should be used throughout the query. Query hints affect all operators in the statement. query optimizer The SQL Server Database Engine component responsible for generating efficient execution plans for SQL statements. query template The fundamental unit of query composition. A query template defines the business logic required to continuously analyze and process events submitted to and emitted by the StreamInsight server. question template A structure that describes a set of questions that can be asked using a particular relationship or set of relationships. Queue Reader Agent The executable that reads the changes from subscribers in the queue and delivers the changes to the publisher. quorum In a database mirroring session with a witness server, a relationship in which the servers that can currently communicate with each other arbitrate who owns the role of principal server. quoted identifier RAD A method of building computer systems in which the system is programmed and implemented in segments, rather than waiting until the entire project is completed for implementation. Developed by programmer James Martin, RAD uses such tools as CASE and visual programming. ragged hierarchy A hierarchy in which one or more levels do not contain members in one or more branches of the hierarchy. range A set of continuous item identifiers to which the same clock vector applies. A range is represented by a starting point, an ending point, and a clock vector that applies to all IDs that are in between. range partition A table partition that is defined by specific and customizable ranges of data.. ranking function Function that returns ranking information about each row in the window (partition) of a result set depending on the row's position within the window. rapid application development rate of change The rate of price change compared with historical data. The rate of change is calculated against a period of days prior to the current price. The output is a percentage. raw destination adapter An SSIS destination that writes raw data to a file. raw file A native format for fast reading and writing of data. Raw File destination Raw File source An SSIS source that reads raw data from a file. raw source adapter RDA A service that provides a simple way for a smart device application to access (pull) and send (push) data to and from a remote SQL Server database table and a local SQL Server Mobile Edition database table. RDA can also be used to issue SQL commands on a server running SQL Server. RDBMS A database system that organizes data into related rows and columns as specified by a relational model. RDL A set of instructions that describe layout and query information for a report. RDL is composed of XML elements that conform to an XML grammar created for Reporting Services. RDL Sandboxing A feature that makes it possible to detect and restrict specific types of resource use by individual tenants in a scenario where multiple tenants share a single Web farm of report servers. record set A data structure made up of a group of database records. It can originate from a base table or from a query to the table. recover To put back into a stable condition. A computer user may be able to recover lost or damaged data by using a program to search for and salvage whatever information remains in storage. A database may be recovered by restoring its integrity after some problem has damaged it, such as abnormal termination of the database management program. is possible that includes ranges of LSNs that cover two or more recovery fork points. recovery fork point The point (LSN,GUID) at which a new recovery branch is started, every time a RESTORE WITH RECOVERY is performed. Each recovery fork determines a parent-child relationship between the recovery branches. If you recover a database to an earlier point in time and begin using the database from that point, the recovery fork point starts a new recovery path. recovery interval The maximum amount of time that the Database Engine should require to recover a database. LSNs from a start point (LSN,GUID) to an end point (LSN,GUID). The range of LSNs in a recovery path can traverse one or more recovery branches from start to. recursive partitioning The iterative process, used by data mining algorithm providers, of dividing data into groups until no more useful groups can be found. redo The phase during recovery that applies (rolls forward) logged changes to a database to bring the data forward in time. redo phase redo set The set of all files and pages being restored. reference data Data characterized by shared read operations and infrequent changes. Examples of reference data include flight schedules and product catalogs. Windows Server AppFabric offers the local cache feature for storing this type of data. reference dimension A relationship between a dimension and a measure group in which the dimension is coupled to the measure group through another dimension. reference table The source table to use in fuzzy lookups. referenced key A primary key or unique key referenced by a foreign key. referencing key reflexive relationship A relationship from a column or combination of columns in a table to other columns in that same table. region A collection of 128 leaf level pages in logical order in a single file. Used to identify areas of a file that are fragmented. registration model A defined method for submitting and approving enrollment requests. An enterprise generally chooses one registration model and modifies their management policies accordingly. regression The statistical process of predicting one or more continuous variables, such as profit or loss, based on other attributes in the dataset. regressor An input variable that has a linear relationship with the output variable. relational database management system relational engine A component of SQL Server that works with the storage engine. It is responsible for interpreting Transact-SQL search queries and mapping out the most efficient methods of searching the raw physical data provided by the storage engine and returning the results to the user. relational OLAP A storage mode that uses tables in a relational database to store multidimensional structures. relational store A data repository structured in accordance with the relational model. relationship object An object representing a pair of objects that assume a role in relation to each other. output The output from a rendering extension. rendered report A fully processed report that contains both data and layout information, in a format suitable for viewing (such as HTML). rendering extension A plug-in that renders reports to a specific format (for example, an Excel rendering extension) rendering object model Report object model used by rendering extensions. replica A complete copy of protected data residing on a single volume on the DPM server. A replica is created for each protected data source after it is added to its protection group. With co-location, multiple data sources can have their replicas residing on the same replica volume. A particular repository of information to be synchronized. replica ID A value that uniquely identifies a replica. replica key A 4-byte value that maps to a replica ID in a replica key map. replica tick count A monotonically increasing number that is used to uniquely identify a change to an item in a replica. replication The process of copying content and/or configuration settings from one location, generally a server node, to another. Replication is done to ensure synchronization or fault tolerance. Replication Management Objects A managed code assembly that encapsulates replication functionalities for SQL Server. Report Builder report data pane A data pane that displays a hierarchical view of the items that represent data in the user's report. The top level nodes represent built-in fields, parameters, images, and data source references. report definition The blueprint for a report before the report is processed or rendered. A report definition contains information about the query and layout for the report. Report Definition Language Report Designer A collection of design surfaces and graphical tools that are hosted within the Microsoft Visual Studio environment. report execution snapshot A report snapshot that is cached. Report administrators create report execution snapshots if they want to run reports from static copies. report history Collection of previously run copies of a report. report history snapshot Report history that contains data captured at a specific point in time. report intermediate format Internal representation of a report. report item Entity on a report. report layout template A pre-designed table, matrix, or chart report template in Report Builder. report link URL to a report. Report Manager A Web-based report management tool report model A metadata description of business data used for creating ad hoc reports. report part A report item that has been published separately to a report server and that can be reused in other reports. Report Processor component A component that retrieves the report definition from the report server database and combines it with data from the data source for the report. Report Project A template in the report authoring environment. Report Project Wizard A wizard in the report authoring environment used to create reports.. A report server administrator is a user who is assigned to the Content Manager role, the System Administrator role, or both. All local administrators are automatically report server administrators, but additional users can become report server administrators for all or part of the report server namespace. report server database A database that provides internal storage for a report server. report server execution account The account under which the Report Server Web service and Report Server Windows service run. Report Server Web service A Web service that hosts, processes, and delivers reports. report snapshot A static report that contains data captured at a specific point in time. report-specific schedule Schedule defined inline with a report. Report-specific schedules are defined in the context of an individual report, subscription, or report execution operation to determine cache expiration or snapshot updates.lisher A Subscriber that publishes data that it has received from a Publisher. reserved character A keyboard character that has a special meaning to a program and, as a result, normally cannot be used in assigning names to files, documents, and other user-generated tools, such as macros. Characters commonly reserved for special uses include the asterisk (*), forward slash (/), backslash (\), question mark (?), and vertical bar (|). resolution strategy A set of criteria that the repository engine evaluates sequentially when selecting an object, where multiple versions exist and version information is unspecified in the calling program. resource A special variable that holds a reference to a database connection or statement. Any item in a report server database that is not a report, folder, or shared data source item. resource data A type of data that is characterized by shared, concurrently read and written into operations, and accessed by many transactions. Examples of resource data include user accounts and auction items. resource governor A feature in SQL Server 2008 that enables the user to manage SQL Server workload and resources by specifying limits on resource consumption by incoming requests.. restore sequence A sequence of one or more restore commands that, typically, initializes the contents of the database, files, and/or pages being restored (the data-copy phase), rolls forward logged transactions (the redo phase), and rolls back uncommitted transactions (the undo phase). result set The set of records that results from running a query or applying a filter. results retract event An internal event kind used to modify an existing insert event by modifying the end time of the event. reusable bookmark A bookmark that can be consumed from a rowset for a given table and used on a different rowset of the same table to position on a corresponding row. revocation delay The period of time between when the credential revocation request is placed and when the credentials are actually revoked. RIA A web application that provides a user interface which is more similar to a desktop application than typical web pages. It is able to process user actions without posting the whole web page to a web server. RIA Services link A project-to-project link reference that facilitates generating presentation tier code from middle tier code. rich Internet application. ring index An index that indicates the number of rings in a polygon instance. RMO role assignment The assignment of a specific role that determines whether a user or group can access a specific item and perform an operation on it. role definition The collection of task permissions associated with a role. restored by a restore sequence. A roll forward set is defined by restoring a series of one or more data backups. roll up To collect subsets of data from multiple locations in one location. rollover file A file created when the file rollover option causes SQL Server to close the current file and create a new file when the maximum file size is reached. route A Service Broker object that specifies the network address for a remote service. routing client A type of cache client that includes a routing table that is maintained by lead hosts in the cluster and enables the client to obtain cached data directly from the cache host on which the data resides. routing table A data structure used by the routing client to track the connectivity information of all cache hosts in the cache cluster. It is maintained by lead hosts in the cluster. It allows a routing client to obtain cached data directly from the cache host on which the data resides. row aggregate function A function that generates summary values, which appear as additional rows in the query results. row lock A lock on a single row in a table. row versioning In 0nline index operations, a feature that isolates the index operation from the effects of modifications that are made by other transactions. row-overflow data varchar, nvarchar, varbinary, or sql_variant data stored off the main data page of a table or index as a result of the combined widths of these columns exceeding the 8,060-byte row limit in a table. rowset A set of rows in which each row has one or more columns of data. rs utility Report scripting tool. rs.exe rsconfig utility Server connection management tool. rsconfig.exe rule firing The process of running one of the application rules (event chronicle rules, subscription event rules, and subscription scheduled rules) defined in the application definition file. runaway query A query with an excessive running time, that can lead to a blocking problem. Runaway queries usually do not use use a query or lock time out. run-time error A software error that occurs while a program is being executed, as detected by a compiler or other supervisory program. safe code sampling A statistical process that yields some inferential knowledge about a population or data set of interest as a whole by observing or analyzing a portion of the population or data set. save process The process of writing data to disk. savepoint A location to which a transaction can return if part of the transaction is conditionally canceled or encounters an error, hence offering a mechanism to roll back portions of transactions. SBCS A character encoding in which each character is represented by 1 byte. Single byte character sets are mathematically limited to 256 characters. scalar A factor, coefficient, or variable consisting of a single value (as opposed to a record, an array, or some other complex data structure). scalar-valued function A function that returns a single value, such as a string, integer, or bit value. scale break line A line drawn across a chart area to indicate a significant gap between a high and low range of values on the chart. scale-out deployment A deployment model in which an installation configuration has multiple report server instances sharing a single report server database. Scheduling and Delivery Processor A component of the report server engine that handles scheduling and delivery. Works with SQL Agent. schema rowset A specially defined rowset that returns metadata about objects or functionality on an instance of SQL Server or Analysis Services. For example, the OLE DB schema rowset DBSCHEMA_COLUMNS describes columns in a table, while the Analysis Services schema rowset MDSCHEMA_MEASURES describes the measures in a cube. schema snapshot A snapshot that includes schema for published tables and objects required by replication (triggers, metadata tables, and so on), but not user data. schema-aware Pertaining to a processing method based on a schema that defines elements, attributes and types that will be used to validate the input and output documents. scope The extent to which an identifier, such as an object or property, can be referenced within a program. Scope can be global to the application or local to the active document. The set of data that is being synchronized. script memory The local memory (the client-side RAM) that is used by a PHP script. scripting Pertaining to the automation of user actions or the configuration of a standard state on a computer by means of scripts. SDK A set of routines (usually in one or more libraries) designed to allow developers to more easily write programs for a given computer, operating system, or user interface. search condition In a WHERE or HAVING clause, predicates that specify the conditions that the source rows must meet to be included in the SQL statement. search key The value that is to be searched for in a document or any collection of data. secondary database In log shipping, a read-only database that was created by restoring a full backup of the primary database (without recovery) on a separate server instance (the secondary server). Log backup from the primary database is restored at regular intervals onto the secondary database. secondary DPM server A DPM server that protects one or more primary DPM servers in addition to file and application data. secondary protection A type of protection in which data on the protected server is protected by a primary DPM server and the replica on the primary DPM server is protected by a secondary DPM server. secondary server In a log shipping configuration, the server instance where the secondary database resides. At regular intervals, the secondary server copies the latest log backup from the primary database and restores the log to the secondary database. The secondary server is a warm standby server. secret provider securable Entities that can be secured with permissions. The most prominent securables are servers and databases, but discrete permissions can be set at a much finer level. Secure Sockets Layer A protocol that improves the security of data communication by using a combination of data encryption, digital certificates, and public key cryptography. SSL enables authentication and increases data integrity and privacy over networks. SSL does not provide authorization or nonrepudiation. security extension A component in Reporting Services that authenticates a user or group to a report server. security ID In Windows-based systems, a unique value that identifies a user, group, or computer account within an enterprise. Every account is issued a SID when it is created. security identifier segmentation A data mining technique that analyzes data to discover mutually exclusive collections of records that share similar attributes sets.. self-service registration model A Certificate Lifecycle Manager (CLM) registration model in which a certificate subscriber performs or requests certificate management activities directly using a Web-based interface. self-tracking entity An entity built from a Text Template Transformation Toolkit (T4) template that has the ability to record changes to scalar, complex, and navigation properties. Semantic Model Definition Language A set of instructions that describe layout and query information for reports created in Report Builder. semantic object An object that can be represented by a database object or other real-world object. semantic validation The process of confirming that the elements of an XML file are logically valid. semantics In programming, the relationship between words or symbols and their intended meanings. semiadditive measure A measure that can be summed along one or more, but not all, dimensions in a cube. sensitive cursor A cursor that can reflect data modifications made to underlying data by other users while the cursor is open. sensitive data Personally identifiable information (PII) that is protected in special ways by law or policy. sequenced collection A collection of destination objects of a sequenced relationship object. sequenced relationship A relationship in a repository that specifies explicit positions for each destination object within the collection of destination objects. serial number A number assigned to a specific inventory item to identify it and differentiate it from similar items with the same item number. A computer that provides shared resources, such as files or printers, to network users. server collation The collation for an instance of SQL Server. server cursor A cursor implemented on the server. server name A name that uniquely identifies a server computer on a network. server subscription service A program, routine, or process that performs a specific system function to support other programs. Service Broker service connection point An Active Directory node on which system administrators can define Certificate Lifecycle Manager (CLM) management permissions for users and groups. service principal An entity that represents a service at the key distribution center (KDC). A service principal generally does not correspond to a human user of the system, but rather to an automated service providing a resource, such as a file server. service principal name The name by which a client uniquely identifies an instance of a service. It is usually built from the DNS name of the host. The SPN is used in the process of mutual authentication between the client and the server hosting a particular service. service program A program that uses Service Broker functionality. A service program may be a Transact-SQL stored procedure, a SQLCLR stored procedure, or an external program. session A period of time when a connection is active and communication can take place. For the purpose of data communication between functional units, session also refers to all the activities that take place during the establishment, maintenance, and release of the connection. session state In ASP.NET, a variable store created on the server for the current user; each user maintains a separate Session state on the server. Session state is typically used to store user-specific information between postbacks. set A grouping of dimension members or items from a data source that are named and treated as a single unit and can be referenced or reused multiple times. setup initialization file A text file, using the Windows .ini file format, that stores configuration information allowing SQL Server to be installed without a user having to be present to respond to prompts from the Setup program. setup repair An error reporting process that may run during the setup of a program if a problem occurs. shadow copy A static image of a set of data, such as the records displayed as the result of a query. shapefile A public domain format for the interchange of spatial data in geographic information systems. Shapefiles have the file name extension ".shp". sharding A technique for partitioning large data sets, which improves performance and scalability, and enables distributed querying of data across multiple tenants. shared code Code that is specifically designated to exist without modification in the server project and the client project. shared data source item Data source connection information that is encapsulated in an item. Can be managed as an item in the report server folder namespace. shared dimension A dimension created within a database that can be used by any cube in the database. shared lock A lock created by nonupdate (read) operations. shared schedule Schedule information that can be referenced by multiple items. shopping basket analysis showplan A report showing the execution plan for an SQL statement. SID significance One of the arguments of the FLOOR function. Silverlight business application A template that provides many common features for building a business application with a Silverlight client. It utilizes WCF RIA Services for authentication and registration services. simple client A type of cache client that does not have a routing table and thus does not need network connectivity to all cache hosts in the cache cluster. Because data traveling to simple clients from the cluster may need to travel across multiple cache hosts, simple clients may not perform as fast as a routing clients. Simple Mail Transfer Protocol A member of the TCP/IP suite of protocols that governs the exchange of electronic mail between message transfer agents. simple recovery model A database recovery mode that minimally logs all transactions sufficiently to ensure database consistency after a system crash or after restoring a data backup. The database is recoverable only up to the time of its most recent data backup, and restoring individual pages is unsupported. single-byte character set single-precision Of or pertaining to a floating-point number having the least precision among two or more options commonly offered by a programming language, such as single-precision versus double-precision. single-user mode A state in which only one user can access a resource. sink A device or part of a device that receives something from another device. SKU A unique identifier, usually alphanumeric, for a product. The SKU allows a product to be tracked for inventory purposes. An SKU can be associated with any item that can be purchased. For example, a shirt in style number 3726, size 8 might have a SKU of 3726-8. sleep To suspend operation without terminating. slice A subset of the data in a cube, specified by limiting one or more dimensions by members of the dimension. Slicers A feature that provides one-click filtering controls that make it easy to narrow down the portion of a data set that's being looked at. sliding window A window of fixed length L that moves along a timeline according to the stream’s events. With every event on the timeline, a new window is created, starting at the event’s start time. slipstream To integrate updates, patches or service packs into the base installation files of the original software, so that the resulting files will allow a single step installation of the updated software. slipstream installation A type of installation that integrates the base installation files for an operating system or program with its service packs, updates or patches, and enables them to be installed in a single step. smart card A plastic (credit card–sized or smaller) device with an embedded microprocessor and a small amount of storage that is used, with an access code, to enable certificate-based authentication. Smart cards securely store certificates, public and private keys, passwords, and other types of personal information. Smart Card Personalization Control An ActiveX control that performs all Certificate Lifecycle Manager (CLM) smart card application management activities on a client computer. smart card profile A Certificate Lifecycle Manager (CLM) profile created when a request is performed using a profile template that only includes smart card-based certificate templates. smart card reader A device that is installed in computers to enable the use of smart cards for enhanced security features. Smart Card Self Service Control Software installed on a client computer that enables end users and administrators to manage smart cards by providing a connection from the client computer to the smart card. smart card unblocking The action of binding a smart card with administrative credentials to reset the the personal identification number (PIN) attempt counter. SMTP snap-in A type of tool that you can add to a console supported by Microsoft Management Console (MMC). A stand-alone snap-in can be added by itself; an extension snap-in can be added only to extend the function of another snap-in. snapshot snapshot isolation level A transaction isolation level in which each read operation performed by a transaction returns all data as it existed at the start of the transaction. Because a snapshot transaction does not use locks to protect read operations, it will not block other transactions from modifying any data read by the snapshot transaction. snapshot replication A replication in which data is distributed exactly as it appears at a specific moment in time and does not monitor for updates to the data. Snapshot Share A share available for the storage of snapshot files. Snapshot files contain the schema and data for published tables. snapshot window A window that is defined according to the start and end times of the event in the stream, instead of a fixed grid along the timeline. snowflake schema An extension of a star schema such that one or more dimensions are defined by multiple tables. In a snowflake schema, only primary dimension tables are joined to the fact table. Additional dimension tables are joined to primary dimension tables. soft page A rendered page that can be slightly larger than the size specified using the InteractiveHeight and InteractiveWidth properties of a report (HTML and WinForm control). soft page-break renderer A rendering extension that maintains the report layout and formatting so that the resulting file is optimized for screen-based viewing and delivery, such as on a Web page or in the ReportViewer controls. software development kit software profile A Certificate Lifecycle Manager (CLM) profile created when a request is performed using a profile template that only includes software-based certificate templates. software transformer A software module or routine that modifies the events (data) into a format expected by the output device, and emits the data to that device. solution explorer A component of Microsoft SQL Server Management Studio that allows you to view and manage items and perform item management tasks in a solution or a project. solve order The order of evaluation (from highest to lowest solve order) and calculation (from lowest to highest solve order) for calculated members, custom members, custom rollup formulas, and calculated cells in a single calculation pass of a multidimensional cube. sort order A way to arrange data based on value or data type. You can sort data alphabetically, numerically, or by date. Sort orders use an ascending (1 to 9, A to Z) or descending (9 to 1, Z to A) order. source A disk, file, document, or other collection of information from which data is taken or moved. The SSIS data flow component that makes data from different external data sources available to the other components in the data flow. A synchronization provider that enumerates any changes and sends them to the destination provider. source adapter A data flow component that extracts data from a data store. source code control A set of features that include a mechanism for checking source code in and out of a central repository. It also implies a version control system that can manage files through the development lifecycle, keeping track of which changes were made, who made them, when they were made, and why. source control source cube The cube on which a linked cube is based. source database A database on the Publisher from which data and database objects are marked for replication as part of a publication that is propagated to Subscribers. For a database view, the database on which the view is created. source object The single object to which all objects in a particular collection are connected by way of relationships that are all of the same relationship type. source partition An Analysis Services partition that is merged into another and is deleted automatically at the end of the merger process. source provider sparkline A miniature chart that can be inserted into text or embedded within a cell on a worksheet to illustrate highs, lows, and trends in your data. sparse column A column that reduces the storage requirement for null values at the cost of more overhead to retrieve nonnull values. sparse file A file that is handled in a way that requires much less disk space than would otherwise be needed.). sparsity The relative percentage of a multidimensional structure's cells that do not contain data. spatial data Data that is represented by 2D or 3D images. Spatial data can be further subdivided into geometric data (data that can use calculations involving Euclidian geometry) and geographic data (data that identifies geographic locations and boundaries on the earth). SPN SQL A database query and programming language widely used for accessing, querying, updating, and managing data in relational database systems. SQL database A database based on Structured Query Language (SQL). SQL expression Any combination of operators, constants, literal values, functions, and names of tables and fields that evaluates to a single value. SQL Native Client A stand-alone data access API that is used for both OLE DB and ODBC. SQL Server SQL Server 2005 Express Edition An edition of a Microsoft relational database design and management system for e-commerce, line-of-business, and data warehousing solutions. SQL Server 2005 Mobile Edition SQL Server product name (edition) SQL Server Analysis Services SQL Server component A SQL Server program module developed to perform a specific set of tasks - e.g., data transformation, data analysis, reporting. SQL Server Connection Director SQL Server data-tier application project SQL Server End-User Recovery SQL Server EUR SQL Server Execute Package Utility A graphical user interface that is used to run a Integration Services package. SQL Server Express SQL Server instance SQL Server instance auto-protection A type of protection that enables DPM to automatically identify and protect databases that are added to instances of SQL Server that are configured for auto-protection. SQL Server login An account stored in SQL Server that allows users to connect to SQL Server. SQL Server PowerPivot for Excel SQL Server Profiler A graphical user interface for monitoring an instance of the SQL Server database engine or an instance of Analysis Services. SQL Server Reporting Services A server-based report generation environment for enterprise, Web-enabled reporting functionality so you can create reports that draw content from a variety of data sources, publish reports in various formats, and centrally manage security and subscriptions. SQL Server Service Broker SQL Server Trace A set Transact-SQL system stored procedures to create traces on an instance of the SQL Server Database Engine. SQL Server Utility A way to organize and monitor SQL Server resource health. It enables administrators to have a holistic view of their environment. SQL Server Utility dashboard A dashboard that provides an at-a-glance summary of resource health for managed SQL Server instances and data-tier applications. Can also be referred to as the SQL Server Utility detail view or the list view with details. SQL Server Utility Explorer A hierarchical tree displaying the objects in the SQL Server Utility. SQL Server Utility viewpoints A feature of SQL Server Utility that provides administrators a holistic view of resource health through an instance of SQL Server that serves as a utility control point (UCP). SQL statement An SQL or Transact-SQL command, such as SELECT or DELETE, that performs some action on data. SQL Trace SQL writer A VSS compliant writer provided by the SQL Server that handles the VSS interaction with SQL Server. SQL-92 The version of the SQL standard published in 1992. SSAS SSL SSRCT SSRS SSRT staged data Data imported into staging tables during the staging process in SQL Server Master Data Services. staging The process used in SQL Server Master Data Services to import data into staging tables and then process the staged data as a batch prior to importing it into the master database. staging process staging queue The batch table in SQL Server Master Data Services where staged records are queued as batches to be processed into the Master Data Services database. staging table A table in SQL Server Master Data Services that is populated with business data during the staging process. standalone server A computer that runs Windows Server but does not participate in a domain. A standalone server has only its own database of end users, and it processes logon requests by itself. It does not share account information with any other computer and cannot provide access to domain accounts. standby file In a restore operation, a file used during the undo phase to hold a "copy-on-write" pre-image of pages that are to be modified. The standby file allows reverting the undo pass to bring back the uncommitted transactions. standby server A server instance containing a copy of a database that can be brought online if the source copy of the database becomes unavailable. Log shipping can be used to maintain a "warm" standby server, called a secondary server, whose copy of the database is automatically updated from log backups at regular intervals. Before failover to a warm standby server, its copy of the database must be brought fully up to date manually. Database mirroring can be used to maintain a "hot" standby server, called a mirror server, whose copy of the database is continuously brought up to date. Failover to the database on a mirror server is essentially instantaneous. standing query An instantiation of a query template that runs within the StreamInsight server performing continuous computation over the incoming events. star join A join between a fact table (typically a large fact table) and at least two dimension tables. star query A query that joins a fact table and a number of dimension tables. star schema A relational database structure in which data is maintained in a single fact table at the center of the schema with additional dimension data stored in dimension tables. Each dimension table is directly related to and usually joined to the fact table by a key column. start angle. start cap The start of a line. statement A compiled T-SQL query.. Stemmers are language specific. step into To execute the current statement and enter Break mode, stepping into the next procedure whenever a call for another procedure is reached. stewardship portal A feature of MDS that provides centralized control over master data, including members and hierarchies and enables data model administrators to ensure data quality by developing, reviewing, and managing data models and enforcing them consistently across domains. stock keeping unit stolen page A page in Buffer Cache taken for other server requests stoplist A specific collection of so-called stopwords, which tend to appear frequently in documents, but are believed to carry no usable information. stopword A word that tends to appear frequently in documents and carries no usable information. storage engine A component of SQL Server that is responsible for managing the raw physical data in your database. For example, reading and writing the data to disk is a task handled by the storage engine. storage location The position at which a particular item can be found: either an addressed location or a uniquely identified location on a disk, tape, or similar medium. stored procedure A precompiled collection of SQL statements and optional control-of-flow statements stored under a name and processed as a unit. They are stored in an SQL database and can be run with one call from an application. stored procedure resolver A program that is invoked to handle row change-based conflicts that are encountered in an article to which the resolver was registered.. stream consumer The structure or device that consumes the output of a query. Examples are an output adapter or another running query. StreamInsight Event Flow Debugger A stand-alone tool in the Microsoft StreamInsight platform that provides event-flow debugging and analysis. StreamInsight platform The platform, consisting of the StreamInsight server, Event Flow Debugging tool, Visual Studio IDE, and other components, for the development of complex event processing applications. StreamInsight server string A group of characters or character bytes handled as a single entity. Computer programs use strings to store and transmit data and commands. Most programming languages consider strings (such as 2674:gstmn) as distinct from numeric values (such as 470924). strip line Horizontal or vertical ranges that set the background pattern of the chart in regular or custom intervals. You can use strip lines to improve readability for looking up individual values on the chart, highlight dates that occur at regular intervals, or highlight a specific key range. stripe striped media set A media set that uses multiple devices, among which each backup is distributed. strong consistency A scenario where high availability is enabled and there is more than one copy of a cached object in the cache cluster. All copies of that object remain identical. Structured Query Language subquery subreport A report contained within another report. subscribe To request data from a Publisher. subscriber In Notification Services, the person or process to which notifications are delivered. Subscriber In replication, a database instance that receives replicated data. subscriber database scheduled rule One or more Transact-SQL statements that process information for scheduled subscriptions. subset A selection of tables and the relationship lines between them that is part of a larger database diagram. subtract To perform the basic mathematical operation of deducting something from something else. Support Count A dynamic option that displays the number of rows in which the determinant column value determines the dependent column. Support Percentage A dynamic option that displays the percentage of rows in which the determinant column determines the dependent column. surface area The number of ways in which a piece of software can be attacked. suspect tape A tape that has conflicting identification information, such as the barcode or the on-media identifier. SVF SVG An XML-based language for device-independent description of two-dimensional graphics. SVG images maintain their appearance when printed or when viewed with different screen sizes and resolutions. SVG is a recommendation of the World Wide Web Consortium (W3C). sweep angle The number of degrees, between 0 and 360 that the scale will sweep in a circle. A sweep angle of 360 degrees produces a scale that is a complete circle. switch in table The staging table the user wants to use to switch in their data. The staging table needs to be created before switching partitions with the Manage PartitionsWizard. switch out table The staging table the user wants to use for the partition to switch out of the current source table. symmetric key Sync Manager A tool used to ensure that a file or directory on a client computer contains the same data as a matching file or directory on a server. sync provider synchronization application A software component, such as a personal information manager or music database, that hosts a synchronization session and invokes synchronization providers to synchronize disparate data stores. synchronization community A set of replicas that keep their data synchronized with one another. synchronization manager Synchronization Manager synchronization orchestrator An orchestrator that initiates and controls synchronization sessions. synchronization provider synchronization session A unidirectional synchronization in which the source provider enumerates its changes and sends them to the destination provider, which applies them to its store. syndication format A format used for publishing data on blogs and web sites. syntactic validation The process of confirming that an XML file conforms to its schema. System Configuration Checker A system preparation tool that helps to avoid setup failures by validating the target machine before a software application is installed. system databases A set of five databases present in all instances of SQL Server that are used to store system information. system functions A set of built-in functions that perform operations on and return the information about values, objects, and settings in SQL Server. system locale system role assignment Role assignment that applies to the site as a whole. system role definition Role definition that conveys site-wide authority. system stored procedure A type of stored procedure that supports all of the administrative tasks required to run a SQL Server system. system stored procedures A set of SQL Server-supplied stored procedures that can be used for actions such as retrieving information from the system catalog or performing administration tasks. system table A table that stores the data defining the configuration of a server and all its tables. system tables Built-in tables that form the system catalog for SQL Server. system variable A variable provided by DTS. tab page A part of a tab control that consists of the tab UI element and the display area, which acts as a container for data or other controls, such as text boxes, combo boxes, and command buttons. table A database object that stores data in records (rows) and fields (columns). The data is usually about a particular category of things, such as employees or orders. table data region A report item on a report layout that displays data in a columnar format. table lock A lock on a table including all data and indexes. table reference A name, expression or string that resolves to a table. tablespace A unit of database storage that is roughly equivalent to a file group in SQL Server. Tablespaces allow storage and management of database objects within individual groups. table-valued function A user-defined function that returns a table. Tablix A data region that can render data in table, matrix, and list format. It is intended to convey the unique functionality of the data region object and the users' ability to combine data formats. Tablix data region tabular data stream The SQL Server internal client/server data transfer protocol. TDS allows client and server products to communicate regardless of operating-system platform, server release, or network transport. tabular query A standard operation such as search, sort, filter or transform on data in a table. tail-log backup A log backup taken from a possibly damaged database to capture the log that has not yet been backed up. A tail-log backup is taken after a failure in order to prevent work loss. tape backup A SQL Server backup operation that writes to any tape device supported by the operating system. target The database on which an operation acts. target partition An Analysis Services partition into which another is merged, and which contains the data of both partitions after the merger. target queue In Service Broker, the queue associated with the service to which messages are sent. target server A server that receives jobs from a master server. target type The type of target, which has certain characteristics and behavior. task object A Data Transformation Services (DTS) object that defines pieces of work to be performed as part of the data transformation process. For example, a task can execute an SQL statement or move and transform heterogeneous data from an OLE DB source to an OLE DB destination using the DTS Data Pump. TDS temporary smart card A non-permanent smart card issued to a user for replacement of a lost smart card or to a user that requires access for a limited time. temporary stored procedure A procedure placed in the temporary database, tempdb, and erased at the end of the session. temporary table A table placed in the temporary database, tempdb, and erased at the end of the session. tenant A client organization that is served from a single instance of an application by a Web service. A company can install one instance of software on a set of servers and offer Software as a Service to multiple tenants. theater view A view where the preview is centered in a PowerPivot Gallery SharePoint document library and lets you rotate through the available worksheets. Smaller thumbnails of each worksheet appear lower on the page, on either side. theta join A join based on a comparison of scalar values. thousand separator A symbol that separates thousands from hundreds within a number that has four or more places to the left of the decimal separator.. throttle A Microsoft SQL Server tool designed to limit the performance of an instance of the database engine any time more than eight operations are active at the same time. tick A regular, rapidly recurring signal emitted by a clocking circuit. tick count tile server A map image caching engine that caches and serves pregenerated, fixed-size map image tiles. time A SQL Server system data type that stores a time value from 0:00 through 23:59:59.999999. time interval A period of time in which a given event is valid. The valid time interval includes the valid start time, and all moments of time up to, but not including the valid end time. tokenization In text mining or Full-Text Search, the process of identifying meaningful units within strings, either at word boundaries, morphemes, or stems, so that related tokens can be grouped. For example, although "San Francisco" is two words, it could be treated as a single token. tombstone A marker that is used to represent and track a deleted item and prevent its accidental reintroduction into the synchronization community. tool A utility or feature that aids in accomplishing a task or set of tasks. topology The set of participants involved in synchronization and the way in which they are connected to each other. trace A collection of events and data returned by the Database Engine. trace file A file containing records of activities of a specified object, such as an application, operating system, or network. A trace file can include calls made to APIs, the activities of APIs, the activities of communication links and internal flows, and other information. tracer token A performance monitoring tool available for transactional replication. A token (a small amount of data) is sent through the replication system to measure the amount of time it takes for transactions to reach the Distributor and Subscribers. trail byte The byte value that is the second half of a double-byte character. train To populate a model with data to derive patterns that can be used in prediction or knowledge discovery. training data set A set of known and predictable data used to train a data mining model. trait An attribute that describes an entity. trait phrasing A way of expressing a relationship in which a minor entity describes a major entity. transaction isolation level transaction log A file that records transactional changes occurring in a database, providing a basis for updating a master file and establishing an audit trail. transaction log backup transaction retention period transaction rollback Rollback of a user-specified transaction to the last savepoint inside a transaction or to the beginning of a transaction. transactional data Data related to sales, deliveries, invoices, trouble tickets, claims, and other monetary and non-monetary interactions.). transformation The SSIS data flow component that modifies, summarizes, and cleans data. transformation input Data that is contained in a column, which is used duing a join or lookup process, to modify or aggregate data in the table to which it is joined. transformation output Data that is returned as a result of a transformation procedure. trend A general tendency or inclination, typically determined by the examination of a particular attribute over time. trusted connection A Windows network connection that can be opened only by users who have been authenticated by the network. tumbling window A hopping window whose hop size is equal to the window size. tuple An ordered collection of members that uniquely identifies a cell, based on a combination of attribute members from every attribute hierarchy in the cube. two-phase commit type checking The process performed by a compiler or interpreter to make sure that when a variable is used, it is treated as having the same data type as it was declared to have. typed adapter An adapter that emits only a single event type. typed event An event for which the structure of the event payload provided by the source or consumed by the sink is known, and the input or output adapter is designed around this specific event structure. UCP A network node that provides the central reasoning point for the SQL Server Utility. It uses Utility Explorer in SQL Server Management Studio (SSMS) to organize and monitor SQL Server resource health. UDT A user-written extension to the scalar type system in SQL Server. unbalanced hierarchy unbound stream An event stream that contains the definition of the event model or payload type, but does not define the data source. uncommittable Pertaining to a transaction that remains open and cannot be completed. Uncommitable transactions could be considered a subclass of partially failed transactions, where the transaction has encountered an error that prevents its completion, but it is still holding its locks and has to be rolled back by the user. uncompress To restore the contents of a compressed file to its original form. undeliverable Not able to be delivered to an intended recipient. If an e-mail message is undeliverable, it is returned to the sender with information added by the mail server explaining the problem; for example, the e-mail address may be incorrect, or the recipient's mailbox may be full. underlying table A table referenced by a view, cursor, or stored procedure. undo The phase during database recovery that reverses (rolls back) changes made by any transactions that were uncommitted when the redo phase of recovery completed. undo file A file that saves the content of the pages in a database after they've been modified by uncommitted, rolled back transactions and before recovery restores them to their previous state. The undo file prevents the changes performed by uncommitted transactions from being lost. undo phase unenforced relationship A link between tables that references the primary key in one table to a foreign key in another table, and which does not check the referential integrity during INSERT and UPDATE transactions. uninitialize uninitialize. To change the state of an enumerator or data source object so that it cannot be used to access data. For example, uninitializing a data source object might require the provider to close a data file or disconnect from a database. unique index An index in which no two rows are permitted to have the same index value, thus prohibiting duplicate index or key values. uniqueifier A 4-byte column that the SQL Server Database Engine automatically adds to a row to make each index key unique. Universal Time Coordinate The standard time common to every place in the world, coordinated by the International Bureau of Weights and Measures. Coordinated Universal Time is used for the synchronization of computers on the Internet. unknown member A member of a dimension for which no key is found during processing of a cube that contains the dimension. unknown tape Tape that has not been identified by the DPM server. unmanaged code Code that is executed directly by the operating system, outside the .NET Framework common language runtime. Unmanaged code must provide its own memory management, type checking, and security support, unlike managed code, which receives these services from the common language runtime. unmanaged instance An instance of SQL Server not monitored by a utility control point. unpivot To expand values from multiple columns in a single record into multiple records with the same values in a single column. unsafe code untyped adapter An adapter that accepts or emits multiple event types in which the payload structure or the type of fields in the payload are not known in advance. Examples are events from a CSV or text file, a SQL table, or a socket. update lock A lock placed on resources (such as row, page, table) that can be updated. update statistics A process that recalculates information about the distribution of key values in specified indexes. updategram A template that makes it possible to modify a database in Microsoft SQL Server from an existing XML document. user account In Active Directory, an object that consists of all the information that defines a domain user, which includes user name, password, and groups in which the user account has membership. User accounts can be stored in either Active Directory or on your local computer. user database A database created by a SQL Server user and used to store application data. user instance An instance of SQL Server Express that is generated by the parent instance on behalf of a user. user-defined aggregate function An aggregate function created against a SQL Server assembly whose implementation is defined in an assembly created in the .NET Frameworks common language runtime. user-defined type utility control point Utility Reader A privilege that allows the user account to connect to the SQL Server Utility, see all viewpoints in the Utility Explorer in SSMS and see settings on the Utility Administration node in Utility Explorer in SSMS. vacuumer A tool for data removal. validity period The amount of time a defined credential is deemed to be trusted. value expression An expression in Multidimensional Expressions (MDX) that returns a value. Value expressions can operate on sets, tuples, members, levels, numbers, or strings.. Metadata that identifies a change made to an item in a replica. It consists of the replica key and the replica tick count for the item. vertical filtering Filtering columns from a table. When used as part of replication, the table article created contains only selected columns from the publishing table. vertical partitioning The process of splitting a single table into multiple tables based on selected columns. Each of the multiple tables has the same number of rows but fewer columns. very large database A database that has become large enough to be a management challenge, requiring extra attention to people, processes, and processes. victim The longest running transaction that has not generated row versions when tempdb runs out of space and the Database Engine forces the version stores to shrink. A message 3967 is generated in the error log for each victim transaction. If a transaction is marked as a victim, it can no longer read the row versions in the version store.. visualizer A way to visually represent data in debug mode. VLDB VSS writer A component within an application that interfaces with the VSS platform infrastructure during backups to ensure that application data is ready for shadow copy creation. wall-time The total time taken by a computer to complete a task which is the sum of CPU time, I/O time, and the communication channel delay. warm standby A method of redundancy in which the secondary (i.e., backup) system runs in the background of the primary system. Data is mirrored to the secondary server at regular intervals, which means that there are times when both servers do not contain the exact same data. warm standby server A standby server that contains a copy of a database that is asynchronously updated, and that can be brought online fairly quickly. watermark A threshold used to manage the memory consumption on each cache host. The high watermark and low watermark specify when objects are evicted out of memory. Web application A software program that uses Hypertext Transfer Protocol (HTTP) for its core communication protocol and that delivers Web-based information to the user in the Hypertext Markup Language (HTML) language. Web pool A grouping of one or more URLs served by a worker process. Web Pool Agent An isolated process under which the Certificate Lifecyle Manager (CLM) web portal runs. Web project A collection of files that specifies elements of a Web application. Web site A group of related Web pages that is hosted by an HTTP server on the World Wide Web or an intranet. The pages in a Web site typically cover one or more topics and are interconnected through hyperlinks. Web synchronization In merge replication, a feature that lets you replicate data by using the HTTPS protocol. website weighted close formula A formula that calculates the average of the high, low, and close prices, while giving extra weight to the close price. wide character A 2-byte multilingual character code. window A subset of events within a stream that fall within some period of time; that is, a window contains event data along a timeline. Windows Management Instrumentation The Microsoft extension to the Distributed Management Task Force (DMTF) Web-based Enterprise Management (WBEM) initiative. Windows NT Integrated Security A security mode that leverages the Windows NT authentication process. witness server In database mirroring, the server instance that monitors the status of the principal and mirror servers and that, by default, can initiate automatic failover if the principal server fails. A database mirroring session can have only one witness server (or "witness"), which is optional. WMI WMI Query Language A subset of ANSI SQL with semantic changes adapted to Windows Management Instrumentation (WMI). workbook In a spreadsheet program, a file containing a number of related worksheets. workload governor workload group In Resource Governor, a container for session requests that are similar according to the classification rules that are applied to each request. A workload group allows the aggregate monitoring of resource consumption and a uniform policy that is applied to all the requests in a group. workstation A microcomputer or terminal connected to a network. WQL write back To update a cube cell value, member, or member property value. write-ahead log A transaction logging method in which the log is always written prior to the data. x-axis The horizontal reference line on a grid, chart, or graph that has horizontal and vertical dimensions. XML for Analysis A specification that describes an open standard that supports data access to data sources that reside on the World Wide Web. XMLA XQuery Functional query language that is broadly applicable to a variety of XML data types derived from Quilt, XPath, and XQL. Both Ipedo and Software AG implement their own versions of the W3C's proposed specification for the XQuery language. Also called: XML Query, XQL. XSL XSL Transformation XSLT
https://msdn.microsoft.com/en-US/library/ms165911(d=printer,v=sql.105).aspx
CC-MAIN-2015-35
en
refinedweb
Skip navigation links public interface GranteeEntry A representation of a principal(user/group/role) or codesource granted with some permissions. Note: This interface is defined as a mechanism to exchange information only. The consumer must not implement this interface. Rather, the consumer should rely upon the existing public classes that implement this interface. CodeSourceEntry getCodeSourceEntry() java.util.List<PrincipalEntry> getPrincipalEntries() boolean equals(java.lang.Object another) equalsin class java.lang.Object Skip navigation links
http://docs.oracle.com/cd/E28280_01/apirefs.1111/e14650/oracle/security/jps/service/policystore/info/GranteeEntry.html
CC-MAIN-2015-35
en
refinedweb
csMD5 Class Reference This is an encapsulation of a C-implementation of MD5 digest algorithm by Peter Deutsch <ghost@aladdin.com>. More... #include <csutil/csmd5.h> Detailed Description This 78 of file csmd5.h. Member Typedef Documentation Member Function Documentation Encode a string. Encode a buffer. Encode a null-terminated string buffer. Append a string to the message. Finish the message and return the digest. Initialize the algorithm. The documentation for this class was generated from the following file: Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4.1/classcsMD5.html
CC-MAIN-2015-35
en
refinedweb
Once you've set up the directory, or have directed your program to communicate with an existing directory, what sort of information can you expect to find there? You can get two kinds of information from the directory: bindings and attributes. The directory can be viewed as consisting of name-to-object bindings. That is, each object in the directory has a corresponding name. You can retrieve an object in the directory by looking up its name. If you are using a naming service such as the file system (as you will be doing in some of this tutorial's examples), then the objects are files and they are bound to filenames. Also stored in the directory are attributes. An object in the directory, in addition to having a name, also has an optional set of attributes. You can ask the directory for an object's attributes, as well as ask it to search for an object that has certain attributes. This trail gives examples of accessing both kinds of information. The specifics of exactly what you can access from a naming or directory service depend on how the particular service has been laid out and what information has been added into it.. It also includes tools for updating an existing directory that has older versions of these schemas. Following is a list of tasks the tools can perform. Follow the instructions in the accompanying README file to run these programs.Follow the instructions in the accompanying README file to run these programs. - Create Java Schema - Create CORBA Schema - Update directory entries that use an outdated Java schema - Update directory entries that use an outdated CORBA schema Note 1: If you are using Netscape Directory Server 4.1, then you must update the schema. If you are updating the schema by manually updating its configuration files, then first locate the java-object-schema.conf file in the server installation at the directory namedNETSCAPE-DIRECTORY-HOME/slapd-SERVER-ID/config/The contents of java-object-schema.conf are out-of-date. You must replace them with the contents of the updated schema. See Note 2 for further instructions. If you are updating the schema using the Java programs that accompany this tutorial, then first locate the ns-schema.conf file in the server installation at the directory namedNETSCAPE-DIRECTORY-HOME/slapd-SERVER-ID/config/Comment out the line that contains java-object-schema.conf because that schema is out-of-date. Restart the server and use the CreateJavaSchema program to install the updated schema. You need to manually remove the reference to the old schema from the list of built-in schemas in ns-schema.conf. This is because the server does not permit such built-in schemas to be modified via the LDAP. Note 2: The Netscape Directory Server 4.1 has a different way of identifying attribute syntaxes than RFC 2252. For that server, you should use the following substitutions: - "case ignore string" for the attributes with the Directory String syntax (1.3.6.1.4.1.1466.115.121.1.15) - "binary" for the attribute with the Octet String syntax (1.3.6.1.4.1.1466.115.121.1.40) Note 3: Windows Active Directory. Active Directory manages its schema by using an internal format. To update the schema, you can use either the Active Directory Management Console snap-in, ADSIEdit, or the CreateJavaSchema utility, following the instructions for Active Directory. Providing Directory Content for This TutorialTo set up the file system namespace, run the Setup program. This program creates a file subtree that provides a common frame of reference for discussing what to expect in terms of listing and looking up objects from the file system. To run this program, give it the name of the directory in which to create the tutorial test namespace. For example, typing the followingcreates a directory /tmp/tutorial and populates it with directories and files.creates a directory /tmp/tutorial and populates it with directories and files.# java Setup /tmp/tutorial In the directory examplesONE Directory Server and Netscape Directory Server, add the aci entry suggested in the netscape, whereever it uses "o=JNDITutorial", use "o=JNDITutorial,dc=imc,dc=org" instead.Make this change for each line that begins with "dn:" in the file. Then, in all of the examples in this tutorial, whereever.
http://docs.oracle.com/javase/jndi/tutorial/basics/prepare/content.html
CC-MAIN-2015-35
en
refinedweb
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */ This module contains code to parse URIs and various related things such as: This module is implemented by HTParse.c, and it is a part of the W3C Sample Code Library. #ifndef HTPARSE_H #define HTPARSE_H #include "HTEscape.h" These functions can be used to get information in a URI. This returns those parts of a name which are given (and requested) substituting bits from the related name where necessary. The aName argument is the (possibly relative) URI to be parsed, the relatedName is the URI which the aName is to be parsed relative to. Passing an empty string means that the aName is an absolute URI. The following are flag bits which may be OR'ed together to form a number to give the 'wanted' argument to HTParse. As an example we have the URL: " /TheProject.html#news" #define PARSE_ACCESS 16 /* Access scheme, e.g. "HTTP" */ #define PARSE_HOST 8 /* Host name, e.g. "" */ #define PARSE_PATH 4 /* URL Path, e.g. "pub/WWW/TheProject.html" */ #define PARSE_VIEW 2 /* Fragment identifier, e.g. "news" */ #define PARSE_FRAGMENT PARSE_VIEW #define PARSE_ANCHOR PARSE_VIEW #define PARSE_PUNCTUATION 1 /* Include delimiters, e.g, "/" and ":" */ #define PARSE_ALL 31 where the format of a URI is as follows: " ACCESS :// HOST / PATH # ANCHOR" PUNCTUATION means any delimiter like '/', ':', '#' between the tokens above. The string returned by the function must be freed by the caller. extern char * HTParse (const char * aName, const char * relatedName, int wanted); This function creates and returns a string which gives an expression of one address as related to another. Where there is no relation, an absolute address is retured. extern char * HTRelative (const char * aName, const char *relatedName); Search the URL and determine whether it is a relative or absolute URL. We check to see if there is a ":" before any "/", "?", and "#". If this is the case then we say it is absolute. Otherwise we say it is relative. extern BOOL HTURL_isAbsolute (const char * url); Canonicalization of URIs is a difficult job, but it saves a lot of down loads and double entries in the cache if we do a good job. A URI is allowed to contain the seqeunce xxx/../ which may be replaced by "" , and the seqeunce "/./" which may be replaced by "/". Simplification helps us recognize duplicate URIs. Thus, the following transformations are done: but we should NOT change In the same manner, the following prefixed are preserved: In order to avoid empty URIs the following URIs become: If more than one set of `://' is found (several proxies in cascade) then only the part after the last `://' is simplified. extern char *HTSimplify (char **filename); In many telnet like protocols, it can be very dangerous to allow a full ASCII character set to be in a URI. Therefore we have to strip them out. HTCleanTelnetString() makes sure that the given string doesn't contain characters that could cause security holes, such as newlines in ftp, gopher, news or telnet URLs; more specifically: allows everything between hexadesimal ASCII 20-7E, and also A0-FE, inclusive. str extern BOOL HTCleanTelnetString (char * str); #endif /* HTPARSE_H */
http://www.w3.org/Library/src/HTParse
crawl-002
en
refinedweb
Copyright ©2003 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply. Proposed Recommendation of the "Speech Recognition Grammar Specification 1.0". Proposed Recommendation status is described in section 7.1.1 of the Process Document. W3C Advisory Committee Members are invited to send formal review comments to the W3C Team-only list voice-review@w3.org until 18 February 2004, 11pm EDT. The public is also invited to send comments to the Working Group's public discussion list www-voice@w3.org (archive). See W3C mailing list and archive usage guidelines. This document is based upon the Speech Recognition Grammar Specification 1.0 Candidate Recommendation of 26 June 2002 and feedback received during the review period (see the Disposition of Comments document). The Voice Browser Working Group believes that this specification addresses its Requirements and all Last Call and Candidate Recommendation issues. Known implementations are documented in the SRGS 1.0 implementation report, along with the associated test suite. W3C policy. This document has been produced as part of the W3C Voice Browser Activity, following the procedures set out for the W3C Process. The authors of this document are members of the Voice Browser Working Group (W3C Members only). A list of current W3C Recommendations and other technical documents can be found at79]. This form of language expression is sufficient for the vast majority of speech recognition applications. This W3C standard is known as the Speech Recognition Grammar Specification and is modelled on the JSpeech Grammar Format specification [JSGF], which is owned by Sun Microsystems, Inc., California, U.S.A. A "dtmf" mode grammars. For simplicity, throughout this document references to a speech recognizer apply to other types of grammar processor unless explicitly stated otherwise. A speech recognizer is a user agent with the following inputs and outputs: The primary use of a speech recognizer grammar is to permit a speech application to indicate to a recognizer what it should listen for, specifically:2] or through a speech recognizer API.: The XSL Transformation document in Appendix F demonstrates automatic conversion from XML to ABNF. The reverse conversion requires an ABNF parser and a transformational program. There are inherent limits to the automatic conversion to and from ABNF Form and XML Form. A. <book-flight> <depart>Prague</depart> <arrive>Paris</arrive> </book-flight> The Speech Recognition Grammar Specification provides syntactic support for limited semantic interpretation. The tag construct and the tag-format and tag declarations provide a placeholder for instructions to a semantic processor. The W3C3C activities. For examples of semantic interpretation in the latest working draft see [SEM]. The output of the semantic interpretation processor may be represented using the Natural Language Semantics Markup Language [NLSML]. This XML representation of interpreted spoken input can be used to transmit the result, as input to VoiceXML 2.0 [VXML2] processing or in other ways. The semantic interpretation carried out in the speech recognition process is typically characterized by:. The Speech Recognition Grammar Specification is designed to permit ABNF Form and XML Form grammars to be embedded into other documents. For example, VoiceXML 1.0 [VXML1] and VoiceXML 2.0 [VXML2] permit inline grammars [VXML2 §3.1.1.1] in which an ABNF Form grammar or XML Form grammar is contained within a VoiceXML document. Embedding an XML Form grammar within an XML document can be achieved with XML namespaces [XMLNS] or by incorporating the grammar Schema §2.7] or the escape sequences of " <" and " >" may be required to create well-formed XML. Note: angle brackets ('<' and '>') are used in ABNF to delimit any URI, media type or repeat operator. anyURI' primitive as defined in XML Schema Part 2: Datatypes [SCHEMA2 §3.2.17]. The Schema definition follows [RFC2396] and [RFC2732]. The syntax representation of a URI differs between the ABNF Form and the XML Form. Any relative URI reference must be resolved according to the rules given in Section 4.9.1. <> rulerefand lexiconelements. [See Appendix G for information on media types for the ABNF and XML Forms of the Speech Recognition Grammar Specification.] <>~<media-type> typeattribute. A legal rule expansion is any legal token, rule reference, tag, or any logical combination of legal rule expansions as sequence, alternatives, repeated expansion or language-attributed expansion. A rule expansion is formally a regular expression (see, for example, [HU79]). A rule definition associates a legal rule expansion with a rulename. A tokenization behavior: Text spans containing token sequences are delimited as follows: White Space Normalization: White space must be normalized when contained in any token delimited by a <token> elements or by double quotes. Leading and trailing white space characters are stripped. Any token-internal white space character or sequence is collapsed to a single space character (#x20). For example, the following are all normalized to the same string, "San Francisco". "San Francisco" " San Francisco " "San Francisco" " San Francisco " Because the presence of white space within a token is significant the following are distinct tokens. "San Francisco" "SanFrancisco" "San_Francisco" Token Normalization: Other normalization processes are applied to the white space normalized token according to the language and the capabilties of the speech recognizer. Grammar processors may assume Early Uniform Normalization as defined in the Character Model for the World Wide Web 1.0 [CHARMOD §4]. Pronunciation Lookup: To match spoken (audio) input to a grammar a speech recognition must be capable of modelling3C: ABNF Form Any plain text within a rule definition is token content. The ABNF Syntax (Appendix D) normatively defines the token parsing behavior. A language attachment may be provided for any token. When attached to a token the language modifies the handling of that token only. Informative The rule expansion of a rule definition is delimited at the start and end by equals sign ('=') and semicolon (';') respectively. Any leading plain text of the rule expansion is delimited by ('=') and similarly any final plain text is closed by semicolon. Within a rule expansion the following symbols have syntactic function and delimit plain text. - Dollar sign ('$') and angle brackets ('<' and '>') when needed mark rule references - Parentheses ('(' and ')') may enclose any rule expansion - Vertical bar ('|') delimits alternatives - Forward slashes ('/' and '/') delimit any weights on alternatives - Angle brackets ('<' and '>') delimit any repeat operator - Square brackets ('[' and ']') delimit any optional expansion - Curly brackets ('{' and '}') delimit any tag - Exclamation point ('!') prefixes any language identifier Within plain text regions delimited by these characters the tokenization, white space normalization, token normalization and pronunciation lookup processes described above apply. XML Form Any tokenelement explicitly delimits a single token as described above. The tokenelement may include an optional xml:langattribute to indicate the language of the contained token. Any other character data within a rule element (rule definition) or item element is token content. Note that character data within tag or example is not token text. Any legal rule reference is a legal rule expansion . Rulenames: Every rule definition has a local name that must be unique within the scope of the grammar in which it is defined. A rulename must match the "Name" Production of XML 1.0 [XML §2.3] and be a legal XML ID. Section 3.1 documents the rule definition mechanism and the legal naming of rules. This table summarizes the various forms of rule reference that are possible within and across grammar documents. Note: an XML Form grammar document must provide one and only one of the uri or special attributes on a ruleref element. There is no equivalent constraint in ABNF since the syntactic forms are distinct. When referencing rules defined locally (defined in the same grammar as contains the reference), always use a simple rulename reference which consists of the local rulename only. The ABNF Form and XML Form have a different syntax for representing a simple rulename reference. ABNF Form The simple rulename reference is prefixed by a "$" character.$city $digit XML Form The rulerefelement is an empty element with a uriattribute that specifies the rule reference as a same-document reference URI [RFC2396]: that is, the attribute consists only of the number sign ('#') and the fragment identifier that indicates the locally referenced rulename.<ruleref uri="#city"/> <ruleref uri="#digit"/> implicitly targets the root rule of the external grammar. Any externally-referenced rule may be activated for recognition. That is it may define the top-level syntax of spoken input. For instance, VoiceXML [VXML2] grammar activation may explicitly reference one or more public rules (see Section 3.2) and/or implicitly reference the root rule (see Section 4.7). A URI reference is illegal if the referring document and referenced document have different modes. For instance, it is illegal to reference a "dtmf" grammar from a "voice" grammar. (See Section 4.6 for additional detail on modes). A resource indicated by an URI reference may be available in one or more media types. The grammar author may specify the preferred media-type via the type attribute (XML form) or in angle braces following the URI (ABNF form).When the content represented by a URI is available in many data formats, a grammar processor may use the preferred media-type to influence which of the multiple formats is used. For instance, on a server implementing HTTP content negotiation, the processor may use the preferred media-type to order the preferences in the negotiation. The resource representation delivered by dereferencing the URI refererence may be considered in terms of two types. The declared media-type is the asserted value for the resource and the actual media-type is the true format of its content. The actual media-type should be the same as the declared media-type, but this is not always the case (e.g. a misconfigured HTTP server might return text/plain for an application/srgs+xml document). A specific URI scheme may require that the resource owner always, sometimes, or never return a media-type. The declared media-type is the value returned by the resource owner or, if none is returned, the preferred media type given in the grammar. There may be no declared media-type if the resouce owner does not return a value and no preferred type is specified. Whenever specified, the declared media-type is authoritative. Three special cases may arise. The declared media-type may not be supported by the processor; this is an error. The declared media-type may be supported but the actual media-type may not match; this is also an error. Finally, there may be no declared media-type; the behavior depends on the specific URI scheme and the capabilities of the grammar processor. For instance, HTTP 1.1 allows document intraspection (see RFC 2616, section 7.2.1), the data scheme falls back to a default media type, and local file access defines no guidelines. The following table provides some informative examples: See Appendix G for a summary of the status for media types for ABNF Form and XML Form grammars. ABNF Form In ABNF an external reference by URI is represented by a dollar sign ('$') followed immediately by either an ABNF URI or ABNF URI with media type. There must be no white space between the dollar sign and the URI.// References to specific rules of an external grammar $<> $<> // Implicit reference to the root rule of an external grammar $<../date.gram> // References with associated media types $<>~<application/srgs> $<../date.gram>~<application/srgs> Note: the media type of "application/srgs"has been requested for ABNF Form grammars. See Appendix G for details. XML Form An XML rule reference is represented by a rulerefelement with a uriattribute that defines the URI of the referenced grammar and rule within it. If a fragment identifier is appended then the identifer indicates a specific rulename being referenced. If the fragment identifier is omitted then the reference is (implicitly) to the root rule of the referenced grammar. The optional typeattribute specifies the media type of the grammar containing the reference.<!-- References to specific rules of an external grammar --> <ruleref uri=""/> <ruleref uri=""/> <!-- Implicit reference to the root rule of an external grammar --> <ruleref uri="../date.grxml"/> <!-- References with associated media types --> <ruleref uri="" type="application/srgs+xml"/> <ruleref uri="../date.grxml" type="application/srgs+xml"/> Note: the media type "application/srgs+xml"has been requested for XML Form grammars. See Appendix G for details on media types for grammars. Several rulenames are defined to have specific interpretation and processing by a speech recognizer. A grammar must not redefine these rulenames. In the ABNF Form a special rule reference is syntactically identical to a local rule reference. However, the names of the special rules are reserved to prevent a rule definition with the same name. In the XML Form a special rulename is represented with the special attribute on a ruleref element. It is illegal to provide both the special and the uri attributes. ABNF Form: $NULL XML Form: <ruleref special="NULL"/> ABNF Form: $VOID XML Form: <ruleref special="VOID"/> ABNF Form: $GARBAGE XML Form: <ruleref special="GARBAGE"/> $location = $city $GARBAGE $state; <rule id="location"> <ruleref uri="#city"/> <ruleref special="GARBAGE"/> <ruleref uri="#state"/> </rule> The W3C rulename when referencing ABNF Form and XML Form grammars) identifies a start symbol as defined by the N-Gram specification. If the start symbol is absent the N-Gram, as a whole, is referenced as defined in the N-Gram specification. ABNF Form URI references to N-Gram documents follow the same syntax as references to other ABNF or XML Form grammar documents. The following are examples of references to an N-Gram document via an explicit rule reference and an implicit reference to the root rule.$<> $<> XML Form URI references to N-Gram documents follow the same syntax as reference to other ABNF Form and XML Form grammar documents. The following are examples of references to an N-Gram document via an explicit rule reference and an implicit reference to the root rule.<ruleref uri=""/> <ruleref uri=""/> A sequence of legal rule expansions is itself a legal rule expansion. The sequence of rule expansions implies the temporal order in which the expansions must be detected by the user agent. This constraint applies to sequences of tokens, sequences of rule references, sequences of tags, parentheticals A A sequence of XML rule expansion elements ( <ruleref>, <item>, <one-of>, <token> <tag>) and CDATA sections containing space separated tokens must be recognized in temporal sequence. (The only exception is where one or more "item" elements appear within a one-ofelement.) An itemelement can surround any expansion to permit a repeat attribute or language identifier to be attached. The weightattribute of itemis ignored unless the element appears within a one-ofelement.<!-- sequence of tokens --> this is a test <!--sequence of rule references--> <ruleref uri="#action"/> <ruleref uri="#object"/> <!--sequence of tokens and rule references--> the <ruleref uri="#object"/> is <ruleref uri="#color"/> <!-- sequence container --> <item>fly to <ruleref uri="#city"/> </item> Special cases An empty item element is legal as is an item element containing only white space. Both forms are equivalent to a NULL reference and a grammar processor will behave as if the item were not present.<!-- equivalent sequences --> phone home phone <item/> home phone <item></item> home phone <item> </item> home. A weight may be optionally provided for any number of alternatives in an alternative expansion. Weights98] and [RAB93] are informative references on the topic of speech recognition technology and the underlying statistical framework within which weights are applied. Grammar authors and speech recognizer developers should be aware of the following limitations upon the definition and application of weights as outlined above. ABNF Form A set of alternative choices is identified as a list of legal expansions separated by the vertical bar symbol. If necessary, the set of alternative choices may be delimited by parentheses.Michael | Yuriko | Mary | Duke | $otherNames (1 | 2 | 3) A weight1 = word | $NULL; $rule2 = () | word; $rule3 = word | {TAG-CONTENT}; An empty alternative (white space only) is not legal.// ILLEGAL $rule1 = a | | b; $rule2 = | b; $rule3 = a |; The following construct is interpreted as a single weighted alternative.// Legal $rule1 = /2/ word; $rule2 = /2/ {TAG-CONTENT}; $rule3 = /2/ $NULL; XML Form The one-ofelement identifies a set of alternative elements. Each alternative expansion is contained in a itemelement. There must be at least one itemelement contained within a one-ofelement. Weights are optionally indicated by the weightattribute on the itemelement.<one-of> <item>Michael</item> <item>Yuriko</item> <item>Mary</item> <item>Duke</item> <item><ruleref uri="#otherNames"/></item> </one-of> <one-of><item>1</item> <item>2</item> <item>3</item></one-of> <one-of> <item weight="10">small</item> <item weight="2">medium</item> <item>large</item> </one-of> <one-of> <item weight="3.1415">pie</item> <item weight="1.414">root beer</item> <item weight=".25">cola</item> </one-of> Special cases A one-ofelement containing a single item is legal and requires that input match the single item. The single item may be optionally weighted.<one-of> <item>word</item> </one-of> <one-of> <item weight="2.0">word</item> </one-of> Is it legal for an alternative to be a reference to NULL, an empty item or a single tag. In each case the input is equivalent to matching NULL and as a result the other alternatives are optional.<one-of> <item>word</item> <item/> </one-of> <one-of> <item>word</item> <item> <ruleref special="NULL"/> </item> </one-of> <one-of> <item>word</item> <item> <tag>TAG-CONTENT</tag> </item> </one-of>leene. Where a number of possible repetitions (e.g. <m-> or <m-n> (n > 0) but not <0>) is expressed on a construct whose only content is one or more tag elements, the behavior of the grammar processor is not defined and will be specific to individual implementations. Any number of non-optional repetitions (e.g., <m. <0> or <0-0>) then the expansion is equivalent to NULL.nnn", ".nnnn" . Useful references on statistical models of speech recognition include [JEL98] and [RAB93]. ABNF Form The following are postfix operators: <m-n> <m-> <m>. " <0-1>". The following symbols are reserved for future use in ABNF: '*', '+', '?'. These symbols must not be used at any place in a grammar where the syntax currently permits a repeat operator.// the token "very" is optional [very] very <0-1> // the rule reference $digit can occur zero, one or many times $digit <0-> // the rule reference $digit can occur one or more times $digit <1-> // the rule reference $digit can occur four, five or six times $digit <4-6> // the rule reference $digit can occur ten or more times $digit <10-> // Examples of the following expansion // "pizza" // "big pizza with pepperoni" // "very big pizza with cheese and pepperoni" [[very] big] pizza ([with | and] $topping) <0-> Repeat probabilities are only supported in the range form. The probability is delimited by slash characters and contained within the angle brackets: <m-n /prob/>and <m- /prob/>.// the token "very" is optional and is 60% likely to occur // and 40% likely to be absent in input very <0-1 /0.6/> // the rule reference $digit must occur two to four times // with 80% probability of recurrence $digit <2-4 /.8/> XML Form The itemelement has a repeatattribute that indicates the number of times the contained expansion may be repeated. The following example illustrates the accepted values of the attribute.<!-- the token "very" is optional --> <item repeat="0-1">very</item> <!-- the rule reference to digit can occur zero, one or many times --> <item repeat="0-"> <ruleref uri="#digit"/> </item> <!-- the rule reference to digit can occur one or more times --> <item repeat="1-"> <ruleref uri="#digit"/> </item> <!-- the rule reference to digit can occur four, five or six times --> <item repeat="4-6"> <ruleref uri="#digit"/> </item> <!-- the rule reference to digit can occur ten or more times --> <item repeat="10-"> <ruleref uri="#digit"/> </item> <!-- Examples of the following expansion --> <!-- "pizza" --> <!-- "big pizza with pepperoni" --> <!-- "very big pizza with cheese and pepperoni" --> <item repeat="0-1"> <item repeat="0-1"> very </item> big </item> pizza <item repeat="0-"> <item repeat="0-1"> <one-of> <item>with</item> <item>and</item> </one-of> </item> <ruleref uri="#topping"/> </item> The repeat-probon the item element carries the repeat probability. Repeat probabilities are supported on any item element but are ignored if the repeat attribute is not also specified.<-- The token "very" is optional and is 60% likely to occur. --> <-- Means 40% chance that "very" is absent in input --> <item repeat="0-1" repeat-very</item> <-- The rule reference to digit must occur two to four times --> <-- with 80% probability of recurrence. --> <item repeat="2-4" repeat- <ruleref uri="#digit"/> </item> A tag is a legal rule expansion (a tag can also be declared in the grammar header - see S4. It is legal to use a tag as a stand-alone expansion. For example, a rule may expand to a single tag and no tokens. $rule = {TAG-CONTENT}; <rule id="rule"><tag>TAG-CONTENT</tag></rule> ABNF Form A tag whitespace.1 = this is a {TAG-CONTENT-1} test {TAG-CONTENT-2}; $rule2 = open {TAG-CONTENT-1} | $close {TAG-CONTENT-2}; $rule3 = {!{ a simple tag containing { and } needs no escaping }!}; XML Form A tagelement can be a direct child of the itemand ruleelements. The content of tagis CDATA.<rule id="rule1">this is a <tag>TAG-CONTENT-1</tag> test <tag>TAG-CONTENT-2</tag> </rule> <rule id="rule2"> <one-of> <item> open <tag>TAG-CONTENT-1</tag> </item> <item> <ruleref uri="#close"/> <tag>TAG-CONTENT-2</tag> </item> </one-of> </rule>ré Prév1 = (Michel Tremblay | André Roy)!fr-CA; // Handling language-specific pronunciations of the same word // A capable speech recognizer will listen for Mexican Spanish and // US English pronunciations. $people2 = Jose!en-US; | Jose!es-MX; /** * Multi-lingual input possible * @example may I speak to André Roy * @example may I speak to Jose */ public $request = may I speak to ($people1 | $people2); XML Form XML 1.0 [XML §2.12] defines the xml:langattribute for language identification. The attribute provides a single language identifier for the content of the element on which it appears. The xml:langattribute may be attached to one-of, tokenand item. It applies the token handling of scoped tokens.<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <!-- the default grammar language is US English --> <grammar xmlns="" xmlns: <!-- single language attachment to tokens "yes" inherits US English language "oui" is Canadian French language --> <rule id="yes"> <one-of> <item>yes</item> <item xml:oui</item> </one-of> </rule> <!-- Single language attachment to an expansion --> <rule id="people1"> <one-of xml: <item>Michel Tremblay</item> <item>André Roy</item> </one-of> </rule> <!-- Handling language-specific pronunciations of the same word A capable speech recognizer will listen for Mexican Spanish and US English pronunciations. --> <rule id="people2"> <one-of> <item xml:Jose</item> <item xml:Jose</item> </one-of> </rule> <!-- Multi-lingual input is possible --> <rule id="request" scope="public"> <example> may I speak with André Roy </example> <example> may I speak with Jose </example> may I speak with <one-of> <item> <ruleref uri="#people1"/> </item> <item> <ruleref uri="#people2"/> </item> </one-of> </rule> </grammar>. - A rule reference, a quoted token, an unquoted token or a tag. - Parentheses ('(' and ')') for grouping and square brackets ('[' and ']') for optional grouping. - Repeat operator (e.g. " <0-1>") and language attachment (e.g. "!en-AU") apply to the tightest immediate preceding rule expansion. (To apply them to a sequence or to alternatives, use `()' or `[]' for grouping.) - Sequence of rule expansions. - Set of alternative rule expansions separated by vertical bars ('|') with optional weights. XML Form None required. XML structure is explicit. A rule definition associates a legal rule expansion with a rulename. rulename for each rule definition must be unique within a grammar. The same rulename may be used in multiple grammars. A rule definition is referenced by a URI in a rule reference with the rulename being represented as the fragment identifier. The core purpose of a rule definition is to associate a legal rule expansion with a rulename. A legal rulename in either the XML Form or ABNF Form is a character sequence that: Defined rulenames must be unique within a grammar. The Schema enforces this by declaring the rulename as an XML ID. Rulenames are case-sensitive in both XML and ABNF grammars. Exact string comparison is used to resolve rulename references. A legal rulenameExpansion; public $ruleName = ruleExpansion; private $ruleName = ruleExpansion; A rule definition is represented by the ruleelement. The idattribute of the element indicates the name of the rule and must be unique within the grammar (this is enforced by XML). The contents of the ruleelement may be any legal rule expansion defined in Section 2. The scopeattribute is explained in the next section.<rule id="city"> <one-of> <item>Boston</item> <item>"San Francisco"</item> <item>Madrid</item> </one-of> </rule> <rule id="command"> <ruleref uri="#action"/> <ruleref uri="#object"/> </rule> tagelement.<!-- Legal --> <rule id="rule"><item/></rule> <rule id="rule"><ruleref special="NULL"/></rule> <rule id="rule"><tag>TAG-CONTENT</tag></rule> <!-- ILLEGAL --> <rule id="rule"/> <rule id="rule"></rule> <rule id="rule"> </rule>. ABNF Form A rule definition may be annotated with the keywords "public" or "private". If no scope is provided, the default is "private".$town = Townsville | Beantown; private $city = Boston | "New York" | Madrid; public $command = $action $object; XML Form The scopeattribute of the ruleelement defines the scope of the rule definition. Defined values are publicand private. If omitted, the default scope is private.<rule id="town"> <one-of> <item>Townsville</item> <item>Beantown</item> </one-of> </rule> <rule id="city" scope="private"> <one-of> <item>Boston</item> <item>"San Francisco"</item> <item>Madrid</item> </one-of> </rule> <rule id="command" scope="public"> <ruleref uri="#action"/> <ruleref uri="#object"/> </rule> A documentation comment is a C/C++/Java comment that starts with the sequence of characters /**and which immediately precedes the relevant rule definition. Zero or more @exampletags may be contained at the end of the documentation comment. The syntax follows the Tagged Paragraph of a documentation comment of the Java Programming Language [JAVA §18.4]. The tokenization of the example follows the tokenization tokenization of the example follows the tokenization and sequence rules defined in Section 2.1 and Section 2.3 respectively.<rule id="command" scope="public"> <!-- A simple directive to execute an action. --> <example> open the window </example> <example> close the door </example> <ruleref uri="#action"/> <ruleref uri="#object"/> </rule> A conforming stand-alone grammar document consists of a legal header followed by a body consisting of a set of legal rule definitions. All rules defined within that grammar are scoped within the grammar's rulename namespace and each rulename must be legal and unique. It is legal for a grammar to define no rules. The grammar cannot be used for processing input since it defines no patterns for matching user input. A. A A legal header for a stand-alone ABNF document consists of a required ABNF self-identifying header including the grammar version and optional character encoding followed by these declarations in any order: - Language - Mode - Root rule - Tag format - Base URI - Pronunciation lexicon (any number) - Meta and http-equiv (any number) -Rule; tag-format FORMAT-STRING; base <>; lexicon <>; lexicon <>~<media-type>; $QuebecCities; XML Form: Header Summary A legal stand-alone XML Form grammar document consists of: - Legal XML Prolog - Root grammar element with the following attributes - XML namespace - Schema attributes - Version - Language - Mode - Root rule - Tag format - Base URI - grammarelement containing any number of the following elements in any order: - Pronunciation lexicon (any number) - Meta and HTTP-Equiv (any number) - Metadata (any number) - Tag (any number) Rule definitions follow the lexicon, meta, metadataand tagdeclarations. The following are examples of XML Form grammars headers each including all declarations permitted on the grammarelement and one with the DOCTYPE declaration.<?xml version="1.0" encoding="ISO-8859-1"?> <grammar version="1.0" xml:<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <grammar version="1.0" xml: The ABNF self-identifying header must be present in any legal stand-alone ABNF Form grammar document. The first character of an ABNF document must be the "#" symbol (x23) unless preceded by an optional XML 1.0 byte order mark [XML §4.3.3]. The ABNF byte order mark follows the XML definition and requirements. For example, documents encoded in UTF-16 must begin with the byte order mark. The optional byte order mark and required "#" symbol must be followed immediately by the exact string "ABNF" (x41 x42 x4d x46) or the appropriate equivalent for the document's encoding (e.g. for UTF-16 little-endian: x23 x00 x41 x00 x42 x00 x4d x00 x46 x00). If the byte order mark is absent on a grammar encoded in UTF-16 then the grammar processor should perform auto-detection of character encoding in a manner analogous to auto-detection of character encoding in XML [XML §F]. Next follows a single space character (x20) and the required version number which is " 1.0" for this specification (x31 x2e x30). Next follows an optional character encoding. Section 4.4 defines character encodings in more detail. If present, there must be a single space character (x20) between the version number and the character encoding. The self-identifying header is finalized with a semicolon (x3b) followed immediately by a newline. The semicolon must be the first character following the version number or the character encoding if is present. For the remaining declarations of the ABNF header white space is not significant. A legal stand-alone XML Form grammar document must have a legal XML Prolog [XML §2.8]. The XML prolog in an XML Form grammar comprises the XML declaration and an optional DOCTYPE declaration referencing the grammar DTD. It is followed by the root grammar element. The XML prolog may also contain XML comments, processor instructions and other content permitted by XML in a prolog. for XML Form grammars is defined as. It is recommended that the grammar element also indicate the location of the grammar schema (see Appendix C) via the xsi:schemaLocation attribute from [SCHEMA1]. Although such indication is not required, to encourage it this document provides such indication on all of the examples: <grammar version="1.0" xmlns="" xmlns: ... </grammar> If present, the optional DOCTYPE must reference the standard DOCTYPE and identifier. <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> The character encoding is defined on the XML declaration as defined by the XML specification. See Section 4.4 for detail. The language is defined by the xml:lang attribute on the grammar element. See Section 4.5 for details. The grammar mode is defined on the grammar element. See Section 4.6 for details. The root rule is defined on the grammar element. See Section 4.7 for details. The tag-format is defined on the grammar element. See Section 4.8 for details. The base URI for the document is defined by the xml:base attribute on the grammar element. See Section 4.9 for details.. Except for the different syntactic representation, the ABNF Form follows the character encoding handling defined for XML. XML grammar processors must accept both the UTF-8 and UTF-16 encodings of ISO/IEC 10646 and may support other character encodings. This follows from an XML grammar processor being a compliant XML processor and thus required to support those character encodings. For consistency, ABNF grammar processor must also accept both the UTF-8 and UTF-16 encodings of ISO/IEC 10646 and may support other character encodings. For both XML Form and ABNF Form grammars the declaration of the character encoding is optional but strongly recommended. XML defines behavior for XML processors that receive an XML document without a character encoding declaration. For consistency an ABNF grammar processor must follow the same behavior (with adjustments for the different syntax). (Note the character encoding declaration is optional only in cases where it is optional for a legal XML document.) ABNF Form The character encoding declaration is part of the self-identifying grammar header defined in Section 4.1 and is processed in combination with the byte order mark, if present, using the same procedure as XML 1.0 [XML §4.3.3]. The following are examples of ABNF self-identifying grammar headers with and without the character encoding declaration. Note: the ABNF Form syntax does not provide a character reference syntax for entry of a specific character, for example, one not directly accessible from available input devices. This contrasts with XML 1.0 syntax for character references [XML §4.1]. For development requiring character references the XML Form of the specification is recommended.#ABNF 1.0 ISO-8859-1;#ABNF 1.0 EUC-JP;#ABNF 1.0; XML Form XML declares character encodings as part of the document's XML declaration on the first line of the document. The following are examples of XML headers with and without the character encoding declaration.<?xml version="1.0" encoding="ISO-8859-1"?><?xml version="1.0" encoding="EUC-JP"?><?xml version="1.0"?> The language declaration of a grammar provides the language identifier that indicates the primary language contained by the document and optionally indicates a country or other variation. Additionally, any legal rule expansion may be labeled with a language identifier. The language declaration is required for all speech recognition grammars: i.e. all grammars for which the mode is "voice". (Note that mode defaults to voice if there is no explicit mode declaration in ABNF or mode attribute in XML.) If an XML Form grammar is incorporated within another XML document -- for example, as supported by VoiceXML 2.0 -- then the xml:lang attribute is optional on the grammar element and the xml:lang attribute must be inherited from the enclosing document. In DTMF grammars a language declaration must be ignored if present. The conformance definition in Section 5 defines the behavior of a grammar processor when it encounters a language variant that it does not support. ABNF Form The ABNF header must contain zero or one language declaration. It consists of the keyword " language", white space, a legal language identifier, optional white space and a terminating semicolon character (';').language en-US;language fr; XML Form Following the XML 1.0 specification [XML §2.12] the language identifier is indicated by an xml:langattribute on the root grammarelement.<grammar xmlns="" xmlns:<grammar xmlns="" xmlns: The mode of a grammar indicates the type of input that the user agent should be detecting. The default mode is " voice" for speech recognition grammars. An alternative input mode defined in Appendix E is " dtmf" input. The mode attribute indicates how to interpret the tokens contained by the grammar. Speech tokens are expected to be detected as speech audio that sounds like the token. Behavior with DTMF input, if supported, is defined in Appendix E. It is often the case that a different user agent is used for detecting DTMF tones than for speech recognition. The same may be true for other modes defined in future revisions of the specification. The specification does not define a mechanism by which a single grammar can mix modes: that is, a representation for a mixed " voice" and " dtmf" grammar is not defined. Moreover, it is illegal for a rule reference in one grammar to reference any grammar with a different mode. A user agent may, however, support the simultaneous activation of more than one grammar including both " voice" and " dtmf" grammars. This is necessary, for example, for DTMF-enabled VoiceXML browsers [VXML2]. (Note: parallel activation implies disjunction at the root level of the grammars rather than mixing of modes within the structure of the grammars.) ABNF Form The ABNF header must contain zero or one mode declaration. It consists of the keyword " mode", white space, either " voice" or " dtmf" optional white space and a terminating semicolon character (';'). If the ABNF header does not declare the mode then it defaults to voice.mode voice;mode dtmf; XML Form The modedeclaration is provided as an optional modeattribute on the root grammarelement. Legal values are "voice"and "dtmf". If the mode attribute is omitted then the value defaults to voice.<grammar mode="voice" version="1.0" xml:<grammar mode="dtmf" version="1.0" xmlns="" xmlns: Both the XML Form and ABNF Form permit the grammar header to optionally declare a single rule to be the root rule of the grammar. The rule declared as the root rule must be defined within the scope of the grammar. The rule declared as the root rule may be scoped as either public or private. An implicit rule reference to the root rule of a grammar is legal. The syntax for implicitly referencing root rules is defined in Section 2.2. It is an error to reference a grammar implicitly by its root if that grammar does not declare a legal root rule. Although a grammar is not required to declare a root rule it is good practice to declare the root rule of any grammar. ABNF Form The ABNF header must contain zero or one root rule declaration. It consists of the keyword " root", white space, the legal rulename of a rule defined within the grammar prefixed by the dollar sign ('$'), optional white space and a terminating semicolon character (';'). If the ABNF header does not declare the root rule then it is not legal to implicitly reference the grammar by its root.root $rulename; XML Form The rootrulename declaration is provided as an optional rootattribute on the grammarelement. The rootdeclaration must identify one rule defined elsewhere within the same grammar. The value of the root attribute is an XML IDREF (not a URI) and must not include the number sign ('#').<grammar root="rulename" ...> The tag-format declaration is an optional declaration of a tag-format identifier that indicates the content type of all rule tags and header tags contained within a grammar. The tag-format identifier is a URI. It is recommended that the tag format identifier indicate both the content type and a version. Tags typically contain content for a semantic interpretation processor and in such cases the identifier, if present, should indicate the semantic processor to use. Tag-format identifier values beginning with the string "semantics/x.y" (where x and y are digits) are reserved for use by the W3C Semantic Interpretation for Speech Recognition specification [SEM] or future versions of the specification. Grammar processor handling of tags is undefined if the tag format declaration is omitted. ABNF Form The ABNF header must contain zero or one tag format declaration. It consists of the keyword " tag-format", white space, a tag format identifier (an ABNF URI), optional white space and a terminating semicolon character (';'). Informative example ("semantics/1.0" is a reserved identifier) :tag-format <semantics/1.0>; XML Form The tag-formatis an optional attribute of the grammarelement and contains a tag format identifier.<grammar tag-format="semantics/1.0" ...> Relative URIs are resolved according to a base URI, which may come from a variety of sources. The base URI declaration allows authors to specify a document's base URI explicitly. See Section 4.9.1 for details on the resolution of relative URIs. The path information specified by the base URI declaration only affects URIs in the document where the element appears. The base URI declaration is permitted but optional in both the XML Form and the ABNF Form. Note: the base URI may be declared in a meta declaration but the explicit base declaration is recommended for both the ABNF Form and XML Form. ABNF Form The ABNF header must contain zero or one base URI declaration. It consists of the keyword " base", white space, a legal ABNF URI, optional white space and a terminating semicolon character (';').base <>;base <>; XML Form The base URI declaration follows [XML-BASE] and is indicated by a xml:baseattribute on the root grammarelement.<grammar xmlns="" xml:<grammar xmlns="" xml: User agents must calculate the base URI for resolving relative URIs according to [RFC2396]. The following describes how [RFC2396] applies to grammar documents. User agents must calculate the base URI according to the following precedences (highest priority to lowest): xml:baseattribute on the grammarelement or the basedeclaration in the ABNF header (see Section 4.9). A grammar may optionally reference one or more external pronunciation lexicon documents. A lexicon document is identified by a URI with an optional media type. The pronunciation information contained within a lexicon document is used only for tokens defined within the enclosing grammar. The W3C Voice Browser Working Group is developing the Pronunciation Lexicon Markup Language [LEX]. The specification will address the matching process between tokens and lexicon entries and the mechanism by which a speech recognizer handles multiple pronunciations from internal and grammar-specified lexicons. Pronunciation handling with proprietary lexicon formats will necessarily be specific to the speech recognizer. Pronunciation lexicons are necessarily language-specific. Pronunciation lookup in a lexicon and pronunciation inference for any token may use an algorithm that is language-specific. (See Section 2.1 for additional information on token handling and pronunciations.) ABNF Form The ABNF header may contain any number of pronunciation lexicon declarations (zero, one or many). The lexicon declaration consists of the " lexicon" keyword followed by white space, an ABNF URI or ABNF URI with media type, optional white space and a closing semicolon (';'). (Note that a lexicon URI is not preceded by a dollar sign as is the case for ABNF rule references.) Example:#ABNF V1.0 ISO-8859-1; language en-US; lexicon <>; lexicon <>~<media-type>; ... XML Form Any number of lexiconelements may occur as immediate children of the grammarelement. The lexiconelement must have a uriattribute specifying a URI that identifies the location of the pronunciation lexicon document. The lexiconelement may have a typeattribute that specifies the media type of the pronunciation lexicon document.<grammar xmlns="" xmlns: <lexicon uri=""/> <lexicon uri="" type="media-type"/> ... Grammar documents let authors specify meta data -- information about a document rather than document content -- in a number of ways. A meta declaration in either the ABNF Form or XML Form may be used to express metadata information in both XML Form and ABNF Form grammars or to reference metadata available in an external resource. The XML Form also supports a metadata element that provides a more general and powerful treatment of metadata information than meta. Since metadata requires an XML metadata schema which cannot be expressed in ABNF, there is no equivalent of metadata in the ABNF Form of grammars. A meta declaration in either ABNF Form or the XML Form associates a string to declared meta property or declares "http-equiv" content. The seeAlso property is the only defined meta property name. It is used to specify a resource that might provide additional metadata information about the containing grammar. This property is modelled on the rdfs:seeAlso property of Resource Description Framework (RDF) Schema Specification 1.0 [RDF-SCHEMA §2.3.4]. It is recommended that for general metadata properties that grammar authors follow the metadata properties defined in the Dublin Core Metadata Initiative [DC]. For example, "Creator" to identify the entity primarily responsible for making the content of the grammar, "Date" to indicate creation date, or "Source" to indicate the resource from which a grammar is derived (e.g. when converting an XML Form grammar to the ABNF Form, use "Source" to provide the URI for the original document.) ABNF Form The ABNF header may contain any number of meta declarations and http-equiv declarations (zero, one or many). Each declaration consists of the " meta" or " http-equiv" keyword followed by white space, the name string delimited by quotes, the keyword " is", white space, the content string delimited by quotes, optional white space and a closing semicolon (';'). The name string and the content string must be delimited by either a matching pair of double quotes ('"') or a matching pair of single quotes ("'"). Informative example:#ABNF 1.0; meta "Creator" is "Stephanie Williams"; meta "seeAlso" is ""; http-equiv "Expires" is '0'; http-equiv "Date" is "Thu, 12 Dec 2000 23:27:21 GMT"; XML Form A metadata property is declared with a metaelement. Either a nameor http-equivattribute is required. It is illegal to provide both nameand http-equivattributes. A contentattribute is required. The meta, metadataand lexiconelements must occur before all rule elements contained with the root grammarelement. There are no constraints on the ordering of the meta, metadataand lexiconelements. Informative example:<?xml version="1.0"?> <grammar version="1.0" xml: <meta name="Creator" content="Stephanie Williams"/> <meta name="seeAlso" content=""/> <meta http- <meta http- ... </grammar> The metadata element is container in which information about the document can be placed using a metadata schema. Although any metadata schema can be used with metadata, it is recommended that the Resource Description Format (RDF) schema [RDF-SCHEMA].). This specification only defines an XML representation for this form of meta-data declaration. There is no ABNF equivalent for metadata. A conversion of an XML Form grammar to the ABNF Form may extract the XML metadata into a separate document that is referenced with a "seeAlso" meta declaration in the ABNF document. Note: an agent that searches XML documents for metadata represented with RDF would be unable to locate RDF even if it were represented in ABNF. Thus, support for RDF in ABNF was considered low utility. XML Form Document properties declared with metadataelement can use any metadata schema. The metadata, meta, and lexiconelements must occur before all rule elements contained with the root grammarelement. There are no constraints on the ordering of the metadata, metaand lexiconelements. Informative: This is an example of how metadatacan be included in an XML grammar document using the Dublin Core version 1.0 RDF schema [DC] describing general document information such as title, description, date, and so on:<?xml version="1.0"?> <grammar xmlns="" version="1.0" xmlns: <metadata> <rdf:RDF xmlns: <!-- Metadata about the grammar document --> <rdf:Description <dc:Creator> <rdf:Seq <rdf:li>Jackie Crystal</rdf:li> <rdf:li>Jan Smith</rdf:li> </rdf:Seq> </dc:Creator> </rdf:Description> </rdf:RDF> </metadata> </grammar> A grammar may optionally specify one or more tag declarations in the header. The content of a tag in the header, just like a tag in rule expansions, is an arbitrary string which may be used for semantic interpretation. ABNF Form The ABNF header may contain any number of tag declarations (zero, one or many). The tag declaration consists a string delimited as described in S2.6 ABNF Form, followed by a closing semicolon (';'). The tag content is all text between the opening and closing delimiters including leading and trailing whitespace. The contents of the tag are not parsed by the grammar processor.#ABNF V1.0 ISO-8859-1; language en-US; {TAG-CONTENT-1}; {!{TAG-CONTENT-2}!}; $rule = . . .; ... XML Form Any number of tagelements may occur as immediate children of the grammarelement. The content of tagis CDATA.<grammar xmlns="" xmlns: <tag>TAG-CONTENT-1<tag> <tag>TAG-CONTENT-2<tag> ... Comments may be placed in most places in a grammar document. For XML, use XML comments. For ABNF there are documentation comments and C/C++/Java-style comments. ABNF Form C/C++/Java comments are permitted. Documentation comments are permitted before grammarand languagedeclarations and before any ruledefinition. Section 3.3 defines the format for representing examples in documentation comments before a rule definition.// C++/Java-style single-line comment /* C/C++/Java-style comment */ /** Java-style documentation comment */ XML Form An XML comment has the following syntax.<!-- comment --> The fetching and caching behavior of both ABNF Form and XML Form grammar documents is defined primarily by the environment in which the grammar processor operates. For instance, VoiceXML 1.0 and VoiceXML 2.0 define certain fetching and caching behaviors that apply to grammars activated by a VoiceXML browser. Similarly, any API for a recognizer that supports ABNF Form or XML Form grammars may apply fetching and caching behaviors. Grammar processors are recommended to support the following interpretation of "rendering" a grammar for the purpose of determining document freshness. Activation of a grammar is the point at which the recognizer begins detection of user input matching the grammar and is therefore analogous to the action of visual or audio rendering of system output. As with output rendering, grammar freshness should be checked close to the moment of grammar activation. ABNF keywords are case sensitive. The keywords of the ABNF language are not reserved. The keywords with specified meaning in ABNF are: Since keywords are not reserved they may be used as rulenames and as tokens. The following is a legal grammar that accepts as input a sequence of one or more "public" tokens. #ABNF 1.0 ISO-8859-1; language en-AU; root $public; mode voice; public $public = public $public | public; This section is Normative. Different sets of grammar conformance criteria exist for: An XML Form grammar document fragment is a Conforming XML Form Grammar Fragment if: xmlnsattributes which refer to non-grammar namespace elements are removed from the document, <?xml...?>) is included at the top of the document, grammarelement does not already designate the grammar namespace using the "xmlns" attribute, then xmlns=""is added to the element. A document is a Conforming Stand-Alone XML Form Grammar Document if it meets both the following conditions. The XML Form grammar specification and these conformance criteria provide no designated size limits on any aspect of grammar documents. There are no maximum values on the number of elements, the amount of character data, or the number of characters in attribute values. The grammar namespace may be used with other XML namespaces as per the Namespaces in XML Recommendation [XMLNS]. Future work by W3C will address ways to specify conformance for documents involving multiple namespaces. An XML Form grammar processor is a program that can parse and process XML Form grammar documents. Examples include speech recognizers and DTMF detectors that accept the XML Form. In a Conforming XML Form Grammar Processor, the XML parser must be able to parse and process all XML constructs defined by XML 1.0 [XML] and Namespaces in XML [XMLNS]. This XML parser is not required to perform validation of a grammar document as per its schema or DTD; this implies that during processing of an XML Form grammar document it is optional to apply or expand external entity references defined in an external DTD. A Conforming XML Form Grammar Processor must correctly understand and apply the semantics of each possible grammar feature defined by this document. A Conforming XML Form Grammar Processor must meet the following requirements for handling of languages: When a Conforming XML Form Grammar Processor encounters elements or attributes in a non-grammar namespace it may: A Conforming XML Form Grammar Processor is not required to support recursive grammars, that is, grammars in which rule references include direct or indirect self-reference. There is, however, no conformance requirement with respect to performance characteristics of the XML Form Grammar Processor. For instance, no statement is required regarding the accuracy, speed or other characteristics of a speech recognizer or DTMF detector. No statement is made regarding the size of grammar or size of grammar vocabulary that an XML Form Grammar Processor must support. An ABNF grammar document is a Conforming ABNF Document if it adheres to the specification described in this document (Speech Recognition Grammar Specification) including the Formal ABNF Specification. An ABNF Grammar Processor is a program that can parse and process ABNF grammar documents. Examples include speech recognizers and DTMF detectors that accept the ABNF Form. A Conforming ABNF Grammar Processor must correctly understand and apply the semantics of each possible grammar feature defined by this document. A Conforming ABNF Grammar Processor must follow the same language handling requirements as outlined in Section 5.4 for Conforming XML Form Grammar Processors. A Conforming ABNF Grammar Processor should inform its hosting environment if it encounters an illegal grammar document or other grammar content that it is unable to process. A Conforming ABNF Grammar Processor is not required to support recursive grammars, that is, grammars in which rule references include direct or indirect self-reference. There is, however, no conformance requirement with respect to performance characteristics of the ABNF Grammar Processor. For instance, no statement is required regarding the accuracy, speed or other characteristics of a speech recognizer or DTMF detector. No statement is made regarding the size of grammar or size of grammar vocabulary that an ABNF Grammar Processor must support. A Conforming ABNF/XML Form Grammar Processor must meet all the conformance criteria defined in Section 5.4 and in Section 5.6. Additionally an ABNF/XML Form Grammar Processor must be able to resolve and apply references from XML Form Grammars to ABNF Form Grammars, and references from ABNF Form Grammars to XML Form Grammars. A conforming user agent is a Conforming XML Form Grammar Processor, Conforming ABNF Form Grammar Processor or Conforming ABNF/XML Form Grammar Processor that is capable of accepting user input of the mode of a grammar (i.e. speech input in "voice" mode, DTMF input "dtmf" mode) and: Current speech recognition technology is statistically based. Since the output is not deterministic and cannot be guaranteed to be a correct representation of the input there is no conformance requirement regarding accuracy. A conformance test may, however, require some examples of correct recognition of speech input to determine conformance. This document was written with the participation of the members of the W3C Voice Browser Working Group (listed in alphabetical order): This appendix is Informative. The grammar DTD is located at This appendix is Normative. The grammar schema is located at Note: the grammar schema includes the no-namespace core schema (below). The no-namespace core schema for grammars is located at. It may be used as a basis for specifying XML Form Grammar Fragments embedded in non-grammar namespace schemas. This appendix is Normative. The notation used here follows the EBNF notation (Extended Backus-Naur Form) defined in the XML 1.0 Recommendation [XML §6]. The white space handling of the ABNF Form follows white space and end-of-line handling of XML (see Section 1.6). Lexical Grammar for ABNF The lexical grammar defines the lexical tokens of the ABNF format and has single characters as its terminal symbols. As a consequence neither whitespace characters nor ABNF comments are allowed in lexical tokens unless explicitly specified. SelfIdentHeader ::= '#ABNF' #x20 VersionNumber (#x20 CharEncoding)? ';' [Additional constraints: - The semicolon (';') must immediately be followed by an end-of-line. ] VersionNumber ::= '1.0' CharEncoding ::= Nmtoken BaseURI ::= ABNF_URI LanguageCode ::= Nmtoken [Additional constraints: - The language code must be a valid language identifier. ] RuleName ::= '$' ConstrainedName ConstrainedName ::= Name - (Char* ('.' | ':' | '-') Char*) TagFormat ::= ABNF_URI LexiconURI ::= ABNF_URI | ABNF_URI_with_Media_Type SingleQuotedCharacters ::= ''' [^']* ''' DoubleQuotedCharacters ::= '"' [^"]* '"' QuotedCharacters ::= SingleQuotedCharacters | DoubleQuotedCharacters Weight ::= '/' Number '/' Repeat ::= [0-9]+ ('-' [0-9]*)? [Additional constraints: - A number to the right of the hyphen must not be greater than the number to the left of the hyphen. ] Probability ::= '/' Number '/' [Additional constraints: - The float value must be in the range of "0.0" to "1.0" (inclusive). ] Number ::= [0-9]+ | [0-9]+ '.' [0-9]* | [0-9]* '.' [0-9]+ ExternalRuleRef ::= '$' ABNF_URI | '$' ABNF_URI_with_Media_Type [Additional constraints: - The referenced grammar must have the same mode ("voice" or "dtmf") as the referencing grammar. - If the URI reference contains a fragment identifier, the referenced rule must be a public rule of another grammar. - If the URI reference does not contain a fragment identifier, i.e. if it is an implicit root rule reference, then the referenced grammar must declare a root rule. ] Token ::= Nmtoken | DoubleQuotedCharacters LanguageAttachment ::= '!' LanguageCode Tag ::= '{' [^}]* '}' | '{!{' (Char* - (Char* '}!}' Char*)) '}!}' ------------------------------------------------------------ ABNF_URI and ABNF_URI_with_Media_Type are defined in Section 1.6 Terminology. Name is defined by the XML Name production [XML §2.3]. Nmtoken is defined by the XML Nmtoken production [XML §2.3]. NameChar is defined by the XML NameChar production [XML §2.3]. Char is defined by the XML Char production [XML §2.2]. Note: As mentioned in Section 2.5 the symbols "*", "+" and "?", which are often used in regular expression languages, are reserved for future use in ABNF and must not be used at any place in a grammar where the syntax currently permits a repeat operator. Syntactic Grammar for ABNF The syntactic grammar has lexical tokens defined by the lexical grammar as its terminal symbols. Between two lexical tokens any number of white spaces or ABNF comments may appear. grammar ::= SelfIdentHeader declaration* ruleDefinition* declaration ::= baseDecl | languageDecl | modeDecl | rootRuleDecl | tagFormatDecl | lexiconDecl | metaDecl | tagDecl baseDecl ::= 'base' BaseURI ';' [Additional constraints: - A base declaration must not appear more than once in grammar. ] languageDecl ::= 'language' LanguageCode ';' [Additional constraints: - A language declaration must not appear more than once in grammar. - A language declaration is required if the grammar mode is "voice". ] modeDecl ::= 'mode' 'voice' ';' | 'mode' 'dtmf' ';' [Additional constraints: - A mode declaration must not appear more than once in grammar. ] rootRuleDecl ::= 'root' RuleName ';' [Additional constraints: - A root rule declaration must not appear more than once in grammar. - The root rule must be a rule that is defined within the grammar. ] tagFormatDecl ::= 'tag-format' TagFormat ';' [Additional constraints: - A tag-format declaration must not appear more than once in grammar. ] lexiconDecl ::= 'lexicon' LexiconURI ';' metaDecl ::= 'http-equiv' QuotedCharacters 'is' QuotedCharacters ';' | 'meta' QuotedCharacters 'is' QuotedCharacters ';' tagDecl ::= Tag ';' ruleDefinition ::= scope? RuleName '=' ruleExpansion ';' [Additional constraints: - The rule name must be unique within a grammar, i.e. no rule must be defined more than once within a grammar. ] scope ::= 'private' | 'public' ruleExpansion ::= ruleAlternative ( '|' ruleAlternative )* ruleAlternative ::= Weight? sequenceElement+ sequenceElement ::= subexpansion | subexpansion repeatOperator subexpansion ::= Token LanguageAttachment? | ruleRef | Tag | '(' ')' | '(' ruleExpansion ')' LanguageAttachment? | '[' ruleExpansion ']' LanguageAttachment? ruleRef ::= localRuleRef | ExternalRuleRef | specialRuleRef localRuleRef ::= RuleName [Additional constraints: - The referenced rule must be defined within the same grammar. ] specialRuleRef ::= '$NULL' | '$VOID' | '$GARBAGE' repeatOperator ::= '<' Repeat Probability? '>' This appendix is Normative. This section defines a normative representation of a grammar consisting of DTMF tokens. A DTMF grammar can be used by a DTMF detector to determine sequences of legal and illegal DTMF events. All grammar processors that support grammars of mode "dtmf" must implement this Appendix. However, not all grammar processors are required to support DTMF input. If the grammar mode is declared as "dtmf" then tokens contained by the grammar are treated as DTMF tones (rather than the default of speech tokens). There are sixteen (16) DTMF tones. Of these twelve (12) are commonly found on telephone sets as the digits "0" through "9" plus "*" (star) and "#" (pound). The four DTMF tones not typically present on telephones are "A", "B", "C", "D". Each of the DTMF symbols is a legal DTMF token in a DTMF grammar. As in speech grammars, tokens must be separated by white space in a DTMF grammar. A space-separated sequence of DTMF symbols represents a temporal sequence of DTMF entries. In the ABNF Form the "*" symbol is reserved so double quotes must always be used to delimit "*" when defining an ABNF DTMF grammar. It is recommended that the "#" symbol also be quoted. As an alternative the tokens "star" and "pound" are acceptable synonyms. In any DTMF grammar any language declaration in a grammar header is ignored and any language attachments to rule expansions are ignored. In all other respects a DTMF grammar is syntactically the same as a speech grammar. For example, DTMF grammars may use rule references, special rules, tags and other specification features. The following is a simple DTMF grammar that accepts a 4-digit PIN followed by a pound terminator. It also permits the sequence of "*" followed by "9" (e.g. to receive a help message). #ABNF 1.0 ISO-8859-1; mode dtmf; $digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9; public $pin = $digit <4> "#" | "*" 9; <?xml version="1.0"?> <grammar mode="dtmf" version="1.0" xmlns: <rule id="digit"> <one-of> <item> 0 </item> <item> 1 </item> <item> 2 </item> <item> 3 </item> <item> 4 </item> <item> 5 </item> <item> 6 </item> <item> 7 </item> <item> 8 </item> <item> 9 </item> </one-of> </rule> <rule id="pin" scope="public"> <one-of> <item> <item repeat="4"><ruleref uri="#digit"/></item> # </item> <item> * 9 </item> </one-of> </rule> </grammar> This appendix is Informative. The transformation provided below is illustrative of the conversion of an XML Form grammar to the Augmented BNF Form. Known limitations: The source for this transformation is located at. This appendix is Informative. The W3C Voice Browser Working Group has applied to IANA to register a media type each for the ABNF Form and XML Form of this Speech Recognition Grammar Specification. The ABNF media type identifies ABNF grammars. The media type applied for is "application/srgs". Similarly, the XML Form grammar media type identifies XML Form grammars. The media type applied for is "application/srgs+xml". The W3C Voice Browser Working Group has adopted the convention of using the ".gram" filename suffix for ABNF grammar documents and the ".grxml" filename suffix for XML Form grammar documents. This appendix is Informative. This section defines an informative representation of a parsed result of speech recognition or other user agent processing. This representation may be used as the basis for subsequent processing of user input, in particular, semantic interpretation. For instance, the W3C Semantic Interpretation for Speech Recognition specification [SEM] is defined around the logical parse structure. This Appendix adopts the terminology and nomenclature of Introduction to Automata Theory, Languages, and Computation [HU79]. Denote the tokens of the alphabet of all tokens accepted by a grammar as t1, t2.... An input or output token sequence is a space separated string of tokens. The logical parse structure contains white-space-normalized tokens. The tokens in the logical parse structure are optionally delimited by double quotes so that white space and others characters can be parsed unambiguously. e.g. t1,t2,"t3 with space". (For consistency, all examples in this Appendix include double quotes.) Let ε (epsilon) or "" denote the unique string of length 0, also known as the empty string. Denote the tags of the alphabet of all tags accepted by a grammar as {tag1}, {tag2}, .... Denote a legal expansion as E. (A legal expansion is defined in Section 2.) The expressive power of a rule expansion is a Regular Expression (see HU79) and has an equivalent Finite Automaton (see HU79). [The handling of rule references requires special treatment: see Section H.2.] The expressive power of the grammar specification consists of: We formalize the logical parse structure by creating a Finite Automaton with Output (see HU79). This construct is also referred to as a Finite State Transducer. We define the transitions for tokens and tags as producing an output symbol. We represent parse output as an ordered array of output entities: [e1,e2,e3,...]. An entity e may be a token, a tag or a rule expansion (see H.2). The empty output array is represented as [ε] or simply []. A $NULL reference is equivalent to a transition that accepts as input ε and produces as output ε. In the notation of HU79: ε/ε. A $VOID reference is logically equivalent to a missing transition. It accepts no input and produces no output. A $GARBAGE reference is equivalent to a transition that accepts platform specific input and produces as output ε. An ambiguity occurs when for a specified sequence of input tokens matched to a specified rule of a grammar there is more than one distinct logical parse structure that can be produced. An ambiguity can occur at points of disjunction (choice) in a grammar. Disjunction exists with the use of alternatives and repeats. A grammar processor may preserve any number of ambiguous logical parse structures to create a set of alternative logical parse structures for the input. It is legal for a grammar processor to maintain all possible logical parse structures or to dispose of all but one of the alternatives. There is no specified behavior for selection of ambiguities amongst possibilities by a grammar processors. As a result grammars that contain ambiguity do not guarantee portability of performance. Developers and grammar tools should be ambiguity-aware. This Appendix does not illustrate all forms of ambiguous expansions but provides examples of some of the form common forms. Matching a token to a token produces an array of 1 token A $NULL reference is matched by an empty input sequence and output is an empty array. A tag is matched by an empty input sequence and output is an array of 1 tag Concatenation: An expansion consisting of a token and a tag is matched by input containing the token and produces as output a token, tag array. Concatenation: an expansion consisting of a sequence of tokens, tags and $NULLs is matching by input that consists of the contained tokens. Output consists of the sequence of tokens and tags with order preserved. e.g. Parenthetical structure is not preserved in the result. The following is the same sequence as the previous example but with parentheticals added to the expansion definition. Alternatives: a set of many alternative tokens is matched by input of a single token and produces as output a single token. Alternatives: if any single expansion in a set of alternatives can be matched by null input then the set of alternatives may be matched by null input and the output is the output of null-accepting expansion. ($NULL, {tag} and repeat counts of zero all permit null input.) With a different null-accepting expansion: Alternatives and ambiguity: several examples of ambiguous expansions with the ambiguity arising from alternatives that accept the same input but produce different output. In this example null input is ambiguous. The following is not ambiguous because the different paths through the expansion produce the same output. Repeats: an optional expansion can be either matched by an empty token sequence or by any token sequence that matches the expansion contained within the optional. Repeats: order is preserved upon multiple expansions. Repeats and null input: If the contents of an optional expansion can be matched by an empty input sequence AND the output of matching the contained expansion is always an empty array then the output of matching the optional expansion by an empty sequence is also an empty array. Ambiguous repeats: If a repeated or optional expansion can be matched by an empty input sequence BUT the output of matching the contained expansion may contain tags then the parse is ambiguous. It is recommended that the parse be minimal: Output 1 is preferred. A similar ambiguity arises if the repeated expansion contains a alternative expansion that has a null-accepting expansion. A sequence with two repeat expansion can be ambiguous if the two repeated expansions can accept the same input but produce different output. A rule reference is a legal rule expansion (see Section 2.2). We denote output obtained by matching the token sequence "t1,t2,..." against the expansion $rulename as $rulename[e1,e2,...] where "e1,e2,..." is the entity sequence obtained by matching that token sequence against the rule expansion defined for $rulename. Where a rule reference to an external rule is used the ABNF syntax for the rule reference is used (without any media type). For example, $<">[e1,e2,...] or an implicit root rule reference $<">[e1,e2,...]. For brevity, all the examples below use only local rule references. The rulename of the top-level rule should enclose the logical parse structure. A distinct structure for matching rule references maintains the parse tree for the result. This structure may be utilized in the semantic interpretation process or other computational processes that derive from the parse output structure. There is no distinction between local rule references (within the same grammar) and external rule references. There is no distinction between a root reference and a reference to a named grammar. The following is a simple rule reference example. The following is a rule reference in sequence. The following includes a reference to a rule that outputs a tag. Multiple references to the same rule are permitted. Rule references may be repeated. The Speech Recognition Grammar Specification has the expressive power of a Context Free Grammar. This arises because the language permits a rule to directly or indirectly reference itself. [Note: a Conforming XML Form Grammar Processor or Conforming ABNF Form Grammar Processor is not required to support recursive grammars.] There is no distinct representation for a recursive rule reference. Simple right recursion. Note: this grammar can be written in a non-recursive (regular expression) form. Embedded recursion. Note that this matches any sequence of n t1's followed by n t2's. This appendix is Informative. The following features are under consideration for versions of the Speech Recognition Grammar Specification after version 1.0: This appendix is Informative. The following shows a simple grammar that supports commands such as "open a file" and "please move the window". It references a separately-defined grammar for politeness which is not shown here. ABNF Form#ABNF 1.0 UTF-8; language en; mode voice; root $basicCmd; meta "author" is "Stephanie Williams"; /** * Basic command. * @example please move the window * @example open a file */ public $basicCmd = $<> $command $<>; $command = $action $object; $action = /10/ open {TAG-CONTENT-1} | /2/ close {TAG-CONTENT-2} | /1/ delete {TAG-CONTENT-3} | /1/ move {TAG-CONTENT-4}; $object = [the | a] (window | file | menu); XML Form<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <grammar xmlns="" xml: <meta name="author" content="Stephanie Williams"/> <rule id="basicCmd" scope="public"> <example> please move the window </example> <example> open a file </example> <ruleref uri=""/> <ruleref uri="#command"/> <ruleref uri=""/> </rule> <rule id="command"> <ruleref uri="#action"/> <ruleref uri="#object"/> </rule> <rule id="action"> <one-of> <item weight="10"> open <tag>TAG-CONTENT-1</tag> </item> <item weight="2"> close <tag>TAG-CONTENT-2</tag> </item> <item weight="1"> delete <tag>TAG-CONTENT-3</tag> </item> <item weight="1"> move <tag>TAG-CONTENT-4</tag> </item> </one-of> </rule> <rule id="object"> <item repeat="0-1"> <one-of> <item> the </item> <item> a </item> </one-of> </item> <one-of> <item> window </item> <item> file </item> <item> menu </item> </one-of> </rule> </grammar> These two grammars illustrate referencing between grammars. The same grammar is shown in both XML Form and ABNF Form. ABNF: 1.0 ISO-8859-1; language en; mode voice; root $city_state; public $city = Boston | Philadelphia | Fargo; public $state = Florida | North Dakota | New York; // References to local rules // Artificial example allows "Boston, Florida!" public $city_state = $city $state; ABNF: 1.0 ISO-8859-1; language en; mode voice; // Reference by URI syntax public $flight = I want to fly to $<>; // Reference by URI syntax public $exercise = I want to walk to $<>; // Implicit reference to root rule by URI public $wet = I want to swim to $<>; XML Form Grammar:<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <grammar xmlns="" xmlns: <rule id="city" scope="public"> <one-of> <item>Boston</item> <item>Philadelphia</item> <item>Fargo</item> </one-of> </rule> <rule id="state" scope="public"> <one-of> <item>Florida</item> <item>North Dakota</item> <item>New York</item> </one-of> </rule> <!-- Reference by URI to a local rule --> <!-- Artificial example allows "Boston, Florida"! --> <rule id="city_state" scope="public"> <ruleref uri="#city"/> <ruleref uri="#state"/> </rule> </grammar> XML Form Grammar:<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <grammar xmlns="" xmlns: <!-- Using URI syntax --> <rule id="flight" scope="public"> I want to fly to <ruleref uri=""/> </rule> <!-- Using URI syntax --> <rule id="exercise" scope="public"> I want to walk to <ruleref uri=""/> </rule> <!-- Implicit reference to root rule of a grammar by URI --> <rule id="wet" scope="public"> I want to swim to <ruleref uri=""/> </rule> </grammar> The following two grammars are XML Form grammars with Korean yes/no content. The first represents the Korean symbols as Unicode characters and has UTF-8 encoding. The second represents the same Unicode characters using character escaping. ABNF Form Grammar with Unicode Characters in UTF-8 Encoding#ABNF 1.0 UTF-8; language ko; mode voice; root $yes_no_ko; /* * Simple Korean yes/no grammar * @example 예 */ public $yes_no_ko = 예 | 아니오 ; XML Form Grammar with Unicode Characters in UTF-8 Encoding<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <grammar xml: <!--> XML Form Grammar with Character Escaping of Unicode Characters<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <grammar xml: <!--> The following two grammars are XML Form grammars with Chinese number content. The first represents the Chinese symbols as Unicode characters with the UTF-8 encoding. The second represents the same Unicode characters using character escaping. ABNF Form Grammar with Unicode Characters in UTF-8 Encoding#ABNF 1.0 UTF-8; language zh; mode voice; root $main; public $main = $digits1_9; /* * @example 四 */ private $digits1_9 = 一 | 二 | 三 | 四 | 五 | 六 | 七 | 八 | 九; XML Form Grammar with Unicode Characters in UTF-8 Encoding<?xml version="1.0" encoding="UTF-8"?> <> XML Form Grammar with Character Escaping of Unicode Characters<?xml version="1.0" encoding="ISO-8859-1"?> <> This Swedish XML Form grammar provides a comprehensive set of forms of "yes" and "no". All characters are contained within the ISO-8859-1 (Latin-1) character set. XML Form Grammar with ISO-8859-1<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <grammar version="1.0" xmlns="" xmlns: <rule id="main" scope="public"> <example>ja det är rätt</example> <example>nej det är fel</example> <one-of> <item> <ruleref uri="#yes_rule"/> </item> <item> <ruleref uri="#no_rule"/> </item> </one-of> </rule> <rule id="yes_rule" scope="private"> <example>ja det är rätt</example> <one-of> <item>exakt</item> <item>javisst</item> <item> ja <item repeat="0-1"> <ruleref uri="#yes_emphasis"/> </item> </item> <item>jepp</item> <item>korrekt</item> <item>okej</item> <item>rätt</item> <item>si</item> <item>säkert</item> <item>visst</item> </one-of> </rule> <rule id="yes_emphasis" scope="private"> <example>det stämmer</example> <one-of> <item>det gjorde jag</item> <item> <item repeat="0-1">det</item> stämmer </item> <item>det är rätt</item> <item>det är korrekt</item> <item>det är riktigt</item> </one-of> </rule> <rule id="no_rule" scope="private"> <example>nej det är fel</example> <one-of> <item>icke</item> <item>fel</item> <item> nej <item repeat="0-1"> <ruleref uri="#no_emphasis"/> </item> </item> <item>nix</item> <item>no</item> </one-of> </rule> <rule id="no_emphasis" scope="private"> <example>det är fel</example> <one-of> <item>det gjorde jag inte</item> <item> <item repeat="0-1">det</item> stämmer inte </item> <item>det är fel</item> <item>absolut inte</item> <item>inte alls</item> </one-of> </rule> </grammar>
http://www.w3.org/TR/2003/PR-speech-grammar-20031218/
crawl-002
en
refinedweb
See also: IRC log <ivan> Slides for presentation [ Ivan introduces Raphael ] <RalphS> Multimedia Semantics Incubator Group <RalphS> Cover Slide <DanC> (which face is Raphael's?) <RalphS> Raphael: I am in the center with my hand on my belt <RalphS> [slide 2 - MMSem Pointers] <RalphS> kudos for using the public mailing list for most of the discussion! <RalphS> [slide 3 - MMSem Goals] <RalphS> [slide 4 - MMSem Activities] <RalphS> [slide 5 - MMSem Deliverables] <RalphS> Multimedia Semantics Incubator Group Patent Policy Status <RalphS> [slide 6 - Image Annotation on the Semantic Web] <RalphS> [slide 7 - Image Annotation on the Semantic Web] <RalphS> [slide 8 - Multimedia Annotation Interoperability Framework] <RalphS> [slide 9 - Multimedia Annotation Interoperability Framework] <RalphS> [slide 10 - Managing Personal Photos] <RalphS> [slide 11 - Facetting Music Songs] [slide 12 - Bringing NewsML in the Semantic Web] [slide 13 - Managing your Web 2.0 Personomy] [slide 14 - MPEG-7 and the Semantic Web] <IanJ> "It's an ISO standard, making it difficult to use." [slide 15 - Multimedia Vocabularies on the Semantic Web] [slide 16 - Multimedia Semantics: Relevant Tools and Resources] [slide 17 - Liaison] [slide 18 - MMSem Success] [slide 19 - MMSem Future] <RalphS> Raphael: we hope to submit a new XG charter to W3C in mid-July <RalphS> ... and would like to resume work in September --- Q&A session--- Raphael: at the XG we used some XML-based schema ... and the question is if W3C should publish new ontologies based on these schemas ... and what we should do with that? Ralph: thanks Raphael for an informative session ... have the XG considered publishing the ontologies in XG work space? Raphael: yes, we have <DanC> URIs for W3C Namespaces Raphael: ... but we are not sure DanC: wich community cares for continuing with this activity? Ralph: the other interesting question is about maintainance ... and who would do it ... and I also wonder about the output of the XG ... what is the challenge for W3C about this output? <DanC> DanC: if the community around the ontology is the XG, then the XG is more than welcome to publish the ontology at /YYYY/MM/blah-blah , per Raphaelif we publish an ontology, should we have just the OWL or a document also? <ivan> +1 Raphael: the feeling within the group is not to follow a Rec track now (could be a target in another year) ... we would rather try to see if the existing technologies are enough ... we think they are not <RalphS> Ralph: the consensus of both the SemWeb Best Practices WG and the SemWeb Deployment WG, as documented in the Vocabulary Management Working Draft(s) is that both forms -- the machine-readable OWL and the human-readable HTML are desirable <Zakim> DanC, you wanted to note that the level of endorsement around a namespace is normally orthogonal to its URI DanC: notes that the level of endorsement around a namespace is normally orthogonal to its URI Ralph: is useful for the XG to point out what things can be re-used ... the XG can publish its work, new ontologies ... but what part of the community interested would like to see the REC stamp on it? Raphael: we wondered also about that within the group ... about the formality, I am not sure they fully understand the W3C process ... currently we don't need to go to the formality of a WG but we might need it in the future <Zakim> DanC, you wanted to ask about the "OWL expression of other work" case; remind me of one example? did the XG negotiate rights to do a derivative work? is the original org interested <IanJ> I note that Raphael just uttered one of the things we are concerned about: no distinction between TR and xG report <RalphS> Raphael: IPTC pretty much happy with things being published somewhere on w3.org <RalphS> ... DIG35 not sure what level of formality we might need DanC:do we have the right for derivative work (related to mpeg-7)? Raphael: yes, we have rights to do derivative work ... I am not sure if we can publish this ontology, I need to check on that <RalphS> Raphael: we made an official liaison with ISO and they're commenting on our work DanC: is either of those organizations interested on publishing the OWL? <raphael> I3A: I do not know <RalphS> Raphael: ISO not interested in publishing OWL International Imaging Industry Association (I3A) Ivan: who is in discussions with them? <RalphS> ... not sure if I3A is interested, but they're OK with W3C publishing when an agreement is signed Raphael: Daniel Dardailler is involved on it <SusanL> <RalphS> Raphael: our other question is how to get more industry involvement in follow-on work <DanC> (which row in ? a text search for "multimedia" fails) <SusanL> search for Raphael :-) <DanC> (ah... ) Ivan: my initial reaction is that you should try to identify those members that would like to be involve, and then they should contact their AC Rep ... I would be happy to do such contacts <raphael> yes DanC Raphael: ok <RalphS> Ian: we've discussed the potential of confusion between different kinds of TRs; in particular RECs and XG reports Raphael: yes, sometimes <RalphS> ... is it the case that [some of the liaisons] really don't care about the difference? <RalphS> Raphael: they don't seem to care Raphael: to participate in the group was a great experience, time consuming but great Kaz: thanks for a good presentation, it was a pity I couldn't see the presentation at WWW2007, I attended W3C Track ... the Multimodal Interaction WG, which develops MMI Architecture etc., would like to collaborate Raphael: that would be great <kaz> Ivan: when do you plan to finally publish all the documents? <SusanL> <SusanL> (public page is) Raphael: next week we will send it out <IanJ> <SusanL> Yes we will be happy to make the publication news. Ivan: can you provide us with a paragraph for the publication? ... SusanL will then *polish* it Raphael: yes, will do IanJ: is it ok to set expectation for next steps on that publication? <IanJ> leave open public list <IanJ> close member list IanJ: what happen with mailing lists, for example? <IanJ> close XG at the same time as publication <IanJ> leave wiki open <IanJ> also, please draft an email for the ac with expectations about next steps Raphael: yes, the public mailing list stays open, member mailing list closes, wiki stays open, the reference to the XG dissapears Ivan: thanks Raphael for your presentation <RalphS> ACTION: Raphael review the record to be sure we accurately recorded his references to interests of other organizations [recorded in] Ivan: thanks everybody, motion to adjourn MEETING ADJOURNED
http://www.w3.org/2007/06/14-muse-minutes.html
crawl-002
en
refinedweb
Copyright ©2001 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply. This document specifies the "decryption transform", which enables XML Signatures verification even if both signature and encryption operations are performed on an XML document. This is the first draft of the "Decryption Transform for XML Signature" from the XML Encryption Working Group (Activity). Comments and implementation experience of this proposal are (<imamu@jp.ibm.com>, <maruyama@jp.ibm.com>) and cc: the list xml-encryption@w3.org(archives) Patent disclosures relevant to this specification may be found on the Working Group's patent disclosure page in conformance with W3C policy. It has been noted by David Solo in [Solo] that both signature [XML-Signature] and encryption [XML-Encryption] operations may be performed on an XML document at any time and in any order, especially in scenarios such as workflow. For example, suppose a scenario that Alice has Bob send money to her by postal transfer. Alice draws up a document including a statement, "Bob pays $100 to Alice", and the number of her bank account and sends it to Bob. The number is encrypted with a bank's public key for guarding her privacy. Then, Bob includes the number of his bank account in the document, signs it for proving that he approves the statement, and sends it to the bank. The number is also encrypted. As a result, encryption, signature, and encryption are performed in this order on the document the bank receives. [XML-C14N] or XPath . This document makes use of the XML Encryption [XML-Encryption] and XML Signature [XML-Signature] namespaces with the following prefixes: xmlns:enc="" xmlns:ds="" The XML Encryption namespace is also used as the prefix for an algorithm identifier defined in this document. While applications MUST support XML and XML namespaces, the use of our " enc" and " ds" XML namespace prefixes is OPTIONAL; we use this facility to provide compact and readable exposition. This transform takes as a parameter a list of references to encrypted portions that are not to be decrypted by this transform. These references are expressed by enc:DataRef elements that appear as the direct child elements of the ds:Transform element. This transform requires an XPath node-set [XPath] for input. If an octet stream is given as input, it must be converted to a node-set as described in 4.3.3.2 The Reference Processing Model of the XML Signature specification [XML-Signature]. This transform decrypts all the enc:EncryptedData elements (as defined in the forthcoming XML Encryption standard [XML-Encryption]) except for those specified by enc:DataRef elements. The output of this transform is a node-set. In order to help an unambiguous definition of the exact semantics of this transform, we define the following two functions: enc:EncryptedDatain X. <dummy>and </dummy>) as described in [Tobin]. enc:DataReferenceelements in R in the context of the node-set X. N is a set of nodes referenced by R. This transform performs the following steps: enc:DataReferenceelements given as the parameter of this transform. EncryptedDataand whose namespace URI is the one defined in XML Encryption [XML-Encryption], such that e is not a member of noDecryptNodes(X,R). If such e cannot be selected, the algorithm terminates and the result of the transformation is X. Note: this transform does not deal with any detached enc:EncryptedKey elements. When an enc:EncryptedData element is decrypted, some enc:EncryptedKey elements detached from the enc:EncryptedData element have to be removed if the enc:EncryptedKey elements are in the scope of signature being validated. However, it is unclear how this transform should deal with the enc:EncryptedKey elements, and hence it is not recommended in this document to detach enc:EncryptedKey elements from an enc:EncryptedData element or to include detached enc:EncryptedKey elements in the scope of signature. It is out of scope of this document how to create a ds:Transform element and where to insert it in a transform sequence. In this section, we just show a way to create the element as advisory. A ds:Transform element can be created by the following steps: enc:EncryptedData, create an enc:DataReferenceelement referencing the node. ds:Transformelement, including the algorithm identifier of this transform and all the enc:DataRefelements created in Step 3. Suppose the following XML document is to be signed. Note that the part of this document ( [12]) is already encrypted prior to signature. In addition, the signer anticipates that some parts of this document, for example, the cardinfo element ( [07-11]) will be encrypted after signing. [01] <order Id="order"> [02] <item> [03] <title>XML and Java</title> [04] <price>100.0</price> [05] <quantity>1</quantity> [06] </item> [07] <cardinfo> [08] <name>Your Name</name> [09] <expiration>04/2002</expiration> [10] <number>5283 8304 6232 0010</number> [11] </cardinfo> [12] <EncryptedData Id="enc1" xmlns="">...</EncryptedData> [13] </order> In order to let the recipient know the proper order of decryption and signature verification, the signer include the decryption transform ( [06-08] below) in the signature. Assuming that an additional encryption is done on the cardinfo element ( [22]), the recipient would see the following encrypt-sign-encrypt document: [01] <Signature xmlns=""> [02] <SignedInfo> [03] ... [04] <Reference URI="#order"> [05] <Transforms> [06] <Transform Algorithm=""> [07] <DataReference URI="#enc1" xmlns=""/> [08] </Transform> [09] <Transform Algorithm=""/> [10] </Transforms> [11] ... [12] </Reference> [13] </SignedInfo> [14] <SignatureValue>...</SignatureValue> [15] <Object> [16] <order Id="order"> [17] <item> [18] <title>XML and Java</title> [19] <price>100.0</price> [20] <quantity>1</quantity> [21] </item> [22] <EncryptedData Id="enc2" xmlns="">...</EncryptedData> [23] <EncryptedData Id="enc1" xmlns="">...</EncryptedData> [24] </order> [25] </Object> [26] </Signature> The recipient should first look at the Signature element ( [01-26]) for verification. It refers to the order element ( [16-24]) with two transforms: decryption ( [06-08]) and C14N ( [09]). The decryption transform instructs the signature verifier to decrypt all the encrypted data except for the one specified in the DataRef element ( [07]). After decrypting the EncryptedData in line [22], the order element is canonicalized and signature-verified. When this algorithm is used to permit subsequent encryption of data already signed, the digest value of the signed resource still appears in clear text in a ds:Reference element. As noted by Hal Finney in [Finney], such a signature may reveal information (via the digest value) over encrypted data that increases the encryption's vulnerabaility to plain-text-guessing attacks. This consideration is out of scope of this document and (if relevant) should be addressed by applications. For example, as proposed by Amir Herzberg in [Herzberg], one may include a random 'salt' in a resource being signed to increase its entropy. Another approach is that when a signature referent is encrypted, one may also encrypt the signature (or at least the ds:DigestValue elements). As noted by Joseph Reagle in [Reagle], this latter solution works only if signature and encryption are well known by each other. For example, the signature may not be known of because it is detached. Or, it may it's already encrypted! Consider, Alice Encrypts element A and the Signature over the parent of A. Bob Encrypts element B (sibling of A) but not the Signature since he doesn't know about it. Alice then decrypts A and it's Signature, which may provide information to a subsequent plain text attack on the encrypted B.
http://www.w3.org/TR/2001/WD-xmlenc-decrypt-20010626
crawl-002
en
refinedweb
Joy, frustration, excitement, madness, aha's, headaches, ... codito ergo sum! This is just a test comment... ;-) Pretty cool stuff. I helped design a "transparent prevalence" engine that sits on top and automatically builds the command objects when using a contextbound objects (might be in there now). Still, I prefer a solid relational DB, but it is fun for experimental stuff. --Jesse Jan, Sounds like a sweet project. I myself wanted to take a look at it but there were no files to download and evaluate. Also, I know a manager @ Cox Communications who would love this. He wants to automate the builds of all of his projects. Is there anyway I could look at it. Regards Mathew Nolton More about Hippo.NET : Jan Tielens' Bloggings Hippo.NET, sounds like a killer tool! : Paul Gielens Blog So would the loop variable still be valid after the loop completes, or is the variable out-of-scope outside the loop? For Each item As Customer In CustomerCollection ' Do Stuff Is item valid here? Also, what if the outer scope also has an item variable, like this? Would this be a compile-time error? Dim item As Integer ' Do Some Stuff For Each item As Customer In CustomerCollection ' Do Stuff So would the loop variable still be valid after the loop completes, or is the variable out-of-scope outside the loop? -> The variable is out-of-scope outsite the loop. Also, what if the outer scope also has an item variable, like this? Would this be a compile-time error? -> Yes, this would result in an compile error. Looks excellent. A minor point. In the second screenshot it says builded. It should be built. Hi, I have created the same Macro for C# on. There are also some other macros for adding try/catch/finally blocks and using blocks. The SortCode marco might be a bit buggy. Regards, Fons ShowUsYour-Blog! More on VS.NET macros : Chad Osgood's Blog Cool Macros Snipped : ShowUsYour-Blog! Macro-economics : Jimski's Blog I've also added some comments about this to my blog if you are interested in reading them at. VERY slick! That one is going in my library :) Don't forget to add a second dataset with the column schema matching your xml elements. Then you can merge the two. AND don't forget caching the results so as not to put too big a burden on the rss server :) I've not used the SoapExtension class before, but I have done encrypted web services with Web Service Enhancements (WSE). You can download it and read articles here: It's not quite as easy as just adding encrypt:=EncryptMode.Response) to the message, but it follows the WS-Security spec and should interop with other implementations. -Dustin When testing SoapExtension, do not use IE (because it uses the GET protocol), create a simple client in vs.net (it could be console app) and test (that will use SOAP protocol) Jan Tielens' Bloggings This is nothing compared to the type of completion you can get from shells like bash under linux or cygwin, but it's definitely something. I wonder why it isn't turned on by default though (remembering the key is a pain)... See ya, Dumky You can use TweakUI to add this, and it's enabled by default in WinXP. congrats! I'm getting married on the 18th of July. Wes Gefeliciteerd, Jan! I married on May-30th, 2002, and I can honestly say: it's great. :) As a tip: I created a website where every guest of our wedding could upload his/her pictures. This was a big success, others could now fast and easy see other people's pictures about the wedding. If it's not too much work, I'd surely do that (make it protected with a username/password of course :)) You went from Overpelt to Brussels... guess I could attended as well. Attributes! Which attributes ;-) Thx Jan Hi Jan. I'm one of the guys at Microsoft who works on the Power Toys. I wanted to let you know a little bit about the PowerToys effort. I promise that we didn't work on these projects to displace third party tools. I actually think that all of the commenting tools you mention have their plusses and minuses. And at the moment some are more polished than the one our team released. One of the motivations included giving people samples from Microsoft for extending the shell. You can never have too many samples. Another is that we just thought it would be a cool project to work on and provide a forum for other developers in the community to also work on. Actually, we’ve been in contact with the writer of the VBXC tool you mentioned and he has recently joined our workspace project on gotdotnet that allows for other people to modify the code. If your interested feel free to check out our workspaces.. Thanks, josh A bonus of a lot of the third party tools is that they work in VS 2002 and 2003. The powertoys are just for 2003. It is opensource as you say so I do wonder how hard a 2002 port would be. Good one! Nice. Thanks By the way, this only works in VS 2003. In VS 2002 it chokes on ass = Reflection.Assembly.LoadFile(ref.Path) LoadFile is not a member of Assembly. Jan, This sounds like an interesting little project. Any plans for NAnt support? You must add at least a little more information for a quick start if you want to be able to catch peoples attention and hold it. seems like its only for windows XP... It was my pleasure ;-) After running the installutil on the server it still didn't show up in the Services list. Can't the server just run as a normal program as well. It's kinda lame that I have to jump thru so many hoops just to try it out. You should post in the MS newsgroups. It helps a lot. Start threads on forums. Add the site to google. I pimped your tool on a dutch forum (gathering.tweakers.net) in a thread about buildtools. Advertise more through simply PR. :) Look at Eric Smith. He marketed his tool this way and his tool is now one of the most well known ones. (I did the same with LLBLGen, and it works :)) Sorry Jan, I have some problems in vs.NET 2003. I keep getting a "Could not find file "" in the referenced libraries." Even with Sorry Jan, I have some problems in vs.NET 2003. I keep getting a "Could not find file "" in the referenced libraries." Even with your own example of xmlreader. And in C# and in VB.net Personnaly I think a service is the correct way to run the server. If the install and the uninstall work, there should not be any hoops just to try it out. Y Hi Yves, I think you have a small problem with the selection of the Type to be found. My macro is written so that it searches for the type that is typed in front of the cursor. So you don't need to select anything. Just type the name of the type, without a space after it, and call your macro. Let me know if you have any problems, Jan Btw: I'm really looking forward to your blog! he-he. ass.GetTypes First of all, congratulations!! I browsed through the article and liked it, but there is one aspect of collection usage which you neglected to mention- serializability. Quite often the collections must be portable enough to be remoted or web-serviced. This type of requirement is often known in advance and should be factored in when choosing/designing data structures. Thx Addy Thanks for your tip! The next part of the article series will cover building strong typed collections, so Serializability will certainly be mentioned. Jan What about integration with SourceGear Vault??? We've opted for this instead of VSS... Well, generating files for still changing projects only online does not help at all. Great macro, however, I've made to changes. First, the namespace directive is added AFTER existing directives, and second, the macro will check first, whether the namespace is already added: Public Sub AddDirective() Dim text As TextSelection = DTE.ActiveDocument.Selection() text.WordRight(True) While text.Text.StartsWith(keyword) And Not alreadyUsed Dim startpt As EditPoint = text.BottomPoint.CreateEditPoint() Dim endpt As EditPoint = text.BottomPoint.CreateEditPoint() endpt.EndOfLine() endpt.CharLeft() If endpt.GetText(startpt) = t.Namespace Then alreadyUsed = True End If text.LineDown() text.StartOfLine() text.WordRight(True) End While If alreadyUsed Then DTE.StatusBar.Text = "Namespace " & t.Namespace & " is already imported." DTE.StatusBar.Highlight(True) text.MoveToLineAndOffset(line, 1) Else this thing doesn´t work!!!! this thing doesn´t work the site doesn´t work the tool doesn´t work i am frustrasted digitald0815@freenet.de I tried it with one of my .csproj files and got: System.NullReferenceException: Object reference not set to an instance of an object. at Hippo.BuildFileBuilder.Builder.ReadProjectFile(String projectContents) in C:\Visual Studio Projects\Junk\Hippo.BuildFileBuilder\Hippo.BuildFileBuilder\Builder.vb:line 276 at Hippo.BuildFileBuilder.Builder..ctor(String projectContents) in C:\Visual Studio Projects\Junk\Hippo.BuildFileBuilder\Hippo.BuildFileBuilder\Builder.vb:line 233 at Hippo.BuildFileBuilder.Web._default.Button1_Click(Object sender, EventArgs e) in C:\Visual Studio Projects\Junk\Hippo.BuildFileBuilder\Hippo.BuildFileBuilder.Web\default.aspx.vb:line 38 Hey I live in Belgium and I blog. Just venturing into dot net. Hope to blog about it... I thought that Belgium was barred from TechEd? Supposedly Bill Gates is getting pressure from the US DOD. Well anyway maybe they can all say they were crazy for the day and had stood down. Sort of like the Belgian King did. Amsterdam is not in Belgium, Don ;) It's the capitol of The Netherlands, you know, windmills, wooden shoes, tulips (and legalized pot and prostitution, the Amsterdam red light district, gays can marry ) I'm happy it's gonna be close to home (I'm in the hague, 45KM's from amsterdam) :) Now I just have to save some cash to pay the entrance fee :) Would they give free tickets to Uni students? They did for the Devdays in november...One more reason to stay in college :) Thanks for sharing this! I'm new to .NET and this is an immense help. If the flight was the only problem, you could just as well have gone this year. We were offering free flights to U2U clients... y Yeah - that was circulating awhile back. I sent it to my CS professor who was quite a nut about maintainable code and style. I thought it was a lot funnier until I dealt with code that prescribed to a few of the techniques mentioned. Apparently, whoever wrote the code thought this article was not being sarcastic :) Yup, VMWare rocks - with one caveat. You need a pretty beefy machine. I'm running vmware on a Pentiu III 1 Ghz w/ a half gig of RAM, and it's a little slow. It's a bit snappier on a faster machine, though. If you run vmware, invest in plenty of RAM, and disk space for OS images. I do a lot of release engineer / setup.exe kinda stuff, and tools like this and ghost are extremely valuable. "Connectix is a product with the same features and is recently bought by Microsoft. I haven't tried it out, so any experiences/comparisons would be appreciated!" Jan: I had problems installing 9.1 with it. :( I have always thought VB.NET was missing a programming syntax, on top of a few thousands other things. Glurp! Wow, what a nice comment Stephane! It's really adding something to the discussion. Thx! Huh. Forgot about edit and continue, how that for forgetful?? Its my single most important gripe that its not there.... ISerializable Hi Jan. I've been a long-time fan of VMWare too, and I've been using it for all sorts of things, but recently I've converted to Connectix Virtual PC. My three main reasons are: 1) CVPC doesn't make such a big assault on your machine. There are no additional netwerk adapters that appear after you've installed it, and there are no services that run all the time, even if you're not using it. 2) It allows you to resize the screen of your virtual machines to almost any size you can drag the window to, instead of the "standard" sizes that VMWare allows. 3) On my machine, I'm pretty sure CVPC is faster than VMWare (haven't doen any measurements, it just feels snappier...) Apart from that, they're pretty much alike. Let me the first of hopefully many to correct you both in that C# has Edit and Continue in VS.NET 2003 (which has been out for some months). And Visual Assist.NET () adds #1, #3, #4 among many other things. I find it a must have. Hey, I have an idea...why the fuck don't you just turn C# into VB.NET...that seems to be what you are trying for... I've installed the current Debian distribution on VMware 4. No problems so far. Since I am a C# developer with C++ background, please let C# case sensitive! Regarding the With statement, it's hard to convert VB.NET samples to C# because of the missing WithEvents statement. Hi, i'm trying to implement the same thing but it still doesn't work !!! When there is only the remoting "concept" everything is fine but when I introduced events nothing works anymore !!!! Furthermore, the client application have to be in the same directory as the server... Can somebody help me ? Thanks It would be nice to have: 1. the build requester in the build loggings. (windows account on client?) 2. A possibility to copy the directory with the dll's from the server to the clients from within Hippo. 3. Does the path has to be set on the server. This requires rebooting of the server. (for the system account in the service to get it) Can it be done in the config file? 4. some errors result in a crash of the client programm. (servfer down) 5. The error you get when the path to nant isnt set on the server could be better. Thanks. Which reminds me of Joe Reich's tip: C# is also missing a background compiler, which helps greatly with IntelliSense. VB.NET is missing operator overloading. C# is missing a simple mechanism for late binding. VB.NET is missing unsigned types. C# is missing optional arguments. VB.NET is missing unsafe code. The list goes on and on. And to clarify a previous post, VS 2003 does NOT provide Edit and Continue for VB.NET or C#. Request: It would be nice if you could update the server code from the client without requesting a build. (You also can do a "get latest version" on the server, but there are no working folders set.) Automatically creating a buildfile for the project and the possibility to manage the main buildfile (add project, move project in buildsequence, comment project, ...) would be nice options. Very minor issues: If you start the Hippo client with a shortcut, and the windowstate is minimized, you regulary get errormessages but everything works. The Hippo buildfile builder (beta version!) always sets references to c:\Windows somtimes this is winnt. I do not think this can be changed in the website. But who knows, maybe sometimes in the client? The download for 1.2.0 of hippo.net doesn't contain a hippo.server.exe.config file. Is there one available somewhere else? Thanks, Eric I extracted the files to C:\Hippo, went to C:\Hippo\Server and typed: installutil hippo.server.exe and got the following error: C:\Hippo\Server>installutil Hippo.Server.exe Error dispatching command/test named 'Hippo.Server.exe' Here is the list of valid command/test/option names: +------------------------------------------------- | registerdatapath | registersqlanywherepath | registersqlanywherebinpath | getdatapath | getsqlanywherepath | getsqlanywherebinpath | registersqlanywhereodbcdriver | registeroracleoptions | getoracleoptions | validateschemarepovendorinfo | validateuservendorinfo | validatedbisempty | describelastinstalledschemarepo | initemptyschemarepo | relocateschemarepo | relocateuserdb | convertuserdb | convertschemarepo | copyuserdb | copyschemarepo | clientregisterschemarepo | registerschemarepofromfile | adddbset | dropdbset | renamedbset | copydbset | disconnectuserdb | reconnectuserdb | exportschemarepo | exportoldschemarepo | importschemarepo | exportschemafromuserdb | upgradeschemareposystemversion | upgradeuserdbsystemversion | unlockschemarepo | unlockuserdb | catchuptestdbs | cloneschema | deleteschema | purgedeleteduserdb | restoreuserdbschema | clearstatemachine | setschemarestriction | setrecursionlimit | dropchoicelist | checkin | uncheckout +------------------------------------------------- I'm guessing this is because the download doesn't include a hippo.exe.config file. Bored??? on your honeymoon??? You are a true geek ;) Congratulations on the wedding! And Roy's right - leave the books at home. If you're bored, that's sad :) Which time zone are you in...? It's still July here in the UK. 8-) Jan, I actually found your Blog site from reading your article. Yep, like the summary of Collections alot. Explained the difference better than reading 5 times that in a book or working it out for myself reading the .NET reference. thanks, Adam Definately a cool feature that we need for our projects but sadly not yet integrated with the VSS piece. Any plans to release a beta version packaged together in the near future? The big win of course is that the nANT script is always in sycnch with your project. What about being able to generate a script for multiple projects within a solution, any plans there? This is something we could definately use right now and as long as its fairly solid we'd be willing to demo it and give you feedback. keep up the good work, Scott Agree fully with Scott. Will the buildfile builder be able to handle multiple project files in a solution? has the object reference bug been fixed yet? If that is this the case it could be very useful tool for our project. My solution to this when I first ran into it was to write a quick article about it on my ASPAlliance column. You'll note that my first articles at are REALLY basic ASP stuff. This is because I was a consultant learning and bouncing around between client locations, so this provided me with a central repository for such snippets that I wanted in my virtual toolbox, and had side benefits of helping others and providing me with great exposure. You might use the articles feature here, your own site, or sign up to become an ASPAlliance columnist and get free space there plus a lot of exposure in the community. Steve When will the buildfile generator be implemented in Hippo? I would like to try to push my company to using this, but since our project files change frequently, it is not an option to update the buildfile using the leadit website. If it's not going to be included anytime soon, is there a command line generator or are the classes available in cvs to do this? Most of what I have seen requires a desktop app that uses a web service to get updates. Is there a .Net1.0 build of HippNet? freeware :> Imports EnvDTE Imports System Imports System.Collections Imports System.Diagnostics Imports System.Reflection Imports System.IO Public Module TypeFinder Private Class TypeCache Public Shared Cache As Hashtable = New Hashtable() End Class Private Function SearchTypeInAssembly(ByVal typename As String, ByVal ass As Reflection.Assembly) As Type DTE.StatusBar.Text = "Searching for '" & typename & "' " & ass.GetName.Name & "..." If (Not TypeCache.Cache.ContainsKey(typename)) Then Dim t As Type For Each t In ass.GetTypes If t.Name.ToLower = typename Then TypeCache.Cache.Add(typename, t) Return t End If Else Return TypeCache.Cache.Item(typename) End If End Function Private Function SearchType(ByVal typename As String) As Type typename = typename.ToLower.Trim.Replace(";", "").Replace(".", "") Dim ass As [Assembly] Dim currentAss As String Dim currentPath As String Dim t As Type Dim p As [Property] 'search in principal assembly ass = Reflection.Assembly.LoadWithPartialName("mscorlib") t = SearchTypeInAssembly(typename, ass) If Not t Is Nothing Then Return t 'search in assemblies in solutions Dim proj As Project For Each proj In DTE.ActiveSolutionProjects 'search in references of current project Dim ref As VSLangProj.Reference For Each ref In proj.Object.References ass = Reflection.Assembly.LoadFrom(ref.Path) t = SearchTypeInAssembly(typename, ass) If Not t Is Nothing Then Return t currentAss = "" currentPath = "" 'obtain properties to get assembly associated with project For Each p In proj.Properties If (p.Name = "OutputFileName") Then currentAss = p.Value ElseIf (p.Name = "LocalPath") Then currentPath = p.Value End If If currentAss.Length > 0 And currentPath.Length > 0 Then Exit For 'search in the assembly associated with the project currentAss = currentPath + "bin\" + DTE.Solution.SolutionBuild.ActiveConfiguration.Name + "\" + currentAss Dim tempAss As String = currentAss + "2" Try Try If (File.Exists(tempAss)) Then File.Delete(tempAss) End If Catch End Try 'we copy de assembly to be sure that when the compilation will be launch the file could be written File.Copy(currentAss, tempAss) ass = Reflection.Assembly.LoadFrom(tempAss) t = SearchTypeInAssembly(typename, ass) Try File.Delete(tempAss) Catch End Try If Not t Is Nothing Then Return t End Try DTE.StatusBar.Text = "Could not find type '" & typename & "' in the referenced libraries. Make sure your cursor is right behind the text or selected!" DTE.StatusBar.Highlight(True) Return Nothing End Function Public Sub AddNamespace() Dim text As TextSelection = DTE.ActiveDocument.Selection If (text.Text.Length = 0) Then text.WordLeft(True) Dim t As Type = SearchType(text.Text) If Not t Is Nothing Then text.Text = t.FullName text.EndOfLine() DTE.StatusBar.Text = "Ready" End If End Sub Public Sub AddDirective() Dim text As TextSelection = DTE.ActiveDocument.Selection If (text.Text.Length = 0) Then() 'Skip the headers text.WordRight(True, 2) While text.Text.StartsWith("/*") Or text.Text.StartsWith(" *") Or text.Text.StartsWith("//") text.LineDown() text.StartOfLine() text.WordRight(True, 2) End While text.StartOfLine() text.WordRight(True) Dim lineTarget As Integer = text.AnchorPoint.Line Dim correctPlace As Boolean = False While text.Text.StartsWith(keyword) And Not alreadyUsed And Not correctPlace Dim startpt As EditPoint = text.BottomPoint.CreateEditPoint() Dim endpt As EditPoint = text.BottomPoint.CreateEditPoint() endpt.EndOfLine() endpt.CharLeft() Dim currentText As String = endpt.GetText(startpt) If currentText = t.Namespace Then alreadyUsed = True ElseIf (currentText < t.Namespace) Then text.LineDown() text.StartOfLine() text.WordRight(True) Else correctPlace = True lineTarget = text.AnchorPoint.Line End If End While If alreadyUsed Then DTE.StatusBar.Text = "Namespace " & t.Namespace & " is already imported." DTE.StatusBar.Highlight(True) text.MoveToLineAndOffset(line, 1) Else If (correctPlace) Then text.GotoLine(lineTarget) End Module Nifty and useful especially for middle & backend people (like myself) when they need to design a colorful presentation. Cool! Thanks for the credits :-) However, I did some more customization some time ago. I will compare it to your new version and maybe suggest some more improvements ;-) Regards, Thomas Really good macro...I don't know how easy it will be, but is there any way to have it look up namespaces in the same assembly - so, say for instance you defined a struct in say MyProject.Structs and in MyProject.Classes.Test you're typing away and want to use one of the structs in MyProject.Structs, it'd be very useful if the macro could look in there too (or am I missing something and it already does it??) Umm...ignore me, I see it already tries that...however for me at least (using VS.NET 2003) it fails - seems to get stuck in a specific Assembly and get nowhere else so doesn't find the correct NameSpace Hi Jan, Other enhancements I have added in my version is to wrap the process in an Undo context, so a single undo will change everything back. simply add this before your changes: Dim CloseUndoContext As Boolean = False If DTE.UndoContext.IsOpen = False Then CloseUndoContext = True DTE.UndoContext.Open("TypeFinderAddDirectiveMacro", False) End If and add this after your changes: If CloseUndoContext Then DTE.UndoContext.Close() Another change I made was to check for the "Option" statement (VB only) and make sure the Imports statement was after that. With the changes that now check for existing namespaces that might not be needed, but worth looking into. Keep up the great work. From a performance standpoint, your conclusion misses the mark IMHO. Client-side validation is simply not minimal nor trivial. Sure, business logic tiers can do the most sophisticated validation possible and each tier should be responsible for complete error and validation checks. Sure, point by point everything you say before your conclusion is dead on. But to conclude with the phrase "...some minimal trivial validation at the client side..." is to completely miss the consideration every good developer can never forget: performance. A much better way to conclude this excellent post is "... as much validation as the client side can handle...". Dave You are completely right! I maybe did not speak about performance. But I do think not all validation can be or should be on the client. I'll make some adjustments to my conclusion. Thanks for your reaction! Especially in a SOA, you'll need some sort of client-side validation. Validate the message against an XSD for example before sending it to the server. If you're using Enterprise Services or Remoting and your entity objects are [Serializable] .NET objects, then you can also work alongside the metadata level by defining some new custom attributes, like public interface IValidatable { // empty. This is just a marker. } [Serializable] public class Customer, IValidatable { [MaxLength(30), NonNull] public String Name; [MaxLenght(35), NonNull] public String Firstname; [NonNull, RegExp("...")] public String SSN; ... } You can then generate a generic routine public ValidationResult Validate(IValidatable obj) { ... } this routine would check for all fields/properties and their custom attributes. It would then extract the current values of these fields and compare them to the definitions. The ValidationResult object works like a collection which contains as many validation errors as found. That is, if FirstName == null and SSN doesn't match the regexp, then ValidationErrors would contain two entries pointing to the FieldDefinition object and to the validation rule which failed. These single ValidationError objects will also contain an human readable message. The ValidationResult would also include some generic Enum value which specifies "Ok", "MightBeOk", "IsNotOk" or whatever ;-) The cool thing is that you can develop different validation routines for server and client while essentially reusing most of the logic. And you can generate some nice documentation by using Reflect - this documentation would then also include all your validation rules. Just an idea, though ... -Ingo Oh ... you can make it even more flexible if the new custom attributes (like the NonNullAttribute) contain the logic to check the current values. let's say: public class ValidationAttribute: Attribute { public abstract ValidationError Validate(object currentvalue, string fieldname); } public class NonNullAttribute: Attribute { public override ValidationError Validate(object currentvalue, string fieldname) { if (currentValue == null) { return new ValidationError(fieldname + " should not exactly be null."); } return null; } } The generic Validate(IValidatable obj)-method would then in fact first iterate over all existing fields and properties, then iterate over each custom attribute of the given field/property, checks if the attribute derives from ValidationAttribute, and calls Validate() on each attribute, passing the current value and the field name into it. -Ingo I would turn this upside-down. Given the current platform, by default full validation should be implemented both client-side AND service-side. Of course deviations form this scenario do apply, but each one should be a consious descision. Unless we can build a Palladium style trusted distributed computing environment, there can be no exceptions to full validation on the service-side. You can not trust your clients integrity. Client side there can be lots of reasons to skip full validation, as various trade-offs apply, but the principle should remain: "full validation unless ...". I am speaking off the top of my head here and this was awhile back... I vaguely remember some issue I was dealing with then the browser didn't support JScript for whatever reason and thus client-side validators weren't rendered out. I assumed, incorrectly, that the validation would automatically be pushed to the server. If I recall correctly, the server-side validation is only done with an explicit call. So, as long as my memory isn't playing tricks on me, that's one more reason why client-side validation shouldn't have too much weight put on it. I think you summed it up nicely - "as much validation as possible on the client-side". It's best for perceived performance to the user and scalability by distributing the (arguably minimal) cost of validation to the client. But, you still shouldn't trust that necessarily. Remember, it's also possible for me to post a malicious HTTP request to the page w/o using your form at all. Your client-side validation doesn't help much there! For End-2-End .Net apps I tend to put validation code (not strictly business rules) in a DLL and I use that DLL both on the client and on the server. On the server I also check all the business rules. For other client technologies I tend to implement some validation, it depends on the technology and on the time that I have. Jan, for the record I totally agreed with everything you posted up to the conclusion. I just thought a few adjectives in that conclusion changed the entire meaning of it. Thanks for understanding what I was saying! As for the need for server-side validation, I agree with all thoughts presented here. Tim... yes, malicious HTTP requests must be blocked on the server end. Jan... client-side validatation most certainly doesn't lessen the need for server-side too. Truth is, every single piece of n-tier development should have all validation and error-checking possible. The objective at each tier may be quite different, but the reasoning behind it isn't. Client-side should always be trying to minimize traffic over the wire. Business-side should always be concerned with logic integrity. Database-side should always be concerned with security. Obviously there's more concerns too... data types, invalid keys, transaction rollbacks, even keeping state of what IS valid. Dave sums it up nicely... so, who are you and do you have a weblog? :) (I'd like to subscribe) Nope. I believe I'm becoming well-known around these parts since I do NOT have a weblog yet comment! :P I almost started one once but realized I lacked the discipline to post regularly, plus the focus to speak consistantly. Um, I lurk loudly instead! This is an interesting topic. (Jan,I actually came to your blog to ask you a few offline questions about hippo.net which i will submit to you seperately ). But back to validation. I am currently in the process of creating a Web services framework where I am consulting (a large cable company in the southeast). We are going over security, exception handling, configuration and last but not least validators. Ingo, i was actually approaching the solution in a very similar manner to what you described; however, I was looking hard at creating a base business entity class (for all of my business objects ) that is derived from ContextBoundObject. The advantage of using ContextBoundObject is that I can create attributes (much like you describe, but they implement IContextAttribute). I can then decorate my properties and methods with these attributes. The key thing about ContextBoundObject is that it enables me to intercept the calls to the assignment of a property or method parameter and evualuate the assignment based on my custom attributes. I can then throw an error if they fail validation. What are your thoughts on this everyone? Has anyone done much with this? Mathew Nolton Jan, I was going through the installation of hippo.net and I noticed a couple of things: 1) Your readme talks about a config file for your server but the zip file does not have one. 2) When I start up the client I get the error Requested Service not found (its probably a configuration issue?). I traced it to the line of code in main.vb If Me.GetRemoteControl.Notifications.Length > 0 Then Any thoughts. Mathew Nolton Oh man. What a neat macro. (All four) thumbs up. vb project file has ..\WINNT in the path. Visual studio translates this when different users open the project (i.e. WinXP , Win2k). The buildfile generator uses the ..\WINNT verbatim, instead of checking if it shouls use ..\Winnt or ..\Windows. maybe should use the System.Environment Static properties? I am having the same issues as Scott... this macro just hangs for me saying it's searching assembly xyz. Cool idea though. The service had problems accessing a VSS store on a network drive. I had to download the source and copy the code from the service into a form. Once it was running under my username it worked fine. If you have more than one project defined, you should be able to view them all from one client, instead of having to change the Hippo.Client.exe.config file to point to a different project name every time. You should also be able to change the configuration from the client (change the VSS path, username, client directory, etc.) Sorry Jan.. I think this is a bit premature! :-( Will Michiel come and talk about something else? Testing... The joke is quite funny but worth thinking. Agree with Adam, It would be really useful if more than one project can run in the client without having to change the config file. Have you any planst to implement this? Bert-Jan Can only mean good things for Mono, not sure the reporting of 'Mono as a "Linux/Unix" port of Microsofts .NET web services platform' :) .NET does more than web services and Mono runs on more than Linux/Unix :) Thanks for sharing the information Jan. :) This is excellent news, there go my savings ;) Jan, ik wou vanmorgen je artikel "Exposing custom-made classes through a Webservice ..." lezen maar het lijkt verplaatst te zijn. De huidige link is : Met vriendelijke groeten. Thx! wow! good posting... I'll have to print this one out (& scratch head)... Ques: any thoughts on exception (error) handling) on this one? I've come to love the power and flexibility that CodeDom/Reflection.Emit and Reflection can provide as well! Error handling should be implemented (in my opinion) at, at least, 2 places: 1) When invoking the GetValue method: return (string)getValueMethod.Invoke( tempClassInstance,new object[] {this}); 2) After compilation: CompilerResults results = compiler.CompileAssemblyFromDom(compilerParams, unit); When there are compilation errors, no exceptions are thrown. But you need the results.Errors collection to check if everything went ok. Jan This is an inefficient way to do this (IL and emit would be far better). However, you need to keep in mind that there is currently no way to unload an automatically generated class from the app domain, so if actually utilize this customization, you are eventually going to run out of memory if you create enough customer objects, regardless of whether they have been garbage collected. Of course, the proper way to do something like this would be to implement a custom format provider or just use String.Format when you were actually making use of the data (probably the best option)... It makes a lot more sense that way anyway and will result in much cleaner code that is easier to manage. Jesse Ofcourse the example I showed is quite trival and is only for illustrating how the CodeDom and Reflection can be used. Altough I aggree with your statements, I think that there are limitations with implementing custom format providers. If my sample would be used in real life, the _toString would not be a property of each Customer instance, but it should be in a config. file or something like that, so only a very limited number of assemblies would be compiled in memory. Thanks for your thoughts! Jan Can you debug the Emitted IL? I guess you can debug it as IL, but that's not what I'd consider exactly convenient, versus debugging C# code. Also, can't you achieve pretty much the same flexibility with regular OO programming and good use of patterns? I love Skype! I gotta add one of those to my blog. What about the adware/spyware that Skype could be installing? Their license states that you give them and 3rd parties the right to install unrelated software on your machine without asking you first. My pleasure... I think this is a great tool! I was going to manually run source safe batch commands and then run nAnt when my co-worker found this tool (I had found Drago.NET at about the same time). This tool will let each developer run builds on the integrated testing server once they have checked in the code to sourcesafe without having to give them remote/telnet access to the server. A couple of suggestions. The VSS database and login/password should be passed from the client. This would be more flexible and secure. Currently, once someone has access to the port, they can get into the VSS db since all the credentials are on the server. Another suggestion is to support multiple projects. If I have time, I might try and implement these myself. If I am successfull, I'll send you the changes. There are some installation issues, but these have be previously documented and I was able to find the solutions in the forum. Correction. My last post should say Draco.NET, not Drago.NET It's very geeky - just beware of the dangers of recursive self-quotations in a stack-based world ;) Clements Vaster of Newtelligence has already implemented this kind of attribute based validation more than a year ago in their SDK. They call it constraints and have implemented 20 constraints. (with sourcecode) Yes, I'm aware of that! Many examples of the technique that I showed are available on the net. I just used the validation example to illustrate the use of attributes (which was the goal of this article). So I do not want to pretend I "invented" the technique, I just used it to explain attributes. Sure I did implement this already, but I didn't write an MSDN article. ;) I have installed Mandrake9.1 and added the line "MII_NOT_SUPPORTED=yes" but the only problem is that I can not find the PC/Net 32 network driver. It is not in the list during install. With the line added, I can get an IP adress but can't go on internet. Is this the driver? or am I doing something elese wrong? thanks, Jerom I think they goofed.... not there now. I hope you're watching those MSDN TV videos while *being driven* to work, and not while *driving* to work, Jan ? :-) Patrick, ofcourse I only listen to them while driving (I do not have a personal driver, but I do know some other Belgian .NET guys who sometimes have one!). But since almost all of the Belgian traffic is standing still most of the time, watching MSDN TV would be possible too! :-) CU Jan! I bought a PCMCIA hard disk for storing even more on my PocktPC... (5GB - 200 €) 5GB! Thats huge... I'm still praying MS will make the webcasts available for Pocket PC. Enjoy the PDC! Jan great! Belgian .NET guy with a driver... What a snob. :) Anyway, no more driver for me. Plaster was removed last Monday for my trip to LA. Should have thought about saving some webcasts to my notebook's HD. Then I could view them on the plane... Have you been spying on our company meetings? Excellent news ! If only the registration form could work ... well i guess it'll be fixed soon. See you there Jan :) Is there a target release for integrating the buildfile builder into the build tool? No there is no date set yet. There are still some issues that need to be solved (e.g. Webservice projects deployment). The best we can do at this point making a suggestion for a build file. Jan I agree here. I have a 45 minute drive to work and then back home and listen to the .Net Rocks audio and even the .Net Show on my iPaq while driving. It is the only time I would get to listen to the Webcasts. They are very informative. Dave Jan, I'm with you on that point. It would be nice to have available in download as well. I was a bit disappointed by LiveMeeting, of course it doesn't affect the quality of the webcast but some essential features are missing. Ben Hippo.NET build is not working correctly. Problems with parsing or reflection. Can you please post the exact error message and when it occurs on the Source Forge forums () ? Thx Jan Anyone get this to work? I've tried installing it through VMWare and on a standalone machine. When booting from the disc, it fails, and when trying from within a windows installation the setup screen just sits there for about 10-15 min and then closes.......... I'll try to fix my installation with the advice above, I have to admit that setting the network connection on Mandrake 9.1 under VMWare is an assault on the nervous system of a newbie like me. Ok, Question... We have been using VPCs here at my company for testing and such, but are having issues trying to log them into the local domain. We have searched all over the web and can't find any posts about anyone else having trouble using the Virtual Switch functionality. You see, our issue is this. We want to set up a VPC with Server 2003 and be able to access it from anywhere on our lan. Well, if we use NAT and try to ping the box we get a response from the Host box and not the VPC box. If we try to use Virtual Switch we can't even get the VPC to obtain its own IP using DHCP. We have tried this on numerous different Host PCs and still the same results. It doesn't even matter what OS the VPC is running, the only way we can get a network connection is using the NAT configuration and of course this will not work if were trying to run the VPC as a 2003 Web Server. This is also the case at my house in my network set up at home. Any ideas? Has anyone else had these issues with VPC? Other than that, the product is awesome! VPC has had no problems on my machine obtaining an IP from DHCP. Maybe an ipconfig /renew might help? Just grasping at straws... Well it's quite possible our issue is that we are using 5.2 the Connectix product and not the recently released MS version 2004. We are going to download the 2004 version, and I will keep you all updated. Nice Tip Jan! Since you are looking at that, maybe you'd be interested in my own IBuySpy portal modification. I created it mostly because i felt that these portals lacked customization. My portal is almost completely CSS based regarding it's looks! A lot of work still to be done, but you can take a look at it at my website, You could improve a little bit your stylesheet by specifying directly at the <xsl:for-each> level, that you want to loop on the first 5 items : <xsl:for-each ... </xsl:for-each> No more need for a nested <xsl:if> ! Hope this helps ! And here's another variant. This one displays all the titles, but only the description for the first title. On all the other titles the description is shown on mouseover. René <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet <xsl:output <xsl:template <!--Get max 25 articles --> <xsl:for-each <!-- Build compete A element --> <a> <xsl:attribute<xsl:value-of</xsl:attribute> <xsl:attribute_blank</xsl:attribute> <xsl:attribute<xsl:value-of</xsl:attribute> <xsl:value-of </a> <br/> <xsl:if <xsl:value-of <br/><br/> </xsl:if> </xsl:for-each> <!-- Show feed description at the bottom --> <br/> <xsl:value-of <br/> </xsl:template> </xsl:stylesheet> I noted a similar concern to the ASP.NET team last year when I first saw Whidbey. They are doing lots of good things, but it looks like they are working somewhat in isolation. I never heard back by the way. :) One thing I'm really curious is how Constraints (at least I think that's what they're called) work in VB.Net? class declaration should be: public class MyCollection<itemType>:System.Collections.CollectionBase right? That's right, seems the some tags are left out because of the HTML. Sorry for that. Jan Jan: Until MS implements your suggestion, you may be interested in WM Recorder:! But how to do that by configuration file? I tried again and again, the configuration file can't work. the config file is as: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.runtime.remoting> <application> <service> <wellknown mode = "SingleCall" type = "Etone.RegistBiz.RegistTx,RegistBiz" objectUri = "RegistTrans" /> <wellknown mode = "Singleton" type = "Etone.RegistBiz.RegistQry,RegistBiz" objectUri = "RegistQuery" /> </service> <channels> <channel ref="tcp" port="9000" /> <serverProviders> <formatter ref="binary" typeFilterLevel="Full" /> </serverProviders> </channels> </application> </system.runtime.remoting> </configuration> and how can we do that in IIS hosting? Neither program in Global.asax nor configuration file can do that. I am so sad, could you test it and help me? cool I like that you are adding new features, though getting binaries isn't high on my list. I would rather see some bug fixes, please. Do you monitor the sourceforge bug list feature? Or forums provided by sourceforge? They are very quiet. :) Jan, It is very useful samples. Where did you find syntax for it? I am struggling through converting C# sytnax of Generics to VB.NET. Thanks, Maxim Hi Maxim The C# syntax I found on the net (e.g. from the MSDN site), but the VB.NET syntax I had to "discover" myself. If you have problems, let me know and I'll see if I can help you out! Jan Actually, the syntax for multiple constraints will be changing, the one in the PDC build was just a preliminary syntax. The new syntax will look more like "Class MyCollection(Of T As {Entity, IComparable})". I really need to get to blogging about this... :-) Hi Paul, thanks for the information! Jan, Thanks for the refernce to the article. It does contains wealth of information. Maxim Paul, I think you should give us some good sample code on this topic. I think It is one of the great feature we are going to have with new release. By the way new syntax looks confusing {} do not really makes sense for VB developers Sorry I spilled the beans Jan :-). I didn't know juu were just setting this up... No problem Patrick! Every IT projects need a thight deadline. ;-) cu yea, this is really really sad, this is the kind of thing that makes me want to ditch .net altogether. I have been struggling for this for a week. EVerything is fine until i try to add events, then nothing works. Does anyone have a SHIM class example in vb.net ??(); Hello Jan, The page currently takes too long too load (+- 20 seconds). Do you already use caching for the RSS data? I am very new at ASP.NET this look as if its what I need to complete my assignment. But I dont know how to trigger it below is my problem. My purpose is to dynamically change all the text description on all the objects and all the DIM that contain Error messages by read a DB table to determine the language by user name and then read a DB table by ASP page and language to get all the objects that needs to be changes. Can I email you my code? Wow, that's a nice piece of work. Thanks so much!! Thank you!! Hi, Your blog contains good info. Keep it up. ok, since all the help files are in chinese traditional, i'll ask this installation question here-- does CodeLibrary install its own access db engine or does it require something already present on my machine? thanks! Jan, Did you try modifying stuff that comes out-of-the-box? When I played around with it, I had several things I didn't like: - I couldn't just modify the look of any page I wanted - I often found it hard to find out where I should look for certain things - Once I found out how to create a page with a certain template, I saw it indeed did use the template, but it still contained certain things like the stuff at the top and left. - The template of that page I added couldn't be changed anymore with the webinterface once it was added. Perhaps I tried to use it for the wrong thing: instead of using it as some 'document sharing tool', I wanted to see if I could use it as a base for building webapplications. From my experiences, I came to the conclusion that I shouldn't want to do such a thing. That's why I started with our own tool, which comes has the cool things in SharePoint, but not with the limitations I experienced. Also Just some FYI, you might not want to put your assembly into the GAC since it would give it FullAccess rights. To have a bit more security you can just install it into the bin directory where sharepoint is installed (usually c:\inetpub\wwwroot\bin). Jeff No offense, but first life will be great because you can do a lot, out of the box. Then you start that life will be even more great when you... and then A LOT of things come up you only can do with access to the server. I think that still kinda sucks! :) Considering deployment: You might want to have a go at the Microsoft WPPackaging tool. Get it from. Should save you a lot of tedious work! Jorg Hello, my name es David and now I'm studing the SoapExtension. I have a problem and I don´t found the solution. I do a client application that call a webservice. In my webservice I have a class that inherit of SoapExtension and save all the message that my web service receive. The problem is that never execute this class, and I have the correct configuration in a web.config file. (I use Visual basic.net) Where is my fault? There are another option. Thanks for all David Jan, all the best wishes to you too ! And don't forget, you can help making 2004 an exciting year by giving us more of these great articles of yours... Greetz, Faan At this point I still don't have any MS reaction... :-( After HOURS of trying to find a solution to the limiting the number of headlines, there you have it in two lines of code. THANK YOU! And what about having the possibility to post to WSE Web Service? Hi Laurant, what do you mean by WSE Web Service? DIME attachements, encryption, security, ... ? Jan And what about being able to submit rich text to for example a webservice? Last time checked I noticed it sends an XmlNode to the webservice. On the webservice however I could only get a certain bit of the text - if I would submit "hello <b>there</b>", I would only be able to get "hello", as if the rest was never sent. I think there was a workaround for this by doing some scripting. It would be nice however if this kind of trivial stuff would be easier. > it's a pity that you cannot use > managed code in InfoPath. Jan: You can! Yes Jan I mean exactly that! Thanks Phil for the pointer, I did not realized that there were a project toolkit for Visual Studio. I did use the Word toolkit to build the article publishing on my site but did not know that there were one for InfoPath. Thanks !!! Phil I don't think the InfoPath Project Toolkit for VS.NET is already available. Please correct me if I'm wrong! Right :-( They talk about it in the ppt presentation, but I could not find anywhere ! It looks great, i already added your fead to bloglines :) Curious to see future releases :p very cool - keep it up... Looks cool! I'll certainly use it whenever possible. Is that really the SQL code that ObjectSpaces generates? This code doesn't seem like it would work in SQL Server 2000: Declare @r int, @e int; Set @r=@@ROWCOUNT; Set @e=@@ERROR; If @e = 0 Begin if @r = 0 RAISERROR(''No rows effected'',16,1); If @r > 1 RAISERROR(''Change effected %d rows'',16,1,@r); End You need to set @r and @e in the same SELECT statement. Unfortunately, even a SET will reset those values. What product(s) did you use to capture the the screen shot displayed above? The new MS FrontPage 2003 provides some features that complements well with SharePoint sites - data views, mods, etc. A nice way to start the new year with a bang ! I guess this is the first article online that gives an introduction to objectspaces. And great that you use the mapper utility in your examples because this one makes objectspaces much easier to use than writing all this XML manually. My comments are only about the introduction : - There is at least one 'real-world' property that is easier to represent in a relational database than in an OO language : many-to-many relationships. This is easy to represent in an UML class diagram but once one starts coding you'll have to maintain collections on both sides of the relationship. - Real life taxonomies are sometimes difficult to map on current OO models. Like the well known example of geometrical figures : The class of squares is a subclass of the class of rectangles, nevertheless a rectangle has two properties (length and height of its sides) while a square has only one (because these lenghts are equal). Nevertheless, in the current OO implementations a subclass can only extend the number of attributes of its superclass. And this is only one example. - There are other database technologies that allow for an easier mapping with an OO runtime : object databases and XML databases. However, you're right in assuming that these are not widely used. Best greetings, Stefaan Thx for the positive comments! Garibaldi, I used Paint Shop Pro 7 to capture the screens. Bill, I copied the SQL from the Query Analyzer of SQL Server 2000. Stefaan, thanks for your comments! How would you implement many-to-many relationships in a relational DB? It would require some additional helper tables I assume, so it would involve quite some coding. I agree that sometimes real life taxonomies are difficult to implement in an OO model: you provide a nice example. But I do think in general OO models can reflect better a real life situation. Btw, did you have a chance to try ObjectSpaces yourself, I'd love to share some thoughts! Enjoy the holiday, Jan Yep, one uses an association tables for many-to-many relationships with at least the foreign keys of both related entities and possible some extra information about the relationship. And I agree that OO models and an OO centered middle-tier are great for the business applications must of us work on. But when used 'in the full richness of reality' it still has some limitations. And yeah, I played a little bit with ObjectSpaces at home. First without the mapping tool which was really hard because when one makes a mistake in one of the mapping files the error messages of OS are not very informative. Still didn't try the many-to-many relationships though. Would love to play more with this and share thoughts too ! :) Take care, Stefaan Hi Jan, There is a lot of information to be found in the C# language specification 2.0 draft. It is located at: As you were not sure whether the syntax of C# is correct, let me assure you it is. The <itemType> you used after IComparable is used only when the interface (or class) itself is also a Generic. Otherwise, you would not include it. BTW, did you also know that you can add constructor constraints as well. The following sample requires the V type to have a default parameterless constructor. In C# multiple constraints are added to a type in a comma separated list. public class SomeDictionary<K,V> where K: IPersistable, IComparable<K> where V: MyBaseClass, new() { // ... } Hope this helps. Thx Alex! Does it warn and ask confirmation when deleting records? I've created my own extended datagrid with this functionality, but that's all. If your grid has this + those other thing, I gladly use yours :) Geert, at this point: no. But if there is any sample coding available on the net that explains this feature, I could include it. I will send you a mail with my component included. How easy can life be :-) To garibaldi: in the HTML Help Workshop (default installed with Visual Studio .NET) you will find a Image Editor. With the image editor you can capture screens (to place in your HTML help files or for other use) Typical IT Project :-).... Great stuff, Jan! keep up the good work! Sweet! Nice work...keep it up! Really nice Jan. Or just the following will do the same too: myDataSet.ReadXml("") Thanks - great help :). evaluatiuon How would one use a tree or a datagrid that requires a popup button with the ExtendedDataGridControlColumn? How does one add custon columns to the add editor in the grid so they can be picked at design time; editing registry or xml maybe? Will you provide an EnumColumn and a column that uses the UITypeEditor to do the rendering and editing? I know your code isn't open source but how do you render a control that is associated with a column multiple times in a grid each with different state as you did in your example with the group of radio buttons and the progress bar? Jarrad 1) Just create a Custom Control and add a TextBox and a Button to it. 2) Check out this article: It comes down to creating a custom DataGrid class. 3) Mmm, can you explain a little bit more? 4) Each cell is an instance of the control. Nice work Jan! ;-) yukith. string assemblyFileName = Assembly.GetExecutingAssembly().GetName().CodeBase; compilerParams.ReferencedAssemblies.Add( assemblyFileName.Substring("".Length)); thanks dude.. the above thing made my life easier.. i had sorted out every thing except the above :) btw im working on a scripting engine in my application which lets the user to execute functions already built in my assembly :) cheers I used this with a WebClient that wasn't accepting a certificate and it worked fine. Great blog!!! Jan, Just thought I would point out that the link column's alignment property does not work. Every link column ALWAYS ends up left justified. Is there a way that I can correct this in my application? Other than that, it was very useful! Thanks! Jim thanks Yet another case where I am not able to "keep up with the Joneses" ;-) My (free) ExtendedDataGrid control gets quite some attention and every developer knows that the more people use your software the more bugs will be reported. To organize bug reporting, feedback, feature requests, ... I've created a GotDotNet Workspace:. So if you have something to say about this control; please use the workspace. New releases will be put in this workspace, so you can easily track them by using the RSS feed. Everybody who has contacted my concerning the ExtendedDataGrid: thanks for giving it a try. I'd like to ask you to post the reported bugs in the Bug Tracker. I'm sorry for the hassle. Good good and again excellent ;-) 'With' and 'WithEvents' are two completely different things! 'With' allows you to reference an object without having to specificy that object each time, eg: With Form1.Controls.TextBox1 .Text = ... .Enabled = ... End With (simple example, but sometimes you have to set a lot of properties on an object multiple levels deep). WithEvents is short hand for [event] += New System.EventHandler([eventHandler]) Tielens presents some ideas on how to do reporting in a SOA world and poses the question whether or not there are any other know solutions or products: In the "good old days" of pure two-tier Client/Server programming... Ingo Rammer is right, that your reporting will be a service, but that is not really the issue is it? In a SOA, your data doesn't necessarily come from a database, or if it does, it's not necessarily your database anyway. You need to mangle all your fragments of data into one homogenous lump, or have at least similar fragments, to be able to use one of the existing database reporting products. Infopath is ok, but it wasn't designed to work with more than a page or two of data, and ultimately it is more geared towards data entry and validation. The other problem you will run into is printing those reports. (in my unfortunately biased opinion)I think that the XSL-FO spec combined with XSLT is a workable solution, but the design tools are still in their infancy. need product key hope you can help Jan, Congratulations - I've read every fucking post from 100 guys who claim to know what the hell they are talking about - your solution works like a charm. THANKYOU for posting this info. I hope other people with https+httpwebrequest problems find it. good good try setting Option Strict On ...... Come On Microsoft, please do make the webcasts avialable for offline viewing. if so, thousands of developers will be benefitted by it. Hope MS Listens to the Community and makes it avaiable for downloading previous webcasts for the plug! Our little community of Office System enthusiast continues to grow! Chris hi,i want to know : Hippo.NET have support .net frmaework 1.0 verson????? my english is pool,sorry!! Maybe a good remark is also to first create an additional virtual server in IIS so that you can host your WSS sites on that one. Otherwise the default Web site will be used and you will not be able anymore to host 'normal' Web sites (due to the ISAPI filter installed by WSS).! Do you have this code in VB somewhere? We're using VB.Net for our ASP.Net pages and I can't get a decent conversion for this Nice info on WSS, but also the point on IIS 6.0 by Patrick is important. EROL I wish I had though of Patrick's tip when I installed here. Good job I've got a second server for "normal" sites. Nice! That's the reason I still like webservices. I don't see people using webservices yet for exchaning information across internet or anything. But building a Winforms client on top of your web interface seems great! Ofcourse, it's a lot of extra work. The web interface, the web service(s) _and_ the winforms client. But after that you've got some great functionality! Good example. Personally I like to build my own Web services layer around the object model or Web Services of SharePoint itself and not let client applications talk directly to SharePoint via the SharePoint Web Services. If you start for example consuming the Web Services for retrieving the items out of a list, you will start to experience the reason behind it. Yeah, I know... handling plain XML comming from the Lists webservice can be a royal pain in the *ss (been there, done that). I like the idea of a wrapper web service around them ... let's see if I can find some more time! At the website of DevHawk, you can also dfind a Sharepoint webpart for RSS.... It seems to me that VB6 nonsense like WithEvents and optional variables just gets recompiled, anyway. I can't see how you can have a concept like delegates and then break it by pretending events are sent by variable name, not by object. And as for optional arguments...i constructed a class in VB and another in C# that had nothing but a constructor which made three assignments. In VB, I made them optional. In C#, I made 8 seperate constructors for each of the possibile ways to call the constructor (which them called the most complete constructor with their own defaults). I compiled them both with the same switches and dependencies...the VB file was 20% larger. Implies to me that it's coding each calling signature as its own individual method. I'll stick with doing it myself, thanks. vslive is next month right? I have been able to see some of the pre beta 1 bits and they have come along way. I cna't wait to get my hand on them.... I completely agree with you on this! My hope is that these issues get addressed in v3. I suspect with the moving of WebParts to the core ASp.NET 2.0 plumbing WSS will be rearchitected and have some interesting features currently missing! I'm also hoping MS changes the way VS.NET can author to included non-isolated. I'd also like to see the ability to use the AD/AM directory ina FORMS based authentication scenario with WSS. Of course a native RSS/Atom option for all content would also be high on my shopping list for a WSS 2.1 feature! And I too get frustrated with the lack of alert capability on content posted to the main page. They should have included a 'whats new' WebPart which I once saw MikeFit demo in a web cast but it was never posted to as he mentioned it would be.. etc etc hey whats up>nothing here...how are you .im good...well i wanted the new version of msn messenger...if i could but it brought me here.. Jan, I'll be staying at the NH Hotel. Arriving Monday afternoon and staying until Thursday morning. I created a thread on the MsBeluxForums () for those wanting to meet, for example at the Geek Fest at the end of day 1. Anyway, I'll be seeing you soon! I'll be there as well! :p That's me :) I'll do my best to make intresting posts! ;) Thanks for introducing me Jan. The Geek Fest at the Developer & IT Pro Days 2004 will be great. No doubt. It's time I'll teach you to party! ;-) About my pizza restaurant: It still in Alpha :) Jan, I saw that you have installed it on our sharepoint server. I can only say : it's nice and thanks, it gives us a little bit more functionality. You've done a great job with this. terrific! Thanks.. They work a treat but I've not found an application for them on my server yet ;o) Dear Josh, I finally managed to solve my problem an posted a solution in VB under It handles certificate problems, user authentification, asynchron requests and a problem regarding keep alive connections. I hope this will be of some help to you. Karl A new CMS online book has been published - Building, Deploying, and Maintaining Intranet Sites for the Enterprise discusses how a small team within ... Thanks Dear Jan, great webparts. I would like to ask you if I can count on you for developing web parts for my company. In that case please send me your conditions. Regards, José Antonio... It is interesting how open source vs. commercial software is becoming a political issue. But there are ways to approach it. A political party who advocate the humiliation of people (whether admired or loathed, rich or poor) simply show themselves to be immature and unprofessional. I recently joined a University and was amazed at the level of integration Microsoft products have when working together. I can only imagine how this helps organise government institutions. Open source solutions simply aren't at this level yet, and perhaps without a corporate structure organising the teams and projects, will never be. I share your opinion completely! I really don't get how you can write such statements. And why is Spirit posting that video on their site? Doesn't that show of immaturity? Damn... Well, I'm not going to vote for this political party this spring! I'll sign a petition if there is one, or have my name added under an Open Letter if it's a good one. I really can't believe they're making such a stupid remarks, and a political party who is involved in humiliating people should have their own laws used against them. Like the recent law that stops funding parties who discriminate. So far for Spirit being a grown up party. Even better.. They state the example of Munchen in Germany which switched to Linux.:)) Aren't they allready way over they budget?? Yes they are and just because Linux is "free" and "easy" to use. Well except offcourse that installing a printer takes a day on Linux and in XP a minute. But yes I can see the "easy" in use. Shorty I'll say... this even goes way further. Ever seen the Bert & Steve TV?? (its in dutch) I've added my feedback If software needs to be free, I'll like my daily bread for free too thanks! If not, I'll have no money to buy it. Btw: its clear again that they do not seems to know the difference between Free Sofwtare (as in no license cost), Open Software (as in you can extend it), Open Source (as in you have the zillion lines of code), Open Standards (as in ratified by a offical organisation, btw .NET is a ISO standard =>), Proprietary Standards (COM, JAVA, J2EE) Aaah silly politicians, I wonder if they ever read their own job description; I'm sure it does not state: gotta get on TV a lot, no matter what. This is sickening at many levels. First, it is sickening to see that these technical decisions are not being made on any form of merit, but purely on false ideological grounds. The companies behind OSS (Novell (Suse), RedHat, IBM, MySQL ...) are pure commercial entities. They are no different than Microsoft or any other software company in that aspect. Surely no-one is suggesting government should take development and maintenance of basic software platform technology “in-house”? Or should they run on stuff that might or might not be available in future depending on some college kid’s “do I feel like it” 3 month support cycle things? Second it is sickening on a business level. The envisioned OSS solutions are equally and sometimes even more expensive even from a pure license perspective (look up the prices for RedHat Enterprise Linux or the Suse equivalent, StarOffice etc., and compare to the MS licensing). The envisioned OSS solutions require far more expensive consultancy and custom development for things that come out of the box in the equivalent Common Off The Shelve (COTS) offerings. The envisioned OSS solutions do not offer the same usability, both in terms of end-user, application developer as well as operational interfaces. The envisioned OSS solutions do not present a better opportunity for local Independent Solution Vendors (ISV's), who's skill set is more in line with commercial offerings. Typical commercial offerings present better performance on equivalent hardware than OSS solutions. OSS offering are generally supported for much less time (typically 1-3 years) than Microsoft solutions (typically 5-7 years) requiring more frequent upgrades with all the associated costs. Both platforms only tend to adhere to defacto market standards in their own fields, and only pay attention to "interoperation" when requested by market forces. Thirdly it is sickening to see these parties so comfortably meddling in affairs in which they have absolutely no expertise. They are willing to risk an entire sector of the economy that has performed exceptionally well in a free market, for the sole purpose of scoring some fringe left votes in the next election. If they are so concerned about the Belgian IT sector, why do they not follow the advice of that same sector? When asked they (Agoria if I remember correctly) unmistakably told the government to butt out and not meddle. Last but not least it is especially sickening that all the extra cash that will be needed for this “coup” in the future will have to be paid by us, through extra taxes that will need to be raised, making our economy even less competitive in future (*). Here’s a guess: Will a future “mauve” coalition let the blue partner score by having them cut expenses through “outsourcing” all of that expensive IT work to some cheap labor market? (*) for those readers not familiar with the Belgian scene, we have extremely high tax regimes already, and a frightening percent of the Belgian workforce is employed directly in government administration. The coup de grace of the socialist/liberal government here is that they declared a “tax amnesty”. While regular law abiding citizens payed almost half of their paychecks in income tax directly to the state government, those who where “less scrupulous” and worked in the “black” economy can now whitewash all there gains over the years by paying a onetime single 6% fine. I reacted, lets see if it comes up.. Well.. I am gonna vote for SP... this way I can safely download any book, any cd and any movie from the internet! Go for it Spirit! Everything for free! Why pay for intellectual property? And by the way, why do we pay you guys for so much crap? I would really like to implement this into a project I am working on, but I dont know were to find the web.config file. Thx for the support guys! I don't get the point. Linux easy to use and install, well I'm not sure about this. And also, where is the integration like we have with Microsoft products. Jan, let me know, I want to sign also the petition or open letter. And why using an old item about Bill in Belgium ? It has no use. And then the petition. Well I'm not in politics, but I understand Jan (and all the others) very well. Is this what thos politic guys do during the day, time to put some new guys up there. I can only say this... Goe bezig ^o) ironical offcourse. So if software should be free, then i'll have no job anymore Brad, you can find the Web.Config in directory that is used for the website/virtual directory in IIS for the SharePoint site. Jose, I've sent you an email, if you didn't received me contact me (). Well euh Jan bring the aspx online and we will sign your petition amen. PS going for another round in the "Palieter ?" SPA-Spirit are those guys who say everything should be free. Just to get more votes, because they surely do know that nothing is for free (even not Linux - and I'm not talking about those ISO's you can download, I'm talking about enterprise level). And if they change everything to Linux, how much will this cost us tax-payers? Last month, a big e-gov project of the Fed failed (another one). How much does that cost? A lot... Strange that a anti-MS political party is running their site on ASP pages :) according to Gartner () , the TCO of Linux is more expensive than Windows : When they want to buy 8000 PC's : 41840000 Pounds for Windows 43840000 Pounds for Linux => difference of 2 MILLION Pounds !!!!!! (= 3.5 Million Euros) Nice way to spend our tax money ! It's still broken. I attempted to create a NAnt buildfile from a VSS C# project file and received the same error as Craig Wagner did above. What gives? Dat Great job, Jan. We'd like to license these for use on our hosting server -- drop me a line with pricing information. :) This post was very helpful! Thanks! How do you install these web parts? Just import them? I noticed a dll file ...where does that go?. The Infopath SP-1 Preview is available for download. How sweet is that news?? I am currently working on designing a solution that utilizes InfoPath, Sharepoint and Web Services to enhance our sales team workflow. So I was hoping that I could get my hands on the service pack as soon as possible since I already knew I could take advantage of some of the new features that were leaked/released a while ago. I have loved InfoPath since I first played with it during the Office System 2003 Beta. However I was always disappointed at some of the things I found that were missing. The release of the reasonably feature rich service pack helps me believe that there is a lot more to come for InfoPath and while it's potential has yet to be truly tapped...but it will. InfoPath & Sharepoint are tools that are ideal for individuals like myself that are obsessed about efficient processes and information sharing. I am now empowered to create the sites, forms, lists, and workflows that I would typically require a developer to implement. Given that we are a development company, there are certainly lots of developers around. However taking one off client work... Nice. And if you don't want to or can't install .NET assemblies on your (probably hosted) SharePoint site then just use the built-in XML Web Part with a custom XSLT style sheet as shown at ;-) Cheers, Siegfried) encroching on my turf ;-) -th Don't worry about reposting, Jan -- you probably have at least some different readers. I've been meaning to say Great Blog, by the way! Thx Laura! That's cool, though I was hoping for something that would discover sites no longer linked anywhere, mainly to discover orphaned sites (like on a dev server). There must be a list of SharePoint sites somewhere in the db, anyone know how to get at it? I get some error as below. A Web Part or Web Form Control on this Web Part Page cannot be displayed or imported because it is not registered on this site as safe. I'll be there ! I won't miss it! Count me in. :) I try to come also. I am having trouble as well. I can't seem to get the web part registered as safe - I've edited the 6 or so web.config files I could find, but no go. I totally agree! I would love to be able to set up one alert for each of our team's sites. I'd also like to be able to (as an administrator) add alerts for other users. Many many thanks Jan, all of a sudden my SharePoint site has opened up. Its like springtime has come! What is a breadcrumb though? Thanks :-) Breadcrumbs: Microsoft has released SQL Server 2000 Reporting Services. What do You think about using it as a reporting engine? :-). Only 2 days after opening the registration, the session is already full! Way to go, Jan! PS. I'll be there too!... hi My subsites don't show (everything is on the first level). Level is set to 5. Could it be because of a German Sharepoint Portal?. terrific stuff... thanks.. ddd have an application which invokes(calls) a webservice (on IIS 5.0 , windows 2K) every 10 seconds. After a long time(about 4 hours),the call to the webservice fails. Do you know how to resolve the problem. Agree.. I will carbon copy and add my letter to the collection... I used the webcasts to train myself and in-house developers on ASP.NET, and usually like to listen to a few on long flights, which I take often enough to care. Microsoft should enable the dev community to archive these or download them much the same as the .NET show, etc. Especially since they are in WindowsMedia format now anyway. I used to use Anakrino until a friend of mine showed me Reflector. Overall a much better product. Sorry, used this in the Portal (where it does not show the structure of subareas). Works fine on Sites.. Thanks for all this infos.. Nice Info! I too have found the documnetation very limiting. Thanks for this great example. ~Jim. Looking for gives a 404. Can this be downloaded as an .exe from anywhere else? Thanks. You can find it here:... are user defined macros allowed? I was trying to use %src_filename% (from 2002 documentation) but it is not defined for 2004. You have been Taken Out! Thanks for the good post. don't think so Mike... You rock... Thanks for taking the time to do the polishing... ); } What if I want to keep the same name of a file. For example, if the file on the recieve port is foo.xml I want the send port to send the file as foo.xml. Can this be done? Very good! solved my problem I want to use IE Web Control to generate Tree struct of Document. Then, How can I deploy the IE web control? I want to generate Tree by IE Web Control!!!! please help me, I am crazy!!! My E-mail: defrostcn@yahoo.com.cn A great trick, thanks !!! You want to use %SourceFileName% - it's case sensitive.. Thank you so much! Now why didn't they put that in the documentation??? When I elect to display dates with the feeds, they all display as 1/1/2001 You MUST make sure that the assembly is known to the GAC. It worked for us without setting the wss security to medium. Maybe you have some typo in the web.config? Yes, works like a charm! For that one webservice that you want secured but no want to pay $400 dollar for every year. Like a charm! Thanks! Hi Jan, Thanks for the article, just wondering if the code is correct. The line: SetExternalFunctionName(GetType ().Assembly.FullName, "SecurityFunctoids.EncryptionFunctoid", "DecryptString"); refers to the class EncryptionFunctoid, but the class name is DecryptionFunctoid. Also, is there a rererence to the SymmetricEncryption class needed? (Would it be possible to post a ZIP of the project?) I am having a real hard time with custom functoids, are there any pitfalls to watch out for? All the best, Alan You're right! My mistake... I've used a library containing the encryption stuff. Because it wasn't the goal of my post to explain encryption/decryption i've not posted the code of that. If you're intrested you can send me an email using the Contact link. Excellent article! I hope that Christophe starts writing more articles and feed the excellent MSDN.BE site. Something to be proud of MS friends! BizTalk 2004 Custom Encryption/Decryption Functoid (via Jan Tielen) v.gud Jan, We are working with Microsoft and the issue of handling exceptions inside pipelines has come up. We are looking for a flexible way to catch errors in flat files where the serializer fails for instance. Any good ideas beyond using MOM to catch the error and then routing it back through biztalk? Shawn Shawn, you beat me... maybe something for in the newsgroups? how can I assign the value of %SourceFileName% to a variable in an orchestration? I'm hoping... :) Found the answer, place the following in an expression: sourceFileName = IncomingMessage(FILE.ReceivedFileName); Does anyone have any links to see these in action? Also, does anyone have any links to public sharepoint sites?... Unfortunately it looks like MOM is the best bet. They really missed it on this part of the exception management. It is so good in the Orchetation portion but pipelines are terrible. Oh well, something custom coming up :) I face an additional problem. In addition to splitting a message, I want to have envelope fields available in my orchestration, e.g. attribute receiveDate when the envelope contained a root element like <customers receiveDate="...">. Is this possible? It's great, Jan i've search this trick around the world :) Tanks mgi@cdhsrl.it, italy! With C# this work beatifull, but i need something like this in FoxPro 8, some body can help me?? Nice work Jan. I am currently working on an article for MS regarding the creation of custom notification channels (e.g. getting alerted via instant messaging) in SharePoint Portal. The portal has quite another approach to the creation and management of alerts. It would be nice to outline the difference between alerts on the portal and alerts in the WSS sites. If I have the time, I will do a post on it!. I'm looking forward to it Patrick! Thanks for your comments. It's great work. Since I am new developer in Sharepoint portal server 2003, it helps me a lot to know alert functionalty. Thanks. Nach CeBIT und vor Summit wieder ein paar Links und Ressourcen zu SharePoint: Excellent, this is a great we part! It would be nice if you could select more than one name in the drop-down list. It would also be nice if the person did not get sent an email to tell them the alert has been set up, but I suspect this second option is almost impossible. Rohan, your first request can be done; I'll look into it. But as you state: the second request is impossible without writing directly to the config database (I think). Jan, I was trying to install the code as other web part, but it's not working. By the
http://weblogs.asp.net/jan/archive/2005/05/31/409808.aspx
crawl-002
en
refinedweb
Your source for hot information on Microsoft SharePoint Portal Server and Windows SharePoint Services ******* REMOVED SOME HARSH WORDS ON THE SESSION ******* I took some notes, and augmented it with some of my own thoughts and information. ------- SharePoint Online provides:Managed Services on the net- No server deployment needed, just a few clicks to bring your instance up and running- Unified admin center for online services- Single sign on system, no federated active directory yetEnterprise class Reliability- Good uptime- Anti virus-...SharePoint online is available in two tastes: standard (hosted in the cloud) and dedicated (on premises)Standard is most interesting I think: minimum of 5 seats, max 1TB storage.On standard we have no custom code deployment, so we need to be inventive!SharePoint Online is a subset of the standard SharePoint product (extensive slide on this in the slide deck, no access to that yet)SharePoint online is for intranet, not for anonymous internet publishing.$15 for the complete suite: Exchange, SharePoint, SharePoint, Office Live Meeting. Separate parts are a few dollars a piece.Base os SharePoint Online is MOSS, but just a subset of functionality is available. Also just the usual suspect set of site templates is available: blank, team, wiki, blog, meeting.SharePoint Online can be accessed through the Office apps, SharePoint designer and throuth the web services.SharePoint Designer:- No code WF- Customize content types- Design custom look and feel Silverlight: - talk to the web services of SharePoint online.- Uses authentication of current user accessing the page hosting the Silverlight control- See for some discussion on getting a SharePoint web service call workingData View Web Part:- Consume data from a data source - Consume RSS feeds through http GET - Consume http data through HTTP GET/POST - Consume web services - ...- Configure filter, order, paging etc.- Select columns, rename columns, ...- Result is an XSLT fileThis XSLT code can be modified at will. There are infinite formatting capabilities with XSLT. Also a set of powerful XSLT extension functions is available in the ddwrt namespace (See for a SharePoint 2003 article on this function set, see reflector for additional functions in the 2007 version;-)). See for writing XSLT extension functions when you are able to deploy code, so not for the online scenario; this was not possible on SharePoint 2003).Note that the Data View Web Part can only be constructed with SharePoint designer.InfoPath client: custom forms for workflowsWeb services: Can be used from custom apps (command line, win forms, ...), but also from Silverlight to have functionality that is hosted in your SharePoint Online site itself.You can also host custom web pages on your own server or in the cloud on Windows Azure (the new Microsoft cloud platform), and call SharePoint Online web services in the code behind of these pages.What can't be done:- No Farm wide configurations- No Server side code - No custom web parts - No site definitions - No coded workflows - No features - No ... There is still a lot that can be done, but that will be an adventure to find out exactly.... Top News Stories Are You Integrating Properly? (Information World Review) I will stop harping on about Hmm. I found it quite useful. You have a nice summary of the points I took away. I was not super familiar with SharePoint Online so perhaps that's the difference. Nice to see Silverlight options exist. Pingback from office 2003 templates | Bookmarks URL Just got a mail from Troy Hopwood, the presenter of the session who followed up wwith some good during the presentation. He wrote "Sorry to hear you didn’t find any value in my Extending SharePoint Online session. The summary on your blog was perfect though suggesting you were able to come away with the key points." Actually I feel a bit embarrased by my harsh words in my blog post. Maybe I felt that way because I knew to much of SharePoint and its extensibility possibilities and shouldn't have attended the session. For other people it was useful, like for Tom in the reaction on this blog. Pingback from Websites tagged "bloglines" on Postsaver
http://weblogs.asp.net/soever/archive/2008/10/27/notes-from-pdc-session-extending-sharepoint-online.aspx
crawl-002
en
refinedweb
| ?!?!?? December 25, 2003 9:43 AM Scott said: re: ASPElite Not to mention that it doesn't include ALL of the really elite. I know some guys that could program rings around most of the bloggers I read that would never join any kind of user group. :) December 25, 2003 3:07 PM Jason Salas said: Hi Robert, Tell me about it...I've literally had to run at lighjspeed back to my office's workstation and make 7-second fixes to a database thing in the middle of commercials. :) Or, even worse....make a mad dash back to the PC to put something online that I just teased to 80,000 people as, "if you'd like to see the budget bill in its entirety, visit our website RIGHT NOW at". It's many hats to wear at once, but incredibly challenging and super-fun. December 25, 2003 5:16 PM Jason Salas said: Hi Steve, Essentially, what I've seen is people debate the merits of each, and find out which one is more suitable for their needs. It was mainly a "forward-only vs. disconnected" argument. At least now people know. December 25, 2003 5:18 PM Jason Salas said: Just a follow-up...Steve Smith explains how he pulled off this trick with Regular Expressions: December 25, 2003 5:40 PM YvesReynhout said: Try Enterprise Architect by . It can roundtrip C# code and allows for defining your own UML elements & stereotypes. December 25, 2003 8:13 PM Jason Salas said: Thanks! December 25, 2003 8:15 PM Shannon J Hager said: can someone at MS please five Placewear a free Passport SDK and license so I can log in with having to remember ANOTHER long random username/password? Or at least require them to put a "forgot your password?" link on the log in page? Or host the WMV version on your own servers? Or email me a link so I can download it? December 25, 2003 9:52 PM TrackBack said: December 29, 2003 7:21 AM TrackBack said: December 29, 2003 1:52 PM mike said: Couple of thoughts, off the record, so to speak. Per the "cheesy examples" :-), if I read this correctly, you're proposing that viewstate also store a token of some sort indicating the type of the value (?). What happens on the getter? Does it pre-cast the value back to that type? E.g. String myName = “Jason Salas”; ViewState[“aDudeInGuam”,System.String] = myName; // ... String aName; aName = ViewState["aDudeInGuam"] // works DateTime aDate; aDate = ViewState["aDudeInGuam"] // throws ?? As for client scripting, it would be interesting to hear about the scenarios. I don't think typing is the issue as such, since javascript types dynamically. It's really an issue of simply getting to the data in viewstate -- or is it setting? But javascript has access to the page in its client form, with controls populated, so is the issue primarily with being able to access arbitrary, non-control data? December 30, 2003 11:34 AM mike said: * Robust * Seamless December 30, 2003 2:17 PM Scott said: My criteria for deciding to use a DataReader vs. a DataSet usually has nothing to do with the "Forward-only vs. disconnect" It's usually a case of "size of the Result Set vs. What I'm going to do with the Result set(functionality" December 30, 2003 4:35 PM mike said: And what do "we" think about this topic in the docs? December 31, 2003 4:07 AM sux said: that sux December 31, 2003 3:25 PM r sux said: that really sux December 31, 2003 3:25 PM Scott said: Dude, you can!! December 31, 2003 9:26 PM G. Andrew Duthie said: Jason, I use an agent for my book projects. It costs me a bit more than 7%, but I've never regretted it, as my agent is very good at negotiating contracts. Of course, given the current state of the technical book market, that's not saying a whole lot at the moment, but my agent has definitely paid for himself in negotiations, as well as in being the go-between when you have something I'd sooner pass off to him to deal with. That said, I do wish I could use him for contracting stuff as well, but while he's got great connections in publishing, contracting's not his thing. There are, of course, contract placement agencies, but their cut tends to be bigger, often with less personal attention than you get in a one-on-one agent relationship. December 31, 2003 11:41 PM Phil Weber said: Check these out: January 1, 2004 12:48 AM Jason Salas said: Hi Phil, Thanks for the links. I remember hearing about the Carerra Agency, but I don't know how well they did. They're out of San Diego, if memory serves, and they contacted me when I was looking for a change of career about 1.5 years ago. Now that I think about it, when they first started out (about the time of the ZDNet article, circa Jan. '03), they only sourced talent for a set number of firms. They kinda bridged the gap between talent agent and corporate headhunter. I wonder what they're up to now... January 1, 2004 12:52 AM Jason Salas said: Hi Andrew, Good thoughts. I've always done the publishing thing solo, although it's got its ups-and-downs. I get more in the aggregate, but I've also been screwed on a couple of occasions. Thanks for sharing. January 1, 2004 12:55 AM Wallym said: Jason, These "agents" are called staffing companies. They squeeze for a lot more than 7%. The problem is that the majority of developers and programmers don't know how do the basics of running a business and get run over by the people that do. Wally January 1, 2004 5:22 PM TrackBack said: January 2, 2004 9:16 AM TrackBack said: January 3, 2004 4:05 AM TrackBack said: January 3, 2004 4:07 AM Darren Winsper said: I'd love to comment, except all I get is a blank page. January 3, 2004 5:18 AM Shannon J Hager said: It works here, but it's power-point-ish and I didn't make it past the third slide. I don't know that I've ever seen a PowerPoint presentation that was worth posting online without audio. January 3, 2004 10:02 AM Jason Salas said: Hi Shannon, You should try the other slides...the good stuff kicks in after the 5th slide...some interesting architectural approaches used. January 3, 2004 10:23 AM Shannon J Hager said: I find it insulting when people post their presentation outline like this. PowerPoint slides are rarely more than headlines for sections of a talk. If this were actually "a presentation", then the person who gave this did nothing more than read these words, right? Well, I'm willing to bet that he said a lot more than what is posted there. I see no difference than this and someone scanning and posting a handful of 3x5 notecards they listed the key points of their speech on and claiming they posted their speech online. The outline you linked to does look like it could be an interesting one-page document if it were a single Word.doc or html page and wasn't confined to bullet points and the over-sized "see jane run" paradigm. PowerPoint is fine when a presenter is speaking to a room full of people and wants them to see his notes in big bold letters in order to reinforce the ideas, but that just doesn't carry over into any other area for me. "Dick and Jane" books only had a few big words on each page but they at least had illustrations. Darren: The page is IE only, other browser get a blank white page. That may be the problem you're having. January 3, 2004 11:37 AM Scott said: The most interesting thing about OOP is that it allows for a high degree of re-use. The most interesting thing about most OOP code is that it is never re-used. January 3, 2004 1:04 PM SBC said: Scott Ambler (Software Development mag) had an excellent article recently about 'The Right Tool for the job'. It compares various methodologies - (may need a free registration). January 3, 2004 1:18 PM SBC said: I love C# too.. where can one order that cool C# gear you mentioned? January 3, 2004 1:21 PM Nancy Blachman said: This page provides descriptions of more advanced operators than any website or book I have run across (besides my own). Many pages in Google Guide may appear to describe basic concepts, but if you read them carefully, you'll discover helpful insights. Rather than telling you other things that I think are great about Google Guide, here's what users think about Google Guide. These comments and others can be found at It might be unofficial, but it's the best online guide on how to use Google I have ever seen. Pay it a visit. --Robert Skelton, Google Answers Researcher and developer of SearchEngineZ and Google Fan Exquisite and detailed. A treasure for beginners as well as veterans. A serious guide that is easy to use. Highly recommend. --Community Walla, Israel This is an excellent tutorial to assist folks of any computing background get acclimated to the vast resources Google has to offer. --Bruce Pechman, Vice-President of the Atlanta PC Users Group and President of PowerPartners Computer Services A comprehensive and up-to-date guide on searching Google successfully. --Kolyom, Israel January 3, 2004 1:26 PM Nancy Blachman said: Here's the URL I intended to include along with my previous message. This page provides descriptions of more advanced operators than any website or book I have run across (besides my own). January 3, 2004 1:30 PM Mike Cole said: Jason did a good job putting this together. I really like reading this kind of stuff… I could be critical, perhaps, of 2 of the design patterns covered, however... ** Single Responsibility Principal ** I don’t like the limitation of 1 responsibility per object. I view objects as a collection of cohesive responsibilities. Moreover, an object can have hundreds of responsibilities as long they are cohesive, and we don’t have overriding reasons to decouple them. I believe following this principal can lead to an anti-pattern (whose name escapes me! I think – “Ghost”) which refers to systems made up of 100s of tiny objects which, while flexible (in a pick & choose way), do not provide any type of easy-to-consume façade. Now, as far as you example goes… I would agree that the serialization of the object should not be a responsibility of the customer. Not because a customer object shouldn’t know how to serialize itself, but because by decoupling this responsibility we could address serialization in a generic way that could benefit all our business classes (not just Customer). You know what, though? Some of my differences may be about semantics… I think the “packages” you describe are the “façade” that wraps together your single-responsibilty classes… So I think you have built into your process a solution to the anti-pattern I describe above… certainly, I can see some advantages. ** Open-Closed Principle ** I don’t really agree with the use of inheritance as it was demonstrated in the “Open-Close Principal – Refactored”. That is, I don’t agree with the reasoning cited for promoting Customer to a base class. I would suggest that “fear of breaking the original class” is almost never a good reason to introduce inheritance. Perhaps, an intermediate developer working on an unfamiliar, and very mature system – might have reason to make such a move. I think the question to ask is, “if I knew about Loyalty points when I first designed this system, would I have made customer a base class?” If the answer is no --- then you should not refactor it in this way to support your new requirements. What happens when you have a new requirement for a “Preferred” customer? Do we now have classes like: Customer, LoyaltySchemeCustomer, PreferredCustomer, and PreferredLoyaltySchemeCustomer? Now, don’t get me wrong – this set of classes might be completely appropriate – for example, if the responsibilities of a “preferred” class were so fat they really needed to be separated. However, I would suggest here that fear of breaking the original class has resulted in an inheritance scheme which may not be appropriate – and may have resulted in a less adaptable and more error-prone system. To summarize, I just think that inheritance is often an over-used “hammer” – I would think twice about introducing it, unless an original design (based on current knowledge) would have used it as well. Just my 2 cents... enjoyed the document. January 3, 2004 4:54 PM Edwin said: I enjoyed the document. Refactoring is one thing that is beginning to interest me. It seems to help make code cleaner and more maintainable. January 4, 2004 8:44 AM TrackBack said: January 5, 2004 2:38 PM Robert McLaws said: There are too damned many polls and things going on to screw up the system. Make it simple and straight forward like the SuperBowl and the NCAA Basketball Playoffs and the NBA and every other sports league on the planet. They try to convolute it with all this ither crap, and look where it got them: nowhere. HOLY CRAP it doesn't need to be that complicated. January 5, 2004 8:12 PM Darrell said: The problem with opinion polls is people see patterns that are not really there (is it a natural attempt to create order in the world? I don't know). Computer polls remove this, but like you said do not take into account the fact that teams play differently on different days. Thus if team A beats team B, and team B beats team C, that does not mean that team A will beat team C, the fundamental fallacy in the computer ranking. I forgot who showed it, but someone created an example team matchup where, based on player ratings, A would beat B who would beat C, and C would beat A. Interesting. January 5, 2004 11:05 PM Scott said: two words, like it or lump it. "Playoff system" January 5, 2004 11:09 PM Phil Scott said: This isn't why college football is so screwed up, it's why college football is so great! Months after the season is over, fans can get in heated arguments over USC vs LSU (vs Miama of Ohio? Boise St.?) The Superbowl is almost an after thought of marketting and production for most on the US (cities of teams playing excluded of course). The level of debate, the human element and all that goes with it makes College Football one of the most intense and exciting things to play and be a fan of. You are looking at a system where 25% (at minimum) of the fans and players go out winners each year. I don't know, maybe this is crazy talk from an University of Louisville fan who is resigned to be happy to simply make it to a bowl each year and hope for a win. I like my basketball with the tourney, and I like my football with it's wacky bowls and quirky rankings. January 6, 2004 12:26 AM Russ C. said: Javascript popups .. Grr Oh and those little scripts that tell you your IP address and any other normal information that are supposed to make the owner look like an 'leet haxor'. I hate Javascript slideshows ... If you want to show me a series of picture ... Please give me a forward and backwards button ! January 6, 2004 3:40 AM Russ C. said: Oh forgot to say, regarding Copy and Paste, Whats wrong with using Ctrl+C / Ctrl+V ? January 6, 2004 3:46 AM Jason Salas said: Hi Russ, Thanks for commenting. The site basically negated the copy-and-paste functions, so it basically copy-protected its content, so the keyboard strokes wouldn't work. January 6, 2004 3:50 AM Russ C. said: Have you got the URL for this site ? Iwouldn't mind trying to see how they did this ? Purely for research purposes you understand ... January 6, 2004 3:59 AM Jason Salas said: Here 'tis: . The JavaScript file is at: /js/nocopypaste.js January 6, 2004 4:17 AM Russ C. said: Thanks for that .. and Yes , that is dammed irritating :) January 6, 2004 4:24 AM Jason Salas said: Yep...but hey...it does what it's supposed to do. Mission accomplished...to my dismay. :) January 6, 2004 4:37 AM Darren Neimke said: Clocks that spin! January 6, 2004 4:52 AM David Bossert said: Hi, Disabling active scripting in IE's security settings does the trick. What bugs me is when they use redirects to require you to have sripting enabled just to get to the site. January 6, 2004 5:31 AM Dr. Y said: Hey, Using firebird does the trick :-)) January 6, 2004 6:44 AM Scott Galloway said: I love stupid little games where you get to build stuff and wage war...so Settlers (1-4), Rise Of Nations, C&C Generals, Emperor Battle For Dune are favourites... January 6, 2004 8:18 AM Kirk Allen Evans said: While everyone seems to want closure on the season, supporting a single playoff system, I love the duality of the BCS and polls. I love to watch ESPN, CBS, and listen to sports radio on a Saturday afternoon and listen to the rankings of each team. Georgia was #6, #5, and #4, all on the same day (USA Today Poll, Coaches' Poll, BCS / AP poll). If you give credibility to informal (yet recognized and predictable) sources such as the ESPN "College Gameday" rankings by Corso and Herbstreit and your team could possibly represent the entire top 5 teams. Something that the BCS does correctly is the strength of schedule indicator. The strength of schedule variable is the indicator making it possible to have A defeats B, B defeats C, and C defeats A. The BCS is not some arbitrary points system, it is very statistical in nature. The coaches' poll includes the intangible and nameless component, sometimes referred to as "style points." Georgia beat Tennessee at Tennessee... it takes a lot for a team to overcome 90,000 fans shouting "ROCKY TOP!" But Georgia didn't just beat Tennessee, they manhandled Tennessee. The BCS considers a win, but does not consider the amount of trouncing that occurred, how effective the defense was at shutting down a consistently succesful running game, how the offense improved over the past several games. When LSU beat Georgia, some called it a fluke and said Georgia was limping. I watched that game, and LSU was the better team on the field, especially defensively. At the time, coaches' polls put Georgia at #4, as did BCS. When LSU met Georgia again for the SEC playoffs, LSU had drastically improved both in offense and defense and walked over Georgia. Georgia was a much better football team than LSU had seen earlier in the season, but LSU had improved at a higher rate than Georgia did. And it is that intangible factor that makes the coaches' poll an essential part of the equation. The BCS might indicate that a team improved considering strength of schedule, but the coaches' poll indicates just how much a team improved considering how good the team they were playing is. Besides, if we boil it all down to a playoff series, how does that improve the system? The only improvement that I see is that we get to see a couple more great games, but it leaves the question open for who the best team is. Should a wildcard game be included? Can a team be in a playoff game that did not win their conference? January 6, 2004 8:24 AM Dennis v/d Stelt said: Definitly, definitly Metal Gear on MSX Metal Gear Solid is okay, but Metal Gear definitly ruled my world! Other great games were Usas and... oh, gotta go... really important meeting! Hahaha! ;) Konami rules the world, as does MSX! January 6, 2004 9:04 AM Tim Marman said: Of all time? That's a tough one. There's two games back on C64 which I've wanted to play again for the longest time: Mail Order Monsters (one day, I WILL do a remake if no one else does!) and Bruce Lee. I had an NES, SNES, and N64, but I was usually more into PC gaming (I always had a better computer). The Zelda series has always been a big favorite. Baseball Stars on NES - I love games that you can "build" with, so even for sports games where you could improve your team over time and there was some GM-esque strategy. For similar reasons, the original Civilization, the original Railroad Tycoon... Hero's Quest 1: So You Want To Be a Hero (what later became the quest for glory series). Police Quest 1, Gold Rush.... In fact, me thinks I will be going home to play one of these classics now that you put it in my head. Moving forward, I wasted a hell of a lot of time in college playing Goldeneye and Red Alert 2, among other things. Right now I have Madden 2004, C&C Generals and Civilization III installed on my machine. Oh and Rise of Nations, but I haven't given it a proper playing yet. Seems promising, though - it's like a mix of an RTS and Risk. I'm sure I'm missing a ton.... but. January 6, 2004 9:12 AM Tim Marman said: David beat me to it, but if you disable scripting you're all set :) January 6, 2004 9:17 AM Josh Robinson said: Still have to say that even with all the incredible games out now, I have spent more time playing the original Tecmo Bowl on NES than any other game. I used to have a whole notebook full of codes for each game of the season with each of the teams. Modern day replacement for Tecmo Bowl - ESPN NFL Football 2k4. January 6, 2004 9:18 AM OmegaSupreme said: My favourite pc games are Battlefield 1942, Americas Army and Call of Duty. On the PS2 its pro evolution soccer 3. Cant wait for half life 2 either, thats gonna rock. January 6, 2004 10:05 AM Ed Kaim said: My favorite RPG was "Final Fantasy I" (in the US series--I never played the Japanese versions). It was very simple and straightforward, but flexibile enough to feel like I was playing. RPGs I've played since (especially in the FF series) are all movies with a little bit of button pushing every now and again. I'm currently protesting the whole FF series because they've released a game with two version numbers ("Final Fantasy X II"). I also loved a game called "Romance of the Three Kingdoms III" by a company called KOEI. It was a turn-based empire strategy game that included battles with hexagonal, turn-based strategy combat. They're probably up to version 12 by now. I was a big fan of Quake II. Even though games like Halo and Counterstrike have surpassed it in graphics, the simple gameplay of using a keyboard and mouse can't be matched with a console controller. Currently. I spend most of my video game time playing "NHL Rivals 2004" on Xbox Live. It's the best sports game I've ever played, but I guess I'm biased as a hockey playing Microsoft employee. January 6, 2004 11:51 AM Scott said: Civ 3, it's the only game that gets consistantly re-installed every time I rebuild my machine. (hey, it's Windows!) Well, now that I say that I notice that I've got Bejeweled installed on my PC and my Treo and I play it all the time going to work. hmmmmmmmm. January 6, 2004 11:52 AM Josh said: I was addicted to Superstar Ice Hockey (Mindscape) for C64. It was way ahead of its time. It featured multi-season franchise play (trades, drafting new players) which has just been added to most console games in the last couple years. It also had a great feature (depending on your perspective) that kept track of when you started a season game. If you lose the game and turn off the computer before it had a chance to save the results, it would detect that and count the game as a forfeit. And you couldn't just turn off every game in a season in hopes of getting last place (and therefore the most "improvement points" for draft picks, etc) because forfeits counted against your improvement points. Today's games give you the "option" to save your results, meaning you can go undefeated every season if you so choose. Oh yeah, and I also spent countless hours on: Kings Quest 1 & 2 Empire! Civilization 1-3 Bard's Tale 1-3 Wasteland (pre-cursor to Fallout) Railroad Tycoon The list could go on. I'm still a big videogame fan, but I don't play them nearly as much as when those games were in their prime. My most recent addictions were Medal of Honor online multi-player (PC) and the Tony Hawk series (PS2). January 6, 2004 1:01 PM Josh said: Or how about File | Save As... (Html). Sure, its not as quick as CTRL-C, but you can still get at the content (by opening the saved file in a text editor). The point being, they aren't really "protecting" their content, so why bother? It's just annoying. January 6, 2004 1:14 PM Andy Smith said: Every protection scheme I've ever seen is easily foiled by attaching a script debugger to IE. Sure that's beyond most people... but sure isn't going to stop me for more than a half a minute. January 6, 2004 1:39 PM Jason Salas said: Hi Ed, Thanks for sharing! The Finaly Fantasy game that Suqaresoft put out for the oroginal GameBoy was way beyond its time. I got it for Christmas in '90, and I didn't as for it, and it blew my away at how deep it was. I've also got Final Fantasy X for the PS2, and it's amazing. Gran Turismo 3 ain't bad, either. :) January 6, 2004 4:15 PM Jason Salas said: Hi Josh, Tecmo Bowl ruled! I played the 16-bit Ninetendo copy all the time. January 6, 2004 4:16 PM Paul Glavich said: My Favs :- Mouldy Oldies (ie. from way back) include :- Grid Runner by Jeff Minter on Commodore64 Elite on Amiga Current titles include :- Halo (Xbox) January 6, 2004 4:18 PM Jason Salas said: Hi Tim, Gosh, the Bruce :ee title you mentioned makes me remember "Kung Fu" for the 16-bit NES, which was based on the same storyline as Bruce's movie "Game of Death" (I think...whichever one it was that featured Kareem Abdul-Jabbar). Classic! January 6, 2004 4:18 PM Jason Salas said: Anyone into Dark Forces for the PC (although I think they later made console games out of them), the Doom knockoff with the Star Wars theme, or WarCraft/SratCraft? I've only ever played WarCraft on a Mac, but it's hours of fun. January 6, 2004 4:19 PM Jason Salas said: Hi Paul, Wow! I rememebr GridRunner! Almost makes me long for the days of the original Tron and ExciteBike. January 6, 2004 4:21 PM Paul Laudeman said: Battlefield 1942 and the arcade classics from yesteryear. January 6, 2004 6:09 PM Shannon J Hager said: chromeless pop-up ads are my least favorite JS trick. I don't mind when people disable the right-clicks as long as they don't give me that "haha! you can't do it!" dialog. That kills me. Anything that scrolls automagically usually bothers me, too, mostly because it is rarely done well. January 6, 2004 7:50 PM mike said: Another kind-of javascript trick is redirecting so that you can't use the Back button, or more accurately, if you do, you end up on the same page. Technically not necessarily a javascript trick (though it can be), but way annoying anyway. :-) January 6, 2004 10:59 PM Cory Smith said: Let's see, how about the following (honestly, this started as a short post ;-) ): Amiga - Jumping Jackson Killing Game Show Beast and Beast II Atari 2600 Breakout Space Invaders Missle Command Sega Genesis Sonic Series TurboGrafix 16 Bonk Sony Playstation Tekken PC (yesteryear) Bouncing Babies (a really old one ;-) where you are a fireman saving babies bouncing out of a window onto one of those catcher things three times into the firetruck - silly, but was fun) Black Cauldron (one of the first Sierra games that made me say Wow! Adlib Sound! - probably the first game on the PC that made me think games on a PC could be great.) Commander Keen (a lot of time wasted on this game) Raptor (one of the coolest, even by todays standards vertical space shooter) Wolfenstein (had to be on the list) Doom (well, you know, had to be on the list) Duke Nukem (was just downright fun with all of the vocals) Need For Speed (first racing game that was fun) PC (a little newer) Quake Series Warcraft Dungeon Keeper Age Of Empires (series) PC (today) Halflife/CounterStrike Halo Tron 2.0 Neverwinter Nights (with expansions) Dungeon Siege XBOX Halo (duh!) Dead Or Alive 3 Project Gotham Racing 2 Ghost Recon / Island Thunder Magic The Gathering: Battlegrounds Rainbow 6 III I'm amazed at the responses you've gotten so far with the lack of XBOX game references. The games/graphics/sound are extremely impressive. Only one other person mentioned it. On top of this, XBOX Live is probably one of the most impressive game 'technologies' that have come along in a long while. It's just too easy to jump on, find someone to play against (don't get me wrong, it can use some improvements in the friends management department, but overall, it's pretty cool). January 6, 2004 11:16 PM Nicholas Sabinske said: DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! DDR! Dance Dance Revolution, on the Jap PSX and PS2!!!!!!!!!! I'm the second oldest person I know that's still into DDR, but I do owe it giving my lazy butt some physical activity. Anyhow, I'd have to say the ultimate SYSTEM for developers is the Gameboy Advance, which has a great games library and can sit there on your desk next to you every time you need to take a break ;) January 7, 2004 10:38 AM Josh said: Having never used XBOX live, and honestly scared away by fees, what makes it so much better than the free (Gamespy) alternatives we are used to with the PC and PS2? (not trying to start a debate, genuinely curious) January 7, 2004 11:25 AM Cory Smith said: XBOX Live... consistant interface and performance across all titles. At $50 per year, not a bad deal. Also, since all games are using the same service, it drives the whole platform forward and ensures that cheating will be held at bay. Any vulnerabilities found, thus address, can take effect across all of the games. Gamespy does offer a consistant interface, so to speak, but not all games use the same mechanism. With PS2, each game is responsible for their own online content, connection and delivery. XBOX Live puts all of this into one package and ensure that you are getting what you paid for when you see the XBOX Live logo ;-) For the price of one game per year, I don't see that as being a bad thing. January 10, 2004 12:22 AM Jay Goldsmith said: Some of these comments here seem to be criticisms of PowerPoint rather than the material itself. I had no problem at all following the content and the examples, and can't see why you would need someone to actually stand up and talk about the slides. They look pretty self-explanatory to me. If you look at the UML tutorials on the same site you'll see very few bullet points and lots of diagrams and sample code. I suspect Jason's avoided using UML here because he doesn't think a lot of .NET developers will know it - not that they need to understand design principles or refactoring. The OCL tutorial and UML for Customers is good stuff :-) Never seen anything like that before. The advanced stuff like Test-driven Analysis & Design is a new idea, too, I think. January 15, 2004 10:28 AM Jayson Knight said: i played the original Legend of Zelda on NES for a small percentage of my life from age 9 on. metroid was also a big hit w/ me. currently Halo has dominated most of my console experience, and i can'w wait for Halo 2. i still feel obligated to drop a quarter or 2 in whenever i see galaga :-) January 15, 2004 11:22 AM Paul Wilson said: Seems like a case for clustering. January 17, 2004 8:39 AM Christoc said: I lived on guam for a few years, when I was 8-10, 86-88 I believe. For some reason I knew the submarine ride was closed, but I don't know why, as I haven't been to Disneyland since 95.... Maybe it was closed back then, I can't recall. January 21, 2004 1:17 AM Christoc said: oh, BTW :) Congrats on the MVP! January 21, 2004 1:18 AM Jason Salas said: Thanks! Yeah, it was a real polar day...happy then sad! Oh well, the world could be worse...maybe EPCOT's in the future. I haven't been there since '87! :) January 21, 2004 1:22 AM Brian Desmond said: Congratulations & Welcome aboard (no pun intended)! January 21, 2004 1:23 AM Jason Salas said: Clever! Thanks Brian! January 21, 2004 1:48 AM Doug Reilly said: Bummer about the Sub ride, but congratulations on the MVP! January 21, 2004 8:01 AM Rob Chartier said: Awesome about the MVP award Jason! January 21, 2004 12:50 PM TrackBack said: January 21, 2004 8:16 PM Chris Frazier said: This sounds like an opportunity to build your own submarine...you do everything else, right? :P Congrats on the MVP award, Jas. January 22, 2004 10:30 AM Jason Salas said: Hahaha! Thanks Chris. Yeah, I guess I'll be having to recreate it if I want to see the sub ride again. Congrats to you, too! January 22, 2004 6:34 PM Wallym said: At least they didn't close DisneyLand like WalleyWorld.......... Wally January 22, 2004 10:11 PM Phil Weber said: Have you seen SNL's "Behind the Music" spoof featuring Blue Öyster Cult? ("I've got a fever... and the only prescription is more cowbell!") January 24, 2004 3:22 AM Jason Salas said: Hi Phil...yeah! That's a classic skit with Christopher Walken as "Bruce Dickinson" which they keep repeating, which was an off-handed reference to Iron Maiden's legendary frontman. Also, the Simpson's spoofed BTM, doing "Behind the Laughter". Did you see that one? Hilarious. January 24, 2004 3:27 AM Dave Chan said: I agree. I found these very useful. January 24, 2004 5:50 AM Ajay Juneja said: I personally remember watching the original "Triumph of the Nerds" in 1996 instead of studying for my U.S. history final! I still got the second highest grade in the class on my U.S. history final somehow. But man, that thing was so valuable for me it's not even funny. I got to see the personalities of those who dominate silly valley (as I affectionately call Silicon Valley) -- and it's a great historical documentary. I read Nerds 2.01 when it came out in 1999-2000. Knowledge of your industriy's past helps one figure out how to strategize one's future. January 24, 2004 6:28 PM Ricky Dhatt said: Admittedly, I haven't seen several of the episodes you listed, but for me the quintessential Behind the Music episode was TLC. January 24, 2004 9:36 PM Manjeeva said: Are u interseting Cricket ? January 25, 2004 6:09 AM mike said: I have Nerds 2.01 but have not read it. Now it's next. :-) Some others I have read that I think everyone in the industry should read: Accidental Empires by Robert Cringely Catty, sometimes hilarious history of the PC industry, highlighting many of the occasionally odd characters who made it happen. This is the basis for the program "Revenge of the Nerds," but Cringely is a funny writer who's well worth reading in the original. Hackers by Steven Levy The definitive history of the rise of hacker culture. Although the book is old by now, he describes the ethos that eventually gave rise to what we now call the Open Source movement. Insanely Great by Steven Levy History of the Mac, which captures the phenomenon of Apple. A Brief History of the Future by John Naughton Account of the invention of the Internet, with clear and interesting stories about the development of the ARPANET, packet switching, USENET, etc. Go To by Steve Lohr History of programming, which is recent enough to touch on .NET. Code by Charles Petzold A layman's guide to how software works. The Code Book by Simon Singh History of cryptography, from ancient times through Enigma and into public/private. The Chip by T. R. Reid History of the invention of the microprocessor. We take it for granted now, but this book describes how remarkable it was that you could put an entire computer on one chip. Very readable. January 25, 2004 1:46 PM Jason Salas said: Hi Manjeeva, Not really...I'm more interested in American football, baseball and basketball scores. January 25, 2004 5:31 PM Brian Swiger said: It's not down...but they are requiring partnership with them. You have to email or call and they will send you a differing URL with a sponsorship number as a querystring value that you pass to get the valid XML. The tech I spoke with did talk of bringing it down and replacing it with pay-only service. That'd suck for me. January 26, 2004 2:54 PM TrackBack said: January 28, 2004 4:00 PM TrackBack said: January 28, 2004 4:37 PM Dennis v/d Stelt said: Here's my hug! And we get a bottle of cheep wine ;) January 29, 2004 7:39 AM SBC said: Quite a few of my clients have bought me lunches and gift certificates, the latter, which I promptly exchanged for books.. :-) January 29, 2004 7:51 AM Paul Wilson said: I worked for a consulting company about 5 years ago that pushed me to first get my MCSD. I was the very first one for this company in their Atlanta office, and the first overall on the new MCSD track of the time. Again, they pushed it, claiming that it was super important, so I did expect to really be thanked for it. What did I get -- movie tickets! What did they get -- I immediately started looking for other jobs and got my recognition (and raise) elsewhere. January 29, 2004 8:12 AM Brad said: That's an easy one ... they don't. We slog on in relative obscurity, being assigned more work and more responsibility for which we have demonstrated ability, but that's about it ... January 29, 2004 8:18 AM Russ C said: Congratulations on the MVP but reading your blog just made me realise that I've never heard the term 'core competency' outside of a Scott Adams book until today :) January 29, 2004 9:11 AM Jason Salas said: Hi Russ, LOL! That's 2.5 years in graduate school for ya... January 29, 2004 9:16 AM Jason Salas said: Hi Brad, Yeah..the tecnhies get more work, while the marketing morons continue to wear sunglasses inside, take 3-hour power lunches and play golf. :) January 29, 2004 9:22 AM Jason Salas said: One thing I do know about getting technical achievements...I may not get recognized for it, but they sure are quick to push me to doing talks and sales calls. Funny... January 29, 2004 9:26 AM Ian Cooper said: Well I'm working my way towards an MCSD.NET and I know of others that are. I think there has been a delay caused by the depressed market - why get the certs if there is no one to sell them to. .Net arrived when there were precious few new projects for folks to cut their teeth on. But that seems to be changing, new projects are emerging and certs seem once again to be a valuable addition to a resume. So I think pronouncing the death knell for certs is a little premature January 29, 2004 9:53 AM Derick Bailey said: You guys are lucky. I get a "why didn't you get this done last year?" and a "sign this form that says you won't leave or quit for the next year, or we'll make you pay us back at the highest possible interest." January 29, 2004 10:11 AM Derick Bailey said: correction: "won't quit or get fired for the next year" January 29, 2004 10:11 AM Scott said: News flash Jason (har har, pun intended) :) Certs are and always have been useless! Read on true believers! January 29, 2004 11:24 AM TrackBack said: January 29, 2004 2:36 PM Scott said: Derick: That's exactly why I never let my work pay for any training, unless they force me to go, or certification tests I take (only happened 3 times). I knew a guy who was a CNE and refused to put that on his business card because the company he was with wasn't paying him enough to get the extra cred his cert would have given them. The CNE cert was one of the few that required you to actually KNOW something non-trivial. January 29, 2004 3:01 PM David Eichner said: Can you give me the contact info to get a sponsorship number? January 29, 2004 4:43 PM Jason Salas said: Hi Derick, I've heard & read that IBM had a serious turnover problems becauses of this...people got jobs there and just worked there, and as soon as they got the comp,any to pay for training/certification, they split. A former empployer of mine made all staffers sign an agreement that said that we had to agree to work there as long as it took to get trained or certified, if the company provided for it. January 29, 2004 5:01 PM Jason Salas said: Hi Scott, Good point. Many ocmpanies I know could care less if the people they employ have credentials, but they do pay for certification and training to use their people as sales drivers. January 29, 2004 5:03 PM Jason Salas said: Hi Scott, I read your counterpoint, and I really enjoyed your thoughts. Sure, the MVP program has taken off due to more people being awarded it than in the past, and with greater frequency (it apparently used to be only a few people selected annually, now it's several quarterly, if memory serves). It’s on paper an acknowledgement of one’s commitment to using and sharing knowledge of Microsoft products, and indirectly acts as a way of ensuring external promotional push, which is an invaluable asset in any industry. Not a bad idea, and a classical approach to marketing dominance we’ve seen time and again from all types of businesses – I’ve been doing this for years with having people outside my newsroom link to stories on our site to get us free exposure. There are pros and cons to which is better/more superior/carries more clout – the MVP program or genuine Microsoft certification – or if either is even worth it in the real world. Foremost among these is that the MVP is, as you pointed out, by selection only from Microsoft staffers, so one can't buy their way in like one can with certs. The MVP program is, when you think about it, that same model in reverse – Microsoft is not only rewarding the commitment of those who use its stuff, but it’s also securing their brand loyalty. However, you’re right...this very reason makes the program suspect for political favoritism. There's certainly a justifiable argument that those granted MVP status (myself included) are MS kiss-ups and not really gurus. Still (and granted I can only speak for ASP.NET), I’ve yet to meet a person who’s been selected as an MVP who’s not what would be considered an expert and professional, being extremely knowledgeable, competent and conducting themselves in a mature, civil, friendly, helpful manner. So there is some distinction in the designation. But in the end, you’re right – certs and credentials don’t really make the developer, its skills and the ability to use them that separates the men from the boys in programming. In the meantime, it’s all about stacking Ye Olde Resume in the hopes of opening a door with a good company or landing a sweet contract. At any rate, keep up the good work...I enjoy all your blog posts...especially this retort! Write me back at jason@kuam.com if you want to rap about this some more, or if you post other comments to this subject. I'd love to read them. :) Jas Note: I got certified several years back and was just awarded MVP status myself a couple weeks back, and it's interesting to see the differences between the two programs first-hand. However, in either case I’m no more “Rah Rah, Microsoft” than I have been (or have not been) in the past because of it. And neither makes a developer that have them superior to those who don't. January 29, 2004 8:48 PM Scott Mitchell said: I've gone through the interview process twice - once for an internship and once for a full-tiime job. The vast majority of the questions I got were technical, and not brain teasers or riddles like those listed. There were some typical college CS classes, like, "Write the code to do a postorder traversal of a binary tree," "Given an list of words as an array of strings, write code to find all the anagrams," "Write code to implement C's atoi() function," and "Write code to determine if there exists a cycle in a linked list." I also had some typical questions - where do you see yourself in 5 years, why do you want to work for this company, blah, blah, blah. Both interviews, though, were a boatload of fun - one up in Redmond and one in Silicon Valley. >assume you have to market a new alarm clock for the >deaf. How do you do it? "Market" or "create?" If it was market, I dunno, run some ads in "The Deaf Quarterly," and on deaf-related sitcoms. >suppose two spaces occupy two distinct values. I know those words, but that sentence makes no sense. A space occupies a value? Usually things occupy spaces, not the other way around, no? >how can we make Word better to lawyers? What's funny, my lawyer uses some ancient version of WordPerfect that is DOS-based, he has to drop to the command line to run it and gets 80x25 resolution. No WYSIWYG, no mouse. But, man, he knows the keyboard shortcuts like nobody's business. January 30, 2004 12:50 AM Jason Salas said: Hi Scott, Yeah, part of the challenge was sussing out what was trying to be said. I think you're right about the "create" over "market" question, but I do remember vividly one hiring manager asking about spaces occupying values, which I took to mean variables. It was basically a math question to have X, Y and Z, and have Z at some point assume X's value. Or something like that. I found the answer in a book I bought in the Seattle-Tacoma Airport "How to Get A Job At MIcrosoft"...on my way back home after the interview. :) January 30, 2004 12:55 AM Uffe said: The company where I work dont like to pay for the courses and tests nor the cert itself anymore, cause they believe ( the new boss anyway ) that " the only reason that we want certs is that it`s more easy for us to get another job" . Wouldn`t you ? January 30, 2004 1:14 AM Jason Salas said: Good point Uffe, Why train when it's just setting you up for departure? January 30, 2004 1:18 AM M. Keith Warren said: I got asked to describe 5 server controls the ASP.NET team should add; then once I was done answering I had to tell the interviewer why they should not add them. Another memorable one...Design an object model for poker :) January 30, 2004 1:27 AM Jason Salas said: Hi Keith, I forgot about that one! I was also asked to describe a new server control for the ASP.NET team and break down the object model, and add in new features. The big challenge was that the hiring manager kept shooting these rapid-fire conditions at me while I was in the middle of my sentences. January 30, 2004 1:31 AM M. Keith Warren said: We must have been interviewed by the same woman! She kept interrupting me trying to guide (or misguide) my path of reasoning. Twas frustrating but considering the position, very neccessary. January 30, 2004 2:19 AM Jared Evans said: I was in talks with someone over at Microsoft about a possible interview and she had a tip for me: Read this book: Programming Interviews Exposed: Secrets to Landing Your Next Job There would be questions related to thing such as insert/delete a node or reverse a singly linked list. Also, you should be able to do the same for a doubly linked list. (To reverse a doubly linked list, just swap the next and previous pointers -- it's a trick question.) Other types of questions are to determine if a b-tree is valid (nodes on the left are less than nodes on the right), to reverse a text string, etc. January 30, 2004 2:36 AM TrackBack said: January 30, 2004 2:43 AM M. Keith Warren said: ...another popular book... January 30, 2004 3:00 AM Scott Galloway said: Personally, always found it about as funny as a nail in my hand...but I guess my cultural perspective is different...I don;t think it's quite as appreciated in the UK as it seems to be in the US January 30, 2004 7:05 AM Jason Salas said: Hi Scott, Yeah, I've heard that Benny has a bigger fan club to the left of the Atlantic. :) January 30, 2004 7:35 AM Cameron Reilly said: a few of us were talking recently about how benny died a broke, lonely old man... he was dead for a week or something in his flat before someone noticed... January 30, 2004 7:43 AM Jason Salas said: Hi Cameron, I've read that despite his success, at the time of his death Benny didn't own his own house or car. January 30, 2004 7:45 AM Scott Galloway said: But he wasn't broke...he left £7.5M, he was just really mean ( ) January 30, 2004 8:09 AM Scott Galloway said: I had a couple when I interviewed - One was about the number of cars going over the golden gate bridge, another was about telling which was the heaviest out of n balls in the fewest moves. I had a security one about secure key exchange (public / private key stuff) and an odd one about 'the perfect alarm clock'. Unfortunately, I was sick and jet-lagged, they may as well have asked me to count up to 10 in sumerian for all the sense they were going to get from me (I spent the next 2 days in a Seattle hospital) January 30, 2004 8:47 AM Tim Marman said: Or you could have a bunch of gay guys make-over the ne.... oh, right. Has to be original. I hate reality television as well, but the one that let me down most was Joe Millionaire. As I've said in the past - they should have found a Bill Gates look-alike and brought HIM on there. It's like put Halle Berry or Heidi Klum etc up there and tell me she's rich. Then tell me she's not. Big deal, she's still hot, you know? January 30, 2004 10:21 AM Scott said: I feel dirty knowing this but.... They always make sure that the reality people don't have access to a TV or radio and limited access to the internet. Every time I see that I think "that would never fly with a geek in the house. He/she would have some kind of proxyless WiFi connection leeched off the house down the block using a Pringles can." January 30, 2004 12:18 PM Scott said: "I've got a lovely bunch of coconuts." January 30, 2004 12:47 PM Scott said: Hi Jason, Thanks for reading my dinky little articles. :) My point, which I may or may not have made, is that certification is being replaced by the MVP program. I hope my retort didn't come off as slagging the MVPs. I have to say that MOST of the MVPs I've seen blogging would get my respect anyway. They don't need a title. I didn't need to see the little MVP logo at Scott Hanselmans site to know he knew his stuff. I knew it when I downloaded his TinyOS sample. I can't fault them for accepting the title, they get swag with it. Swag is like air to a geek. :) That being said, it seems to me that there is a wide margin for questionable MVP selection and a lack of clarity in what being an MVP actually means. For example, Chris Pirillo was recently made an MVP for "Digital Media". ( ) I have no idea what that means? I should contact Chris if I'm having trouble getting WMP to play my MP3's? If I wanted advice on internet marketing, Mr. Pirillo would be one of the first people I would think of as a resource. If I wanted advice on SVG, I think DonXML( ) would be a better choice. I've noticed questionable code and design posted on some MVP blogs which has made me think, "How in the heck did THEY get MVP status? What does MVP really mean?". (a an aside I see some of the same things in the Microsoft blogs) I realized that MVP doesn't mean any more or less than the old MS certifications. Some people seem to actively campaign for the title, which tends to give it a popularity contest type feeling. Like I said, it's a win-win situation for MS. The carrot is dangled for people who aren't participating and the people that win have to keep participating to maintain the status. It's a lot cheaper than the cert program. January 30, 2004 1:18 PM Jason Salas said: Hi Scott! No sweat, and like I said, I agreed with the majority of your rebuttal, and you're right in the end. And no one's more surprised that they landed the MVP than me. Keep up the great work and great thoughts! Jas January 30, 2004 5:53 PM Jason Salas said: Hi Scott, Yeah, my jetlag had a little to do with the rapidity of my responses...coming from Guam, I'd been on planes for 19 hours straight, and then got into Seattle at 5AM, with the interview at 7AM. But nothing like yours. Hope everything worked out. January 30, 2004 6:05 PM Jason Salas said: Hi Scott, Not like I want to talk bad about a man that gave me so much laughter, especially after he's passed on, but there was a big controversy about his politics and the fact that several people thought he was a bigot. January 30, 2004 6:09 PM Alex Lowe said: My old company put out a press release to the Associated Press. They also included a blurb about the award in a "Meet the company" presentation they did with new customers, etc. January 30, 2004 7:19 PM Scott McCulloch said: My Company uses it in all there marketing brochures, presentations, and uses it for requirements into the various partner programs at MS. The only help I got towards the certification was the company paid for my exams after I passed each of them, which is okay since they are so expensive now. January 30, 2004 11:03 PM Tejas Patel said: Jason, sounds like a very good idea indeed. I think you should or someone else should approach a techie producer and director and start something up. Very nice idea indeed. January 31, 2004 1:04 AM Tejas Patel said: Although I cannot say what the task was and are assigned on a daily basis, I can say that my Boss allways comes up with work and which is "just very urgent" and "ASAP". People in my company comes up with problems and want the answer or solution straight away (they think computer people are God). Well I quite enjoy the level of respect I get when I get it but I also enjoy the challenge that I face and the innovative ways I come up with to meet those challenges. January 31, 2004 1:12 AM TrackBack said: January 31, 2004 1:26 PM asdf said: asdf February 1, 2004 3:13 AM mike said: Not sure what you mean by "rename" the GridView. GridView is a new (in 2.0), improved version of the DataGrid control. (What's to rename?) We'll also get a DetailsView control that allows you bind to a data source and to scroll through individual records to view and edit them, *and insert new ones*, yay. As you probably know, the data controls will (can) use a new model whereby you put a data source control on the page that handles the connection and the query. You then bind controls to the data source control. IOW, data access is abstracted into a component (technically still a control) that you can bind any bindable control to. This will apply to the new controls like GridView, etc., as well to the older controls like ListBox, etc. The new model is easier also in that the data source control and the control bound to it can talk and determine when it's time to data bind. No more 4-line Page_Load snippets to perform data binding. And just to be clear, all old data-binding technology will work just fine. Hmm. Got off on a tear there, sorry about that. :-) February 1, 2004 8:24 AM Colt said: Just add on Mike, we can always use (backward compatabile) DataList, DataGrid, GridView and DetailsView control in ASP.NET v2.0. FYI: Moreover, just want to mention 1 point and the subject of this blog might be: The evolution of a control: DataGrid ==> MxDataGrid ==> GridView ... (MxDataGrid is an "enhanced" DataGrid control and work like a GridView control and come with the ASP.NET Web Matrix project) February 1, 2004 9:03 AM Colt said: ooops...typo: backward compatible instead of backward compatabile February 1, 2004 9:05 AM mike said: True -- Web Matrix is a pretty good peek at many of the new Whidbey features. :-) February 1, 2004 3:40 PM Jason Salas said: Hi Mike, It's halftime during the Suepr Bowl, so I've got about 30 minutes with which time to respond. :) I know of the GridView control (a souped-up baby brother of ASP.NET 1.x's DataGrid), and I enjoy it, as well as the DetailsView for ASP.NET 2.0. But check out the article I linked to in the blog ( ) in which Raghavendra indicated that for WinForms, GridView has been renamed to "DataGridView". I'm not sure if this was just for WinForms or not. Go Pats! Jas February 1, 2004 8:23 PM Raghavendra Prabhu said: My post referred to just the WinForms control, not the ASP.NET one. February 1, 2004 10:59 PM Christian Nagel said: Jason, Read my weblog entry and sample about extending CultureInfo. I've done a sample to create a Klingon culture (et-Klingon): February 2, 2004 1:39 AM Jason Salas said: Thanks much Christian! I was gonna mess with Pig Latin as a sample project, but I'll check out your stuff. :) February 2, 2004 4:09 AM TrackBack said: February 2, 2004 5:19 PM TrackBack said: February 2, 2004 5:19 PM Mark Hurd said: I'd guess it is something more simple like you can't inherit static (Shared) methods, so it's hard to fabricate the proxies. February 3, 2004 7:55 AM Jason Salas said: Hi Mark, Yeah, I figured it was something like that. :) February 3, 2004 8:15 AM Jay Glynn said: I've been working with Wiley since they picked up some of the Wrox titles. Pro C# should be out in March. I have to say that working with Wiley has been a joy compared to others. Best of luck. Jay February 3, 2004 9:18 AM Tejas Patel said: Good luck Jason. February 3, 2004 12:50 PM Jason Salas said: Thanks Jay! IO'm still steamed at not being paid and being ignored when trying to collect, but I've got some reliable friends that work there now, so everything should work itself out. Jason February 3, 2004 5:30 PM Jason Salas said: Thanks Tejas! February 3, 2004 5:30 PM Jason Salas said: Here's the actual code for extending the CultureInfo class Christian mentioned in his reply above (thanks, Christian): using System; using System.Resources; using System.Collections; using System.Data; using System.Data.SqlClient; using System.Threading; using System.Globalization; using System.Diagnostics; namespace Nagel.Demos.Enterprise { // derive from CultureInfo, so it can be used as a CultureInfo public class EnterpriseCultureInfo : CultureInfo { private string name; private string displayName; public EnterpriseCultureInfo(string extendedCulture) : base("en-US") { if (extendedCulture != "et-klingon" && extendedCulture != "et-vulcano") throw new NotSupportedException("The culture " + extendedCulture + " is not supported"); name = extendedCulture; switch (extendedCulture) { case "et-klingon": displayName = "Klingon"; break; case "et-vulcono": displayName = "Vulcano"; break; } } #region Overridden properties of CultureInfo public override string DisplayName { get { return displayName; } } public override string Name { get { return name; } } public override string EnglishName { get { return displayName; } } #endregion // also needed for a full implementation: date/time format, Klingon calendar, number format... } } February 3, 2004 7:19 PM Dody Gunawinata said: I started in 2000 on the PDC bits and Code Behind never grow on me. I still hate it with the same vigor now as it was then :) February 4, 2004 5:26 AM Scott Galloway said: Wow...never had that experience, I use to code ASP with VBScript classes, I always hated in-line coding - especially as in ASP 2.0 they had a big performance hit (the context-switching issue). I jumped from ASP to Jave then to ASP.NET - sode behind seemed far more straightforward to me... February 4, 2004 8:06 AM Scott said: All the ASP projects I worked on, at least the large projects, all used the code-behind paradigm anyway by placing all the business logic in VB COM DLL's and calling them from the ASP page. Most of my ASP pages looked like this. <% set oBO = Server.CreateObject("ApplicationName.BizObjects") oBo.doStuff() ' paraphrasing, I often had several different methods for each piece of functionality set oBO = nothing %> Well, I usually had some ASP code in the page as well to manipulate the HTML. But most of my drop down list creation waas handled with a call to "createDropDownList(byVal rs as ADODB.RecordSet, textFieldName, valueFieldName)". A short stint in the Java/JSP world made the transition to C$ and ASP.NET a lot easier; Helped me get into a more pure OOP state of mind although you can still see some procedural linerarity in my code. February 4, 2004 9:50 AM TrackBack said: February 5, 2004 4:09 AM Frans Bouma said: "...therefore completing the “Wine, Women and Song” trifecta." :D I don't think a lot of people call the E-A powerchords hetfield is hammering on his jackson V guitars 'songs' ;) February 5, 2004 5:00 AM Jason Salas said: Ouch...that one hit home...I've been covering Metallica songs since I was 12 (I'm 29). Keep in mind that most blues music (from which lots of metal, including Metallica, is derived) is based on the 1-4-5 chord progression, which by its nature isn't that complex, but leaves lots of room for improvisation. =-) February 5, 2004 5:06 AM TrackBack said: February 5, 2004 10:48 AM Tejas Patel said: That's a bit of pity Jason. Well atleat you can access it at office. i am sure it won't be that difficult for you as you would be found in the office 80% of your time :). While having heard your story I will try to have a bit more patience why my ADSL connection is playing up. February 5, 2004 7:53 PM Cameron Reilly said: i saw a stat yesterday stating that only 8% of Australian homes currently have broadband. Even though I've only had access to it for about a year, I can't imagine life with out it. Or without my 802.11G home network either. But, let's look at the upside of not having broadband... oh wait, I can't think of any! :-) Dude, have you ever thought about moving? February 5, 2004 8:38 PM Jason Salas said: Hi Cameron, Yeah, I've thought about moving somewhere else on island, but I live in a really nice, quiet, crime-free neighborhood with stores and schools close-by (and I'm only 5 minutes away from work), so it wouldn't offset not having broadband to ship out. It's funny...Guam is growing in broadband use...about 11,000 people have it locally out of the 30,000 potential customer base, so it's gotten better (I used to work at a large ISP here). However, if I do make it out to the States... February 5, 2004 8:48 PM Jason Salas said: **UPDATE** Thanks much to Paul Murphy ( ) for confirming my theory about the requirement for an instantiated object. He responded in the ASPAdvice mailing list: -------------------------------- You can not create a Context around a static method (since there is no instance how to you wrap it?). Therefore the wrapper attribute that does the serialization can’t be applied to the method. If you want to read more about it dig into the System.Runtime.Remoting.* namespaces. Paul February 6, 2004 2:28 AM Scott said: Yeah, your only other choice besides moving is probably a satellite connection. Those aren't that great though, lots of latency. re: great thing about not having broadband. - when I first moved to Seattle, I didn't have anything but a local free ISP for dial up. I noticed that I tended to be a lot more picky about what I surfed. Quality over quantity. I spent a lot more time reading and programming. hmmmmmmmm, maybe I should call Comcast and cancel? Using SQL Enterprise Manager over dial up is an experience not unlike being on the receiving end of a root canal without the benefit of novacane. Heck, the same could be said for using EM over broadband. February 6, 2004 9:58 AM Paul D. Murphy said: yeah. I agree. I've been in the email marketing business for 3 years now and the 1% that does things wrong has grown to about 20%. It has made it virtually impossible for us legitimate marketers to deliver campaigns while ensuring that only those who are willing to be deceptive and criminal are operating with impunity across the world. It's a massive problem and it won't go away until companies subscribe to personal data instead of people giving a copy of personal data to companies. I can't wait for Longhorn. Paul February 8, 2004 3:17 PM TrackBack said: February 11, 2004 3:35 AM TrackBack said: February 11, 2004 4:37 AM Wallym said: Jason, The reason that I see why they aren't using FreeText and such is that MS's fulltext engine in Sql2k is not very good. It takes an amazing effort on the part of the CPU to get it working. Yukon's full text engine is suppossed to be much improved. Wally February 11, 2004 5:46 AM Derick Bailey said: The actual indexing may take some cycles, but it only happens on the schedule that you set. Any good DBAmin should know how to setup the schedule to run during the lowest traffic times, and create a good fulltext index for searching. As for the actual fulltext searching in SQL Server 2K, it's actually part of Indexing Service that gets called to do the searching. If someone complains about SQL 2K fulltext, but also uses the Indexing Service, then that person is full of crapp. I personally think that the fulltext searching in SQL2K is quite nice. But then, I actually spent a few minutes to write a query parsing engine that builds proper fulltext search queries based on user input. February 11, 2004 9:24 AM AndrewSeven said: Never whistle while you are pissing. -Fictional character in a book by R-A Wilson. February 11, 2004 10:53 AM TrackBack said: February 11, 2004 10:53 AM TrackBack said: February 11, 2004 10:53 AM Scott said: I've got two, one inspirational and one that simply reminds you of the "big picture" in any project. "Imagination is more important than knowledge" - Albert Einstein "When you are up to your ass in aligators, it is difficult to remember that your initial objective was to drain the swamp." - Unknown February 11, 2004 11:16 AM David Cumps said: If they make better or worse managers, no idea. But the mental if then structure probably exists in a lot of us programmers. But sometimes, thinking with logic isn't always good, there seems to be different kinds of logic. What seems logic to you may be completely wrong to someone else, and vice versa. I do believe programmers are a lot mor goal driving, with long term plans (like when you code something, you think in the future as well) and also, our minds thinks in a form of statements, which kinda creates order (or chaos in the eyes of others) February 11, 2004 2:40 PM Jason Salas said: Hi guys, Good thoughts. I'm pondering moving to FTI myself, but in doing so I'll need to migrate the entire SQL Server DB we use now (only about 9,000 records, with multiple NTEXT fields) to a different database server. Has anyone seen any creative alternatives to using a good classical search? February 11, 2004 6:59 PM Jason Salas said: Here's one of my personal faves form the late Benny Hill: "The wise man believes only 50% of what he reads, but it is the genius who knows what 50% to believe." February 11, 2004 7:02 PM Scott Galloway said: Hmm...although the Microsoft way of interviewing is often seen as the pinnacle of Tech Interviewing, it's not perfect by any means. Have a look at "How Would YOu Move Mount Fuji?" - it gives some decent insights into this method of assessment as well as some pretty decent opinions as to what works and what doesn't with this approach... February 11, 2004 7:09 PM Jason Salas said: Hi Scott, I agree, the MS interviewing process is admired by many, but certainly its still a fallible machine (is anything perfect these days?). However, I did learn a lot from my trips to Redmond, about asking questions and really pushing a candidate to think. My biggest challenge is getting someone to really experience what it's like to work in our hectic environment, and really expose them in some fashion to the problems we've got, as they're likely being brought on to solve some of them. February 11, 2004 7:43 PM Stephane Dubois said: Thanks for the good word on our web service test page design. It may not serve much purpose for the guy who goes straight to the WSDL but it helps those less familiar with it. Best Regards -Stephane Dubois Xignite February 11, 2004 9:01 PM Jason Salas said: **UPDATE** I wrote Juval Lowy directly and he confirmed that this would not work, as (1) returning a type from a WS call requires the assembly to be local to the client, and (2) such would arguably be a major violation of most security and versioning models. I was actually thinking of using this as the basis for an app I was going to share with the members of my user group - developing a core API and letting them derive from it for their own implementations. Oh well, back to the drawing board. 1 out of every 10 every crazy ideas I get like this actually works. :) February 11, 2004 10:38 PM TrackBack said: February 12, 2004 1:31 PM TrackBack said: February 12, 2004 2:18 PM James Geurts said: Via an article on This is Broken ( ) there is an error in the HTML code in the ad. There should be no slash in the <p> tag on line 28. Kind of makes me want to believe that FrontPage will be better... February 12, 2004 7:36 PM Jason Salas said: Ha! I wonder who created this ad - a technical firm or an ad house. Ifd it's the latter, that's what happens when you leave a technical ad to artistic people. February 12, 2004 8:15 PM R said: Not the first time I've seen typos in Microsoft Ads. There's one that keeps appearing in DDJ magazine, and CUJ that not only mis-spells webService as webServce - in a code sample; it has done since the advert has been out (several months). I sent a mail to feedback@mic.... but it's never been fixed. Guess they just don't care .. February 13, 2004 2:08 AM Ron Miller said: Hi Jason: I appreciate you quoting my post, but you should know that I wrote that based on a news story about the impending release of FrontPage 2003. I haven't actually looked at or reviewed the program, so I can't say with any authority that these changes made it in, or that they lived up to the pre-release hype. Regards, Ron Miller February 13, 2004 7:23 AM Jason Salas said: Hi Ron, No problem..I figured that. No intention to hold you responsible for the features the program did or didn't have. :) February 13, 2004 7:35 AM Michiel said: Well, what about WebML? February 15, 2004 11:03 AM Jason Salas said: Hi Michiel, WebML is great, but NOT specific to ASP.NET, which is what I'm after. February 15, 2004 11:23 PM Jeff said: I agree... it'd be nice to see something with a little more meat to it. I was really interested when MS published the "case study" on Match.com and its conversion to .NET, but it was really weak in terms of getting to what they were really doing with architecture. I suppose there's only so much you want to reveal in that case (the dating site market is pretty competitive), but something more than an assembly and server count would've been nice. February 17, 2004 8:30 AM TrackBack said: February 17, 2004 2:02 PM Mike Gunderloy said: Don't worry, you'll find the new AdventureWorks sample database to be just to your liking, I think. If you want a preview, download the SQL Server 2000 Reporting Services evaluation, which includes the first (I think) public release of AdventureWorks. February 17, 2004 10:04 PM Jason Salas said: Good news to know! Thanks Mike. February 17, 2004 10:48 PM mike said: The problem for those of us creating samples that talk to a database is that we often don't want to focus on the underlying database, but rather on whatever it is we're illustrating -- ADO.NET code, whatever. So we pick a database that we figure the largest percentage of our reader base will be familiar with. IOW, Northwind. Even when Yukon comes out with AW, Northwind will still be the best-understood database out there, and so we'll probably continue to use it for examples that need to use just any old database you happen to have handy. Unless, of course, the point is to illustrate some specific feature of Yukon or of AdventureWorks itself. February 17, 2004 11:21 PM Jason Salas said: Hi Mike, Oh sure....and I agree, testing with the Northwind or Pubs DBs is a snap because they've been in use for so long that we can recall the recordsets from memory, so if we connect and call data, we can be sure a script works. I was just hoping for a new set of data from perhaps a different industry, using a different DB schema and table configuration - just to be honest. :) February 18, 2004 12:30 AM Robert McLaws said: Northwind sucks. It has table names with spaces, it has stored procedures prefixed with sp_... it's a horrible example of how to design a database. Good to know it's gonna get better. Thanks for the heads-up Mike! February 18, 2004 1:12 AM TrackBack said: February 18, 2004 4:30 AM TrackBack said: February 18, 2004 4:30 AM SBC said: go for the DVD instead of CDs - you get a $300 rebate and lighter mail packages.. February 18, 2004 6:25 AM Jason Salas said: Lucky me...I got the DVDs. I should have mentioned that. :) February 18, 2004 6:28 AM Dennis said: Too bad Microsoft has filed patents on their use of XML in Word.... February 18, 2004 9:12 AM Scott said: You're right about the table names with spaces Robert, but not about the sp_. At least not on any of my installs of SQL 2000. I think that pubs is fine to use for an example. Like Mike said, you don't want your "student" spending 1/2 their time or more trying to figure out your DB schema. But neither Northwind nor pubs are good examples of what SQL 2000 can do. Neither of them use user defined functions or types. Heck even the views in both of them are pretty simple. I'd include at least one partitioned view, those are pretty powerful IMO. February 18, 2004 1:56 PM James Geurts said: You might want to take a look at Salamander .Net Linker and Mini-Deployment tool ( ). It looks like it'll turn the IL into native code. "The framework appears as an integrated part of your own application" February 18, 2004 11:59 PM mike said: BTW, this seems to be a popular topic. Joel Spolsky wrote a piece on "can we have a linker?", which generated lots of commentary. Just a sample, starting with Joel himself: February 19, 2004 12:32 AM Jason Salas said: Thanks Mike! I'm thinking about working on a consideration note to the ASP.NET team to possibly make this available. February 19, 2004 12:46 AM Jason Salas said: Thanks a bunch James. Good tip. I've been thinking about this more and the possibility of having a somewhat-portable runtime and hosting environment, and if it could fit on a disk or somewhere around there. February 19, 2004 12:48 AM Jason Salas said: **UPDATE** Here's a good article that's along the lines of what I was talking about: February 19, 2004 1:41 AM Jason Salas said: **UPDATE** A Microsoft PM that owns the Globalization subset of the ASP.NET feature set was nice enought to point me to a working sample of how to add new custom cultures in order to tap into resource files: February 21, 2004 12:28 AM Christian Nagel said: There is a List class in the namespace System.Collections (this is what the error said). This List class is private. February 22, 2004 7:06 AM Jason Salas said: Yeah...I got that part. Again, I understand about the List type being inaccessible, but the error code didn't really imply that. :) February 22, 2004 8:46 AM Christian Nagel said: The error code said "'System.Collections.List' is inaccessible due to its protection level". I think this is completely fair, because System.Collections.List is a private class. However, you just wanted to use the class System.Collections.Generics.List. This list was inaccessible because you didn't include the namespace. Maybe the error should say something like "if you want to use 'System.Collections.List' this class is inaccessible due to its protection level. If you want to use a List class from another namespace, import the namespace" ;-) February 22, 2004 9:42 AM Rex John said: Hey Jason Salas & M. Keith Warren , What position did you guys interview for? Rex John February 22, 2004 9:55 AM Jason Salas said: Hi Rex, At the time, I was up for a product manager position on the Office team. February 22, 2004 6:11 PM Jason Salas said: Right - I agree, the error code makes sense, but it's confusing as to the real nature of the problem with the code. February 22, 2004 6:12 PM Gavin Joyce said: Great idea Jason, I'd buy one in a snap. February 23, 2004 6:01 AM Firoz Ansari said: You can use any key mapping tool for this purpose. Take a look at WinKey 2.8. You can download it from February 23, 2004 8:33 AM Scott said: Yeah, but you have to remember what the keys are Firoz. Maybe we should pressure this company to release an overlay for Visual Studio and the different languages? I know they have overlays for Word and Excel. February 23, 2004 12:39 PM Dumky said: Why change the hardware? Add a C# "mode" to a regular keyboard using a rarely used key as the toggle, or more generally use macros...? February 23, 2004 3:24 PM Jason Salas said: Hi Firoz, Oh sure...but I'm one of those people who would prefer a set piece of hardware that had specific purposes in mind. Also, should anything ever happen to my PC and the settings get erased (perish the thought), I'd have to do the whole thing over again. I wouldn't mind paying extra for new hardware that someone built that laid everything out for me. Case in point: varying joysticks for flight sim and racing games. In other words, I'm an idiot, I need help and I'm one the people who would buy this type of thing. :) February 23, 2004 7:23 PM Jason Salas said: Hi Scott, Now there's an idea...I might shoot this by a couple of product managers I know for their input.... February 23, 2004 7:24 PM Mike Tanguileg said: Yes, Jason, I'm still at Microsoft. I'm very interested in your localization project, as I have had to do some localization frameworks myself here. I'd be very interested in what you come up with. Also, I'd certainly make my web service available to you, or anyone else for that matter. Got to keep the Chamorro culture alive, and this is one way I can do it. February 25, 2004 5:03 PM Paul said: Hi guys, I'm applying for an internship (summer) as a software design engineer. I have amazing background in mathematics and problem solving. I was a former International Mathematical Olympiad participant. But the problem is, I've only experienced C++/programming for a year. So my computer knowledge really isn't THAT great. My worry is that they ask me some technical things about C++ or computer knowledge and I wouldn't have a clue. I can ace the riddles/logic questions with ease, but technical C++ questions I don't think I can do too well. I've taken 2 courses in C++ so I know everything up to linked list/polymorphism...ect. But I really haven't delved too deep into techniques/tricks. Am I in a bad situation? February 26, 2004 10:46 AM Jason Salas said: Hi Paul, Just be yourself and answer honestly. Try and work out whatever strange problems they throw at you and give it your best shot. The worst thing you can do is try and lie your way through it or give up right away. Good luck! February 26, 2004 5:41 PM Paul said: I've sent in my electronic resume about a week ago and haven't heard a phone call yet. Does this mean there is no hope? Generally how long is the time between sending out the resume and receiving a phone call (if Microsoft is interested)? Thanks. March 1, 2004 10:16 AM Jason Salas said: Paul, This is the gray area. I've known people who've sent their resumes and get nothing back for months. The company is constantly shifting and evolving, and departments get new hires positions and close existing ones all the time. Remember, Microsoft is a huge corporation, so they literally get thousands of applications a day, which stack up over time. Sadly, the only thing you can do it wait. :( March 1, 2004 7:36 PM ware said: well all i know is that benny hill has to have been one of the funniest comedians of his time, i still really crack up laughng when i see a skit involving Fred Scuttle. I think it's a shame the way he died alone but can't we just remember the man for how funny he was and how he managed to bring a smile to everyones face? March 2, 2004 4:41 AM Paul said: Is it also the same for interns? When do people applying for summer interns (college student) get interviews? March 2, 2004 1:29 PM Jason Salas said: I think if you have to openly apply for an intern job (i.e., you're not contacted by a Microsoft recruiter or your school doesn't set it up for you), it's a waiting game. Good luck! March 2, 2004 4:38 PM swc said: (IE users) If you have edit text software.. no need for file/save as.. just "Edit button" and select which tools you want to use to view the source. And then save it right into your hard drive. So far I have not seen any site can prevent that. March 6, 2004 10:05 PM MihaK said: OK, let me start with some simple model for ASP.NET apps: 4 class stereotypes: - <<ClientPage>> (represents the stuff that comes down the wire and renders/executes in the browser) - <<ServerPage>> (represents the .aspx page on the server that builds the ClientPage) - <<CodeBehind>> (represents the .cs or .vb code that is referenced by ServerPage - <<JavaScript>> (represents the stuff that executes locally when runs-on-client events triggers) 4 relations stereotypes: - <<builds>> (the relation from ServerPage to ClientPage: "ServerPage builds the ClientPage and sends it down the wire") - <<event>> (the reletation from ClientPage to JavaScript - "ClientPage triggers the execution of JavaScript" or from ClientPage to ServerPage - "ClientPage triggers the submit up to the ServerPage") - <<redirect>> (the relation between ServerPage and ServerPage - "ServerPage redirects to another ServerPage") - <<include>> (the 1:1 relation between ServerPage and CodeBehind) At design time you build ServerPages, which automatically include specific CodeBehind-s for every ServerPage. At run time browser calls ServerPages (event) that generate ClientPages (build) that fire events either locally (to JavaScript) or back to ServerPages. If you agree with this, we can move on to Web Services model. March 7, 2004 6:57 PM Scott said: I had hella problems with it under Windows 2000 (IIS 5 I think). Things like images coming up rex'ed, CSS missing, all sorts of stuff like that. I'll have to try it again if I ever get to work on at IIS 6 server. :( March 14, 2004 3:24 AM Robert McLaws said: That's funny, cause the CODE directory is my least favorite feature. I hate the fact that Microsoft is continually dictating to me what my directory structure should be. That folder should be configurable on a web-by-web basis. March 19, 2004 6:19 PM Jason Salas said: Ahh...so you're one of those people who instead of using the default "images" directory, changes it to "pictures" just to be non-conformist? :) March 19, 2004 6:26 PM Robert McLaws said: LOL. No, actually, I have a folder called "Common" that hold all the items common to the different parts of the site. I then have several nested folders underneath. They are: -Components (Code) -Controls (User Controls) -Images -MenuData (ASPnetMenu XML Files) -Scripts -Stylesheets You shouldn't have .VB (or.CS) files strewn around your site for no reason. If it's not an ASPX page, it doesn't belog outside the common folder. When several sites depend on the same files, I have a "Common Folder" outside of the main web, and link it into the web directories using NTFS junctions. That way, if one file changes, all directories get updated. I'm extremely anal about website organization. Mostly because most people aren't. I got fed up with inheriting projects with no organization whatsoever. So I came up with a predictable system that I use every time. Has served me very well. March 19, 2004 8:31 PM mike said: The good news, I think, is that the further you are beyond that first critical job, the less the degree and institution matters. Lots of high-tech firms recruit directly out of such universities (and many more), but when hiring someone with experience, the experience count about 1,582% more than the schooling. Just as with high school -- that GPA mattered when applying to colleges, but who cares now? March 19, 2004 8:34 PM mike said: I vote for the Code folder. :-) It's actually double-good, although that's my opinion. First, no need to compile classes -- just run and go. We could have used this feature w/ Web Matrix, heh. Second, the Web page designer in Whidbey gives you IntelliSense on anything in the Code folder. How cool is that? March 19, 2004 8:37 PM Jason Salas said: Excellent points! (Although, those dang Ivy League fraternities seem to stay eerily close-knit...) March 19, 2004 9:01 PM Jason Salas said: Sweet. I know people that sing the praises of Microsoft technologies up and down the street, but change absolutely every default setting and recommended coding convention just to prove they can. These type of people setup an MS-driven site and then proceed to make the default documents something different than "index.html" or "default.aspx", rename all the provided dirs and other quirky stuff. I'm in the middle...I change some things, and leave others the way they were set up. I always create several directories: /images /css /modules /customcontrols /usercontrols /servercontrols Hey, it takes all types... March 19, 2004 9:06 PM Jason Salas said: Hi Mike, I'm very pleased directories like /code, /themes and /resources got the treatment they did for V2 on a functional level. However, I was surprised at the naming convention used. It makes sense to keep things simple, and undoubtedly the engineers took the fact that legacy apps may have those dirs named already and so would allow files and folders within those spaces to be served and/or non-compiled, but I think for some it creates a headache. I would have opted for some other naming system...something most people woulnd't be using to rule out not technical incompatability, but confusion for webmasters running migrated sites. Think about how many people out there right now have a /code and/or /resources folder one step down from root containing sample source files. I guess it's a self-defeating argument...you can't really pick a name that's easy to remember and short enough that someone out there isn't using already. March 19, 2004 9:12 PM Jason Salas said: And for the record (and for any corporate recruiters reading this now), I once got 3 gold stars and a complementary puffy sticker for drawing The Pokey Little Puppy for my 4th grade book report. :) March 19, 2004 9:13 PM Cesar said: One great email validator is one that, after performing the string expression validaton, check if the domain name exist (using dns query), get th MX for this domain and check if the user exist in SMTP server. Theses validations may be anabled/disabled by properties of this validation control. March 20, 2004 11:40 AM Rich Storaci said: Jason- Very well-expressed. Remember what you have going for you more than any platinum-level diploma is your technical competence and quality work experience. You're one of the most brillant men I know, especially with all the varied and demanding hats you wear at KUAM. Keep smiling, Jason, as you're going places! March 21, 2004 7:52 PM Jason Salas said: > Keep smiling, Jason, as you're going places! Thanks Rich...hopefully sooner than later! :) March 21, 2004 11:45 PM TrackBack said: March 23, 2004 1:52 PM TrackBack said: March 24, 2004 3:45 PM TrackBack said: March 24, 2004 3:52 PM Jason Salas said: **UPDATE** They made the change. Cool. :) March 25, 2004 5:50 AM Jeff said: I'm impressed. I got out of broadcast years ago because the pay and consolidation (in radio especially) made it not very fun anymore. I still miss it, being behind or in front of the camera. Good times! March 25, 2004 9:34 AM Tim Marman said: Non sequitor: Why must Stuart Scott adopt a "hip-hop vocabulary"? He's not ghetto - does he have to pretend solely because he's black? I don't get it. :) March 26, 2004 7:11 PM Darron said: I write better code when listening to "lounge music" like Sinatra, and Deano. March 26, 2004 8:04 PM David said: Actually for me Techno or Hard House is the way to go - must moving at over 140 beats per minute have proven to wake you up. (better than caffeene) Allows me the same sort of clear focus you observed. Code becomes much easier and productivity goes way up. March 26, 2004 8:48 PM Mark Erikson said: Techno / electronic type stuff. I highly recommend . There's a big mixture of stuff there, all sorts of styles. I've barely sampled it myself, but I've found plenty that works good for me. March 26, 2004 10:40 PM Jason Salas said: Hey David & Mark, Cool. I think because I grew up listening to metal, it's more conducive to the way I work. Like I said, I know people who CAN'T work without techno. March 26, 2004 11:02 PM mike said: Jazz, man. Late night work + late night jazz = true productivity. And the guys at the radio station must know that, too, coz they just keep the good stuff comin'. March 27, 2004 12:43 AM Jason Salas said: Now that's the ticket! I can also code pretty well to the blues...Jimi Hendrix or Stevie Ray Vaughn, mostly. March 27, 2004 12:52 AM Chris McKenzie said: ALl kinds of music: Bela Fleck and the Flecktones, Beethoven, Sarah McLachlan, Crystal Method, Mudvayne, Slayer. I dig it all. March 27, 2004 10:18 AM mike said: The mobile story in Whidbey uses so-called adaptive rendering; each control has an adapter that can render the markup appropriate to the browser request. This works something like browsercaps, but rather than just set various switches based on the user agent, the control can instiate entirely different classes to render the markup. It's both tons more powerful as well as extensible. (New device? New adapter class.) Whidbey also introduces device filtering, which you can apply at the property level. For example, you can have default text (maybe terse text for all devices) and override the Text property for desktop browsers with more verbose text. So these features are in the service of making it substantially easier to create single-source pages that work for many devices. (Rather than having to author MMIT pages separately.) That said, adaptive rendering and filterin aren't going to make the need (or desire) for separate desktop and device pages go away altogether. March 27, 2004 12:43 PM Jason Salas said: Hi Chris, Beethoven ==> Sarah McLachlan ==> Slayer Now that's an eclectic list! :) March 27, 2004 5:03 PM Trench said: Sounds like a great idea man. Looking forward to see how things turn out. April 5, 2004 9:37 PM Sean McCarthy said: Have you found anything that is relatively easy to use or easy to program yet? I found a java applet on Johnny Damon's website ... April 6, 2004 12:19 AM PatF said: If you can change the 404 page used on your site to a .aspx page, you can effectively have the .net framework process any path you want (as long as that path doesn't actually exist). You could use this to have your asp.net pages look like .html pages or directories, or anything you like : protected void Application_BeginRequest(Object sender, EventArgs e) { string strPath=Request.RawUrl; if(strPath.IndexOf("404.aspx") > 0 && strPath.IndexOf("mypage.html") > 0) { Context.RewritePath("realpage.aspx"); } } This would translate any request with the string mypage.html in it to the page realpage.aspx. April 6, 2004 11:39 AM Garrett Baker said: You could look up the user's ip address in whois to find the owner of the IP block. This would provide the company information. Usually this information includes email addresses and URLs. Of course, this is more complex than using a web.config file. April 6, 2004 8:57 PM Jason Salas said: Hi Garrett, Interesting tip...although I'm pretty sure the default homepage URL for mobile splash pages wouldn't be available from within a WHOIS query. April 6, 2004 11:42 PM Sachin Patil said: Guys!!! I have an interview in another 2 or 3 days. That is a phone interview of 20-25 mins. Please do tell me the ways. What is generally asked in the initial Phone interview. April 8, 2004 5:45 AM Jason Salas said: Just keep some of the comments on this post in mind...and check out Chris Sells' post at: Good luck! April 8, 2004 6:02 AM Sachin Patil said: Thanks Jason, I will let you know my performance !!!! April 8, 2004 2:39 PM Bill Hayes said: Hi Jason, If you were interviewed on MS campus by less than 6 interviewers, is that a bad sign? I read from web resources that 6 interviews (the last one being the project manager) is a good sign. But my recruiter said it doesn't have to be 6 interviews.... Thanks, Bill April 13, 2004 7:12 PM Jason Salas said: Hi Bill, I'd say that's typical. I've met people that say they've been through the wringer as many as 9 times in a day, and I've also seen people who have only gone to 5 hiring managers. And of course, executives typically have one high-level talk and they're in :) Good luck! April 13, 2004 8:59 PM Santos said: I got an on-site interview with them on friday, and Im excited and scared at the same time. I know that its quite a challenge to get a job and I also know that there are a ton of really smart people applying also. I managed to get passed the phone interview and get a shot with the oncampus trials. It'd be awesome to get a job there but I look at stuff like, XSLT drawbacks and ASP.NET controls, and I have little to no knowledge about either technologies. I'm applying for a fulltime Sofware Design Engineer (I graduate in June), but considering these questions, I dont know that I have much of a chance. April 14, 2004 5:23 AM Jason Salas said: Hi Santos, Just be yourself...talk about the stuff you know and they won't harp on areas that you've got no experience with. You'll be expected to give deep opinions on the areas you are interested in and that you do work with, so just concentrate on being strong on working with what you do know. April 14, 2004 5:26 AM Prashanth said: Can any one help me for accessing the viewstate contents thro javascript code. Kindly mail me at prashanth.k@sonata-software.com April 23, 2004 8:56 AM Dan Kelly said: I have an interview for a director level marketing position at Microsoft. Please guide me with any inputs to face the interview successfully. Thank you April 23, 2004 2:00 PM Jeff said: Couple of reason why advertisers may user flash or standard GIF... If you are simply displaying text and maybe a few images, flash may yield a smaller file size. On networks like AOL:AIM advertisers use flash to display short movie previews simply because most people have flash and it does not require any extra download. It may not be of the best quality but it is extremely light weight. Also, I don't know of any publisher who would allow anything but flash or image as an interstitial. April 26, 2004 9:23 AM Shannon J Hager said: I just had a conversation with a designer about this Friday. He was forced to use an animated .gif when the client really wanted Flash. The effect was nowhere near as nice and the file size was outrageous. Flash would have done the trick with ease. Another thing to keep in mind is that you don't really know what is going on besides the animation you see. There could be all kinds of client/server communication going on and/or dynamic building of the text and animations. Most aren't doing this, of course, but Flash is still (usually) smaller and better looking than a similar animated gif (except for simple "blink" type animations). April 26, 2004 3:00 PM Jason Salas said: Hi Shannon, Good comments. I guess admittedly one thing that's nice about Flash is that you can also prevent someone from *easily* copying a resource. April 26, 2004 7:28 PM Adam said: Does anyone have the wav of that skit. Preferrably the, "I got a fever, and the only perscription is more cowbell."? Or atleast know where I can find it? Thanks Adam April 27, 2004 9:45 AM Jason Salas said: Hi Adam, I found this floating around in great volume before...on Kazaa, AudioPlanet and Napster when it was "legit" to use those services. April 27, 2004 9:36 PM Jason Salas said: Actually, I think the SNL Blue Oyster Cult sketch w/Christoper Walken and the Jimgleheimer Junction sketch w/Cameron Diaz are the funniest bits to have come from SNL in the last 10 years. April 27, 2004 9:37 PM TrackBack said: April 29, 2004 7:38 AM Barb said: Interesting comments, Jason. :) I still don't like Mike Hall, I don't think he deserved to win, and I think ESPN cheated the viewers by not counting the audience vote at the end of the first hour of the finale. If they had, Zach Selwyn and Aaron Levine would have been tied, and it would have been up to the infamous "red phone" guy as to who got cut. Zach probably would have been axed anyway (even though I thought he was better than Aaron was in the first hour), but at least it would have been a bit more upfront. (btw, only way I know about the audience's vote was a crawler at the bottom of the screen at the end of part 1. Votes to cut Zach were lower than the votes to cut Maggie -- how did she last so long, anyway, except for her writing? -- and to cut Aaron.) Be that as it may, I still enjoyed what you had to say about the show -- and appreciated reading your comments, as they're more informed than most because you've actually done the job before. Barb April 30, 2004 3:32 AM
http://weblogs.asp.net/jasonsalas/archive/2004/08/10/211378.aspx
crawl-002
en
refinedweb
A blog about using ASP.NET for web applications and my work experiences Yesterday I found a useful tool for checking the syntax of JavaScript Object Notation strings: I needed that because I'm trying to create a widget for Stickam that will work in my compiled help file. I'm using my xml2json generic handler to convert XML to JSON and apparently it generated some invalid JSON. This tool helped me to narrow down the location of the syntax error. I need to make some improvements to my xml2json generic handler. First it needs to surround the JSON string with parentheses to prevent the infamous invalid label error. Second it needs to output the JSON string to a text file so I can debug errors caused by invalid syntax (trace listeners don't seem to work in generic handlers). And third it needs to use regular expressions to replace bad JSON syntax with an empty string. I should probably build in the JSON Checker code and return an error message about bad syntax. I have come across the JavaScriptSerializer class in the System.Web.Script.Serialization namespace but it cannot serialize XML or DataSets. Working with JavaScript Object Notation is currently quite painful. I think programmers need better tools for working with JSON. We need tools to check the syntax and visualize the data or objects it represents because JSON strings are very cryptic. Hey Robert, I used the JavaScriptSerializer class to serialize a DataTable into a JSON object. I did not implement a deserializer but it can be easily added. Here's the link to the code: It won't serialize a DataSet but if you extract the DataTables then it may be useful. Hope it helps and good luck with the xml2json - please consider sharing when you have a stable version!
http://weblogs.asp.net/rrobbins/archive/2008/01/11/json-syntax-checker-tool.aspx
crawl-002
en
refinedweb
This document is also available in these non-normative formats: XML and HTML with differences highlighted...". Patent disclosures relevant to this specification may be found on the Working Group's public patent disclosure page. (public archive). 7 Internationalized Resource Identifiers (IRIs) 8 XML 1.1 Productions A References B The Internal Structure of XML Namespaces (Non-Normative) B.1 The Insufficiency of the Traditional Namespace B.2 XML Namespace Partitions B.3 Expanded Element Types and Attribute Names B.4 Unique Expanded Attribute Names an IRI reference, which are used in XML documents as element types and attribute names. ] XML namespaces differ from the "namespaces" conventionally used in computing disciplines in that the XML version has internal structure and is not, mathematically speaking, a set. These issues are discussed in B The Internal Structure of XML Namespaces. [Definition: IRI references which identify namespaces are considered identical if and only if they are exactly the same character-for-character.] Case differences and escaping differences (including case differences in escape sequences) are therefore significant. Note that IRI references which are not identical in this sense may in fact be functionally equivalent. Examples include IRI references which differ only in case or escaping , or which are in external entities which have different effective base URIs. The empty string, though it is a legal IRI reference, cannot be used as a namespace name. The use of relative IRI references, including same-document references, in namespace declarations is deprecated. Future W3C specifications will define no interpretation for them. Names from XML namespaces may appear as qualified names, which may contain a single colon separating the name into a namespace prefix and a local part. The prefix, which is mapped to an IRI reference, selects a namespace. The combination of the universally managed IRI namespace and the document's own namespace produces identifiers that are universally unique. Mechanisms are provided for prefix scoping and defaulting. IRI references can contain characters not allowed in names, so cannot be used directly as namespace prefixes. Therefore, the namespace prefix serves as a proxy for an IRI reference. An attribute-based syntax described below is used to declare the association of the namespace prefix with an IR, an IR, as these names would be reserved if used without a prefix. [Definition: In XML documents conforming to this specification, some names (constructs corresponding to the nonterminal Name) may: A namespace declaration is considered to apply to the element where it is specified and its attributes, and to all elements and their attributes within the content of that element, unless overridden by another namespace declaration with the same NSAttName part: <?xml version="1.1"?> <!-- declaration is considered to apply to the element where it is specified, and to all elements within the content of that element, unless overridden by another default namespace declaration. If the IRI reference in a default namespace declaration is empty, then unprefixed elements in the scope of the declaration are not considered to be in any namespace. Note that default namespaces do not apply directly to attributes. <.1'?> types types types. Some characters are disallowed in URI references, even if they are allowed in XML; the disallowed characters, according to [RFC2396] and [RFC2732], IRI reference is a string that can be converted to a.
http://www.w3.org/TR/2002/WD-xml-names11-20020905/
crawl-002
en
refinedweb
Microsoft MVP BizTalk Server Oracle ACE As promised, this is the first of a series of posts that intend to cover the new capabilities implemented in the WCF REST Starter Kit to enhance the development of RESTful services using WCF. Specifically, this post is focus on how to enable caching on RESTFul services by using the REST Starter Kit . Undoubtedly, caching is one of the greatest benefits of the Web programming model and one of the main attractive of REST over alternatives such as SOAP/WS-*. Throughout the evolution of the web, the industry has developed very innovative techniques for optimizing content retrieval by the use of caching. These techniques have been reflected in technologies such as MemCache, Oracle Coherence and more recently Microsoft's Velocity that specialized in distributed caching. Additionally, web technologies like ASP.NET provides generic programming models that address some of the most common caching scenarios. All this infrastructure and technologies can be naturally leveraged by RESTful services without the need of creating new caching mechanisms. Precisely, the REST Starter Kit leverages ASP.NET extending WCF RESTful services with caching capabilities. The fundamental steps to add caching to a WCF RESTful service consists on adding the WebCache attribute to the operation intended to cache. The following code illustrates this concept on a simple WCF RESTful service which WAS NOT implemented using the WCF REST Starter Kit . [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]public class SampleService : ISampleService{ [WebGet(UriTemplate= "/results")] [WebCache(CacheProfileName = "SampleProfile")] public Atom10FeedFormatter GetData() { SyndicationFeed feed = new SyndicationFeed(); feed.LastUpdatedTime = DateTime.Now; feed.Id = OperationContext.Current.Host.BaseAddresses[0].ToString() + "/" + Guid.NewGuid().ToString(); feed.Title = new TextSyndicationContent("Sample feed"); SyndicationItem item= new SyndicationItem(); item.Title = new TextSyndicationContent( "sample feed" ); item.Content= new TextSyndicationContent("Time: " + DateTime.Now.ToString()); item.Id= OperationContext.Current.Host.BaseAddresses[0].ToString() + "/" + Guid.NewGuid().ToString(); item.LastUpdatedTime= DateTime.Now; List<SyndicationItem> items= new List<SyndicationItem>(); items.Add(item); feed.Items = items; Atom10FeedFormatter formatter = new Atom10FeedFormatter(feed); return formatter;} As you can see, our sample operation is decorated with a WebCache attribute that references a specific caching profile. The details of the caching profile can either be configured as parameters of the WebCache attribute or as a configuration file section as illustrated in the following code. <system.web> ... <caching> <outputCacheSettings> <outputCacheProfiles> <clear/> <add name="SampleProfile" duration="30" enabled="true" location="Any" /> </outputCacheProfiles> </outputCacheSettings> </caching> ...</system.web> <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/> <services> <service name="SampleService" behaviorConfiguration="ServiceBehavior"> <!-- Service Endpoints --> <endpoint address="" binding="webHttpBinding" contract="ISampleService"> … </endpoint> </service> </services> <behaviors> ... </behaviors></system.serviceModel> The configuration detailed above instructs ASP.NET to cache the results of the operation for thirty seconds. We can corroborate this by querying the following URI multiple times and checking the date returned in the Atom entry. Although some scenarios might require different caching techniques, this model extend all the benefits of the ASP.NET caching programming model into WCF RESTFul services which directly a large variety of the most common caching use cases in real world REST-based solutions. From the WCF programming model standpoint, the WebCache attribute is implemented as a WCF operation behavior. [AttributeUsage(AttributeTargets.Method)]public sealed class WebCacheAttribute : Attribute, IOperationBehavior{…Implementation omitted… public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation){ …Implementation omitted… dispatchOperation.ParameterInspectors.Add( new CachingParameterInspector(…));}} When the host initializes the behavior adds an instance of the CachingParameterInspector class to the dispatcher parameter inspector's collection. Ultimately, is this parameter inspector which executes the caching algorithms by leveraging the ASP.NET infrastructure. If you are interested on getting deep into the things you can achieve with the WCF REST Starter Kit make sure you check the Hands On Labs available here. As promised, this is the first of a series of posts that intend to cover the new capabilities implemented This week we are finally taking the wraps off of Workflow 4.0 and Monday we had a great talk from Kenny Pingback from Adding caching to WCF RESTful services using the REST Starter Kit - Jesus Rodriguez's WebLog After a flurry of PDC specific posts, back to a mixture. Architecture I wanted to give a big mention Let's begin from a hypothetical example that we want to publish a simple product catalog as an ATOM feed <a href= estrazionesuperenalottoitalia.fathersaua.info >estrazione superenalotto italia</a> <a href= A great resource - many thanks! Gute Arbeit hier! Gute Inhalte. Dies ist ein gro�er Ort. Ich m�chte hier noch einmal. Pingback from Caching using WCF REST Starter Kit in Medium Trust « Emmer Inc
http://weblogs.asp.net/gsusx/archive/2008/10/29/adding-caching-to-wcf-restful-services-using-the-rest-starter-kit.aspx
crawl-002
en
refinedweb
Copyright © 2007 W3C® ( MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, and document use rules apply. This document describes the SPARQL Annotations in WSDL (SPDL) project, a system for allowing web services to provide bindings for SPARQL queries. SPDL integrates WSDL, XML Schema, WSDL, XPath and SPARQL to allow SPARQL queries to choreograph and invoke web services and bind the returned information to SPARQL results. This is the work of the author — it is not endorsed by the W3C members. SPARQL Annotations in WSDL (SPDL) maps information between RDF Graphs and the XML messages used in web service. This mapping enables RDF applications to invoke web services and use the resulting data. W3C's SPARQL Query Language for RDF offers the semantic web a standard way to query a materialized or conceptual RDF graph. WSDL 2.0 provides the information necessary to invoke web services, but provides no explicit semantics. SPARQL Annotations (SPAT) provide a conduit between RDF graphs and XML documents. SPDL applies these annotations to WSDL and the associated XML Schema provide the necessary information to bridge Web Services and the Semantic Web. SPDL uses a combination and extension of many technologies: The code is tested against an annotated version of Amazon's own WSDL description of their web services. This was chosen for its practicality, familiarty, and complexity; it provides a realistic web services demonstration. SPDL was implemented and tested on annotated Amazon Web Services WSDL service description. The Amazon WSDL describes many web servise offered by Amazon. A test program demonstrates how a small number of annotations enabled a user query to choreograph two Amazon Web Service operations. The process is documented in AWS Choreography Example. The SPARQL PATHPATTERNs required modification of the SPARQL grammar: 'PATHPATTERN' ConstructTemplate SolutionModifier 'XPATH' '(' String ')' An existing XML Schema processor had to be extended in two ways: No existing schema libraries supported handler dispatch for non-XML Schema data. The XML Schema language documents two distinct ways to include non-schema data in a schema: The XML::Validator::Schema was extended to allow a the caller to register handlers for given namespaces. The handler is called when an attribute or an Appinfo sub-element with that namespace is found: <xs:element name="SubscriptionId" spat: <xs:element <xs:annotation><xs:appinfo><spat:SPAT> PATHPATTERN { ?X tns:id xpath("tns:SubscriptionId") } </spat:SPAT></xs:appinfo></xs:annotation> XPaths in these annotations are resolved relative to the parent context. SPDL uses this extension point to enable contextual parsing of SPARQL annotations. The context includes the location in the schema and the current namespaces. The schema-validation library was extended with functions to, given a set of bindings of XPath to PCData, generate a valid XML document with that data. The extension is used to generate the request message. It takes the bindings from SPARQL variables to XPath locations and creates a valid request message with those bindings substituted into the message. (Extracting the bindings from the response message is simply done with an XPath processor.) The first stage of this project was a success. It verified that annotations could be added to existing WSDL schemas to provide intuitive and easy to maintain semantics. Further, it demonstrated that semantic web machinery could use those annotations to automatically invoke queries. The second stage demonstrated the use of a rule system to choreograph services annotated with SPDL annotations. This system is compatible with conventional rules for ontology mapping, enabling better use of the annotated services. I propose the following work, in order of importance: with mappings between First Order Logic (FOL) and Logic Programming (LP) @@ based on F-logic, but with a mapping to RDF. $Log: Overview.html,v $ Revision 1.18 2007/02/10 02:27:06 eric + author Revision 1.17 2007/02/07 14:09:01 eric + review feedback: need clear purpose statement Revision 1.16 2007/01/30 03:52:36 eric - moved procedural details to AWS Choreography Example Revision 1.15 2007/01/29 23:38:28 eric ~ adapting to use the SPARQL Annotations page Revision 1.14 2007/01/27 04:38:40 eric + more links Revision 1.13 2007/01/26 19:07:53 eric . snapshot Revision 1.12 2007/01/25 19:27:27 eric + anchor SPAT and link to the spec Revision 1.11 2006/07/15 23:28:22 eric + nav Revision 1.10 2006/06/16 11:48:45 eric + descriptions of attribute and element annotation callbacks (from XML Schema) Revision 1.9 2006/06/16 11:36:15 eric + link to annotated Amazon Web Services WSDL ~ changed examples to use attribute notation Revision 1.8 2006/06/16 09:11:50 eric + Change History
http://www.w3.org/2005/11/SPDL/
crawl-002
en
refinedweb
. I do have a generic Repository interface as following: using System; using System.Collections.Generic; using System.Linq; namespace TechHeadBrothers.Portal.Infrastructure.Interfaces { /// <summary> /// IRepository exposes all methods to access the data repository /// </summary> public interface IRepository { void InitializeRepository(); bool Save<T>(T entity) where T : class; bool SaveAll<T>(IList<T> entities) where T : class; bool Delete<T>(string id) where T : class; T Find<T>(string id) where T : class; IQueryable<T> Find<T>(); IQueryable<T> DetachedFind<T>(); IQueryable<T> Find<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression); int Count<T>(); int Count<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression); } } My ORM mapping tool of choice is Euss. And here comes the slight difference, I do have one implementation of my interface leveraging Euss, and that’s it. All different possibilities are handled by Euss. During my work on the definition of the domain I took the habit to use an Euss XML Engine or an Euss Memory Engine. I use those two engine for my unit test and my real application. Following the lean principle I postpone the choice of the data repository till the last minute, when I know more about the real need. So it really happen that I stay with an XML Engine so that all my data are stored in an XML file. If I need more I go to an Euss SQL Mapper Engine and then define the mapping. So I moved to the ORM framework the different implementations. Now I am still free to go to another ORM, or something else, by using the interface IRepository. I used several time this technique and I am currently happy about it.
http://weblogs.asp.net/lkempe/archive/2008/11/04/follow-up-on-reducing-orm-friction-by-rob-conery.aspx
crawl-002
en
refinedweb
.css.text.syntax.javacc.lib;20 21 /**22 * Support for JavaCC version 1.1. When JavaCC is required to read directly23 * from string or char[].24 * <p>25 * Added support for JavaCC 3.2 generated TokenManagers: extends SimpleCharStream.26 *27 * @author Petr Kuzel28 */29 public class StringParserInput extends SimpleCharStream implements CharStream {30 /** the buffer */31 private char[] buffer;32 33 /** the position in the buffer*/34 private int pos;35 36 /** Begin of current token, for backup operation */37 private int begin;38 39 /** Length of whole buffer */40 private int len;41 42 /** buffer end. */43 private int end;44 45 public StringParserInput() {}46 47 48 public void setString(String s) {49 buffer = s.toCharArray();50 begin = pos = 0;51 len = s.length();52 end = len;53 }54 55 /** Share buffer with e.g. syntax coloring. */56 public void setBuffer(char[] buf, int offset, int len) {57 buffer = buf;58 begin = pos = offset;59 this.len = len;60 end = offset + len;61 }62 63 /**64 * Returns the next character from the selected input. The method65 * of selecting the input is the responsibility of the class66 * implementing this interface. Can throw any java.io.IOException.67 */68 public char readChar() throws java.io.IOException {69 if (pos >= end)70 throw new java.io.EOFException ();71 return buffer[pos++];72 }73 74 /**75 * Returns the column position of the character last read.76 * @deprecated77 * @see #getEndColumn78 */79 public int getColumn() {80 return 0;81 }82 83 /**84 * Returns the line number of the character last read.85 * @deprecated86 * @see #getEndLine87 */88 public int getLine() {89 return 0;90 }91 92 /**93 * Returns the column number of the last character for current token (being94 * matched after the last call to BeginTOken).95 */96 public int getEndColumn() {97 return 0;98 }99 100 /**101 * Returns the line number of the last character for current token (being102 * matched after the last call to BeginTOken).103 */104 public int getEndLine() {105 return 0;106 }107 108 /**109 * Returns the column number of the first character for current token (being110 * matched after the last call to BeginTOken).111 */112 public int getBeginColumn() {113 return 0;114 }115 116 /**117 * Returns the line number of the first character for current token (being118 * matched after the last call to BeginTOken).119 */120 public int getBeginLine() {121 return 0;122 }123 124 /**125 * Backs up the input stream by amount steps. Lexer calls this method if it126 * had already read some characters, but could not use them to match a127 * (longer) token. So, they will be used again as the prefix of the next128 * token and it is the implemetation's responsibility to do this right.129 */130 public void backup(int amount) {131 if (pos > 1)132 pos -= amount;133 }134 135 /**136 * Returns the next character that marks the beginning of the next token.137 * All characters must remain in the buffer between two successive calls138 * to this method to implement backup correctly.139 */140 public char BeginToken() throws java.io.IOException {141 begin = pos;142 return readChar ();143 }144 145 /**146 * Returns a string made up of characters from the marked token beginning147 * to the current buffer position. Implementations have the choice of returning148 * anything that they want to. For example, for efficiency, one might decide149 * to just return null, which is a valid implementation.150 */151 public String GetImage() {152 return new String (buffer, begin, pos-begin);153 }154 155 156 /** @return token length. */157 public int getLength() {158 return pos - begin;159 }160 161 /**162 * Returns an array of characters that make up the suffix of length 'len' for163 * the currently matched token. This is used to build up the matched string164 * for use in actions in the case of MORE. A simple and inefficient165 * implementation of this is as follows :166 *167 * {168 * String t = GetImage();169 * return t.substring(t.length() - len, t.length()).toCharArray();170 * }171 */172 public char[] GetSuffix(int l) {173 char[] ret = new char[l];174 System.arraycopy(buffer, pos - l, ret, 0, l);175 return ret;176 }177 178 /**179 * The lexer calls this function to indicate that it is done with the stream180 * and hence implementations can free any resources held by this class.181 * Again, the body of this function can be just empty and it will not182 * affect the lexer's operation.183 */184 public void Done() {185 }186 187 public String toString() {188 return "StringParserInput\n Pos:" + pos + " len:" + len + " #################\n" + buffer; // NOI18N189 }190 }191 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/netbeans/modules/css/text/syntax/javacc/lib/StringParserInput.java.htm
CC-MAIN-2017-17
en
refinedweb
#include <BoundedQueue.hpp> List of all members. Use BoundedQueue class to create objects that will serve as bounded task queues. Scheduler uses an instance of a queue object to buffer arriving tasks. BoundedQueue has a property called upper bound which restricts the maximum number of tasks that can be queued. If upper bound limit is reached and a new task is scheduled, QueueOverflowError exception is thrown. To prevent this, Scheduler calls full() method of a queue before putting new tasks. BoundedQueue can be combined with other types of queues to achieve desired functionality. For example, one can combine BoundedQueue with SortedQueue; the resulting queue will be a priority, bounded queue.
http://rhascheduler.sourceforge.net/classRha_1_1BoundedQueue.html
CC-MAIN-2017-17
en
refinedweb
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. How to get value from selection field in Email Template Hi, I am creating a template for a class (say sale.order), in which I have a selection field (state). While I am trying to fetch value of state using ${object.state}, I am getting the "key" but not the "value" of the selection field. How can I get the value of selection. Please kindly help me. Thanks in advance. Regards, Scot. Not sure on how to get the field_val. But a workaround could be using if statements <code> % if object.state in ('draft'): draft % endif % if object.state in ('paid'): paid % endif </code> Something like this should work. Although the only if statements in mail templates i could find at this moment are checking if the variables are set. But I figure this would work. I couldn't test this, as my mailserver on my testenv is not configured About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/how-to-get-value-from-selection-field-in-email-template-82655
CC-MAIN-2017-17
en
refinedweb
Surprise Attack! If you visited the previous dungeon examples in a non-Chrome browser, you may have noticed two issues. One was that the graphics looked all blurred and the other being that the sound didn't play. Let's roll a couple of D6's and see if we have the agility to overcome it! Firstly, why is it blurry? This is due to browsers smoothing scaled images on the canvas by default. For pixel based art (blocky), we do not want this to happen so we have to use the property imageSmoothingEnabled of the drawing context object that we get from the canvas. Currently in Dart, this property doesn't cover the numerous variants. If we want the dungeon to work on IE, Firefox etc we need to use JavaScript. Fortunately calling JS from Dart is very easy. We will add a simple class to the index.html page with a simple class called crossBrowserFilla with a single method called keepThingsBlocky. var crossBrowserFilla = function (){ this.keepThingsBlocky = function(){ var canvas = document.getElementById("surface"); var ctx = canvas.getContext("2d"); ctx.mozImageSmoothingEnabled = false; ctx.imageSmoothingEnabled = false; ctx.msImageSmoothingEnabled = false; ctx.webkitImageSmoothingEnabled = false; } }; This is called in main.dart. Fairly easy! import 'dart:js'; .... JsObject jsproxy = new JsObject(context['crossBrowserFilla']); bool canvasConfigured = false; ... if (!canvasConfigured){ jsproxy.callMethod('keepThingsBlocky'); canvasConfigured = true; } The second issue of sound has to navigate things a bit differently. The best format across browsers is MP3 currently, the downside being is that Dartium doesn't yet support the required codec as it is based on Chromium. This is short term (hopefully) once the Dart VM makes it into Chrome. Apart from changing the file extensions, there is not change required to the audio code. Code is available on Github and a live demo is available here. Use arrow keys to move! Next time we will REALLY REALLY WILL finish our dungeon adventures by putting some other characters in the dungeon. A wizard's promise!
http://divingintodart.blogspot.com/2015/02/procedural-generation-part-five.html
CC-MAIN-2017-17
en
refinedweb
wire speaker Speaker interactively sculpts wire forms based on the sounds of people talking. A micro-controller is used to analyze speech and control several small motors that push and bend wire. Speaker interactively sculpts wire forms based on the sounds of people talking. A micro-controller is used to analyze speech and control several small motors that push and bend wire.. When a note is played, it is displayed in a color representing the instrument that played that note. This makes it easy to see patterns within individual instruments as well as the overall variety of instruments. final product essentially does the following: poster can be seen below: The applet currently isn’t online since it needs a lot of fixing up. Hopefully I can work on that over the summer because I do want to continue this project since I think it has a lot of potential. Once I have some time to relax and detox from this busy school year, here’s some things I want to work on next: vomiting it into each other’s mouths. “Lovers Theme”, from Herve Roy’s Romantic Themes, plays throughout. Part of what has facilitated 2 Girls 1 Cup’s spread are the reactions it causes. Thousands of videos exist on YouTube of users showing the original video (off-camera) to their friends and taping their reactions, although some videos seem to be staged. Analysis A collection of twenty of the most-viewed YouTube reaction videos were downloaded and then edited to start at the same time. This was possible by listening for the start of the audio for the 2 Girls 1 Cup video. Each of these videos were then processed to collect data about optical flow and volume. Volume Loud reactions are common in the reaction videos. Screams of disbelief and horror are a big part of what make the reactions so interesting. For each frame of a reaction video, the fast Fourier transform calculated the frequency spectrum of an instance of audio. The frequency bins were then summed to calculate the volume. A window of one second was used to smooth the data. Optical Flow There are often strong visible reactions to the video, with people either flailing or cowering in fear. Optical flow is the amount of movement in a video. This differs from techniques such as background subtraction and differencing because the amount of image displacement is calculated. For this project, the openCV function cvCalcOpticalFlowBM was used to retrieve the amount of motion between consecutive frames. As with the audio, a window of one second was used to smooth the data. Pretty graphs Each color represents a different reaction video plotted over time. The median value for each second was used to lessen the effect of outliers. So in the end, were there any conclusions that resulted from this analysis? I had hoped by analyzing the reaction videos quantitatively, patterns would emerge that could, in an indirect way, describe 2 Girls 1 Cup. But it appears that once the mayhem begins, the reaction videos turn into chaos, maintaining the shock through the duration of the video. This project was created for Golan Levin‘s Shameless plug for other projects: So much for trying to put numbers to scat-fetish porn. . uses a face shape control panel to drive a head like small robot. When the robot hit an object (projection), it will eat the object, and the face will change. Some objects represent good some mean bad. We earn points by eating those good objects and get minus point by eating bad objects. In the limited time the user need to gain the points as high as possible. 1.Motivation The idea of creating a small creature that has simple behavior (follows simple rule) came to my mind at the first time. In many movies like AVATAR or Princess Mononoke we can find a character that is small and white and represents the spirit of pureness. Inspired by Valentino Braitenberg’s VEHICLES I started building simple moving robots that follow people. 2.Exploration To Make a Braitenberg Vehicle I tried different approaches: (Two servo motors and two light sensors hook up with Arduino) (Two DC motors and two light sensor with a handmade Arduino board) (Two light sensors and two pager motors connected with transistor. No Arduino ) To control these people following robots I was thinking using projector from top projects bright white spot on the floor which enable robots to follow with. A camera from top captures the image of player and find the position, and then projects white circles that are flocking around. This idea fail because: 1. Robots are too slow to chase the white spots. 2. Player’s shadow might block the light. 3. The light environment has many noise which causes randomness of robots. At this point I changed the idea: Instead of making robots follow light spots, I make the white spots (projection) follow a robot. And here is the basic structure of HEAD MONSTER: Using a IR-camera from the top we can find the position of IR emitter embedded robot and project image on the robot.The robot is made by a small RC vehicle which controlled by a four tilt sensors embedded controller. 4.Implementation ROBOT: (The form of head monster. Drawn in Alias, cut by CNC milling machine.) (Vacuum forming the head shape foam, the upper part is embedded with an IR emitter) (The lower part has a four wheel robot inside) (Hacked radio controller. Four tilt sensors control four directions of movement) IR CAMERA: (Hacked PS3 Eye, the IR filter has been taken off. Instruction:) INTERFACE: Programmed in Openframeworks. (When the small face moves cross over an object, the score and the big face change) 5.Final Thought There are many challenges of controlling small robots that following people, which makes me think of: It is so hard to 100% control the robot. What if we abandon the right of control and let the randomness of the physical world (the ambient light, the form of robot, sensors, motors…) drives the robot, the robot might become more vivid. Although the original idea was not success, we learned the experience from this exploration.... This is not a final screenshot!! The videos displayed are for testing purposes only. Minute is an investigation into our individually unique perception of time. Participants are asked to perceive the duration of one minute, and a video recording of this minute is added to a database. A visual arrangement of minutes is then created by pulling twenty random videos from the database. Hardware needs for the showcase on 4/28: Ideally, a monitor and Mac mini, but a monitor and DVD player could also work. Some favorites: Cheng’s Words From Beyond Hope Paul’s The Central Dimension of Human Personality David’s Fantastic Elastic Type What if creating architecture was something that anyone could do in their back yard? This is the question Ant Farm, the legendary early-70s artist collective asked when they wrote the Inflatocookbook, an a recipe book for inflatable structures that attempted to liberate architecture from the realm of professionals. With the advent of computer graphics and accessible CAD software Ant Farm’s dream of democratic architecture seems more plausible than ever. With this in mind I wrote a pattern generator for inflatable structures, implemented as a plugin to Google Sketchup. A digital supplement to the Inflatobookbook, it simplifies one of the most difficult parts of making an inflatable: deciding where to cut. The hardest thing about building an Ant-Farm-style inflatable is designing the flat cut-out pattern used to create the 3D form. My plugin lets the computer do the thinking for you. All you need to give it is a model. Before I created my plugin I hoped to modify existing software to create plans inflatables. However I was dissatisfied with the cost and difficulty of existing unfolding software. Google Sketchup was an ideal platform for my plugin. It’s free, cross-platform, easy to use, and scriptable in Ruby. The plugin’s unfolding algorithm is based on a post by StackOverflow user Theran. It works by representing the 3D shape as a graph. Below is an example of how the algorithm works on a cube: The sketchup model is converted into triangles—a polygon mesh. The polygon mesh is converted into a graph (in the computer science sense.) Each face is a vertex in the graph, and each edge joining two faces in the polygon mesh is a connection between the two vertices in the graph. By applying Dijkstra’s algorithm to this graph, we get a tree where every connection represents an edge that we won’t cut, but will ‘fold’ when the structure is inflated. The edges to fold are chosen based on how much they would need to bend to achieve the right shape. Edges that connect faces with large angles between their normals are less likely to be used as folds than faces with similar curvature. In the cube below, it’s much more likely that 5 will share an edge with 4 than with 8 because the difference in curvature is small between them. Once the edges to fold are determined, each face is laid out flat one at a time. The first face placed is the one determined to produce the lowest overall cost by the Floyd-Warshall algorithm. From there, all of the other faces are placed, connected to the other shapes by their fold edges. Occasionally two faces will overlap in the unfolding. When this occurs, the program creates a separate shape that doesn’t intersect and continues adding faces off of that new shape. This algorithm can be very slow for more complicated meshes, so it’s a good idea to simplify any model you provide it. I use a piece of software called MeshLab to clean up and simplify the mesh before loading it into Sketchup. The quadratic mesh simplification algorithm available in MeshLab can lower the number of polygons in a model while still maintaining its basic form. With large models, segmenting the mesh into components before processing can help make the output simpler. In the bunny model I created, I made a separate model for each ear, the head, the tail, the feet, and the body. In a future version of the plugin, I plan to automate these steps. To test my software I decided to build a 12’ tall Stanford Bunny—a classic generic 3D model and somewhat of an in-joke among graphics experts. When a new graphics algorithm is created it’s usually tested first on the Stanford Bunny. After some pre-processing, I ran my plugin on a model of the bunny. It produced several pieces, available as a printable PDF below. The numbers on the faces represent the order in which the software laid down each face. I used a computer projector to get the pattern I had created in Sketchup onto plastic. I put a piece of black electrical tape over each line in the projection and wrote each face’s number using a black sharpie. This step took the most time. Afterwards I cut out each shape in preparation for sealing, leaving about an inch and a half around the edge for creating a seam. A scale paper model of the bunny helped align the pieces in preparation for sealing. Sealing the bunny is a lot like sewing—It’s done inside out. First I matched up the adjoining faces with the outside, taped sides of the plastic facing in. Then I used the impulse sealer to fuse the plastic at the place where the two pieces overlapped. Since the impulse sealer I used wasn’t big enough, I cut a flap in the excess plastic each time so that it would fit in the machine. Alternatively, a clothing iron can be used for this step. I prefer the impulse sealer because of the clean seam it creates. I don’t have a picture of how I attached the bunny to the fan. There wasn’t much time before the show to design a proper fan tube, so it’s pretty MacGyver. I just cut a slit in the side of the bunny and attached the plastic to the fan using packing tape. Surprisingly the seal held. After inflation, I found some holes that the impulse sealer had missed. A friend helped me seal them with packing tape from inside the bunny. During the show my bunny sat in the hallway of Carnegie Mellon’s College of Fine Arts building. It was backlit with a stage light with a red gel. This project is funded by a student grant from the Fine Foundation, received through the Electronic Arts area of the Carnegie Mellon School of Art Thank you to everyone who helped assemble the bunny. You know who you are! Project Title: SubFabricator – Explorations in Subtractive Fabrication Project Image: Project Description: Think of the simplest way to fabricate a sculpture. The answer is removing undesired volume of a bulk material till remaining volume shapes like the desired form. SubFabricator is an interface that enables user to input her/his 2d or 3d drill paths or drill vectors algorithm in Processing. Afterward, the SubFabricator provides visualization of final form and data for fabricating form (using CNC or 6-axis Milling Robot) through GrassHopper. Exhibition Requirements: All I need to exhibit the work is a table or any kind of flat surface for putting laptop and fabricated forms on. A 6.0′ x 3.0′ or 8.0′ x 2.0′ table would suffice. Among all projects my favorites are: Project 1 – The World According to Google Suggest by Nara Project 2 – AI Brushes by xiaoyuan Project 3 – Trace Modeler by Karl DD import processing.serial.*; XMLElement xml; ArrayList Followers = new ArrayList(); Serial myPort; int temptime=0; void setup() { size(200, 200); if (myPort!=null) myPort.stop(); myPort=null; xml = new XMLElement(this, ““); XMLElement site = xml.getChild(0); XMLElement[] siteData = site.getChildren(); for (int i=0;i<site.getChildCount();i++){ if(siteData[i].getName().equals(“item”)){ String sttt =(siteData[i].getChild(0).getContent()); String[] tokins = split(sttt, ” “); if(tokins[0].equals(“Follow”)) Followers.add(tokins[1]); // if(tokins[0].equals(“Unfollow”)) // println(tokins[1]);} }}} boolean searchArray(ArrayList alist, String lf) { for(int i = 0; i < alist.size(); i++) if (alist.get(i).equals(lf)) return true; return false; } void draw() { background(255); xml = new XMLElement(this, ““); XMLElement site = xml.getChild(0); XMLElement[] siteData = site.getChildren(); for (int i=0;i<site.getChildCount();i++){ if(siteData[i].getName().equals(“item”)){ if (myPort!=null) myPort.stop(); myPort=null; String sttt =(siteData[i].getChild(0).getContent()); String[] tokins = split(sttt, ” “); if(tokins[0].equals(“Follow”)) if (!(searchArray(Followers,tokins[1]))) { if (myPort!=null) myPort.stop(); myPort=null; println(tokins[1]); temptime=millis(); while ((millis()-temptime)<1000) { if (myPort==null) myPort=new Serial(this, Serial.list()[0], 9600); } Followers.add(tokins[1]); } } }} First off, here’s a screen shot of my final project so far: My main question to the class was how to improve the interface. I changed the font from when I presented this in the morning. However, I have finished all the back-end calculations. The vast majority of the calculations (such as the neighbors calculation) is done during the loading, so there should not be an issue with speed while the user is actually interacting with the project. Items Still Needed to Be Completed: Items Needed for Exhibition: I just need one computer. I can provide my own (however, my computer does sound like a jet engine–if that is an issue). Some Favorite Past Projects of Fellow Students: fantastic elastic (work in progress) from David Yen on Vimeo. FaceFlip from Max Hawkins on Vimeo. I’ll have more to show on Wednesday, but here’s some images of what I’m getting for the modules displaying gradients. Rolling over the gradient displays the color and the image it came from. I shouldn’t need any special hardware for the exhibit, I’ll show it on my own computer and I’ll make a poster or printed piece to go along with it (so maybe an easel or something to hold the poster) Title/Sentence: ColorShift: Creating color palettes from time lapse photos of items which change color over time. Other projects I’d like to see at the exhibition: FaceFlip, Alyssa- Flower Generation, David- Fantastic Elastic Shaper ‘Shaper’ is a prototype device that uses a three axis CNC machine to interactively dispense expanding foam material in an additive fabrication process. The device can be controlled directly via a translucent touch screen, allowing the user to look directly into the fabrication area and create physical artefacts with simple gestures. At the Show: a 1m X 0.5 m space for the 3D printer, two tables by the side to place controlling computer and display printed objects, and a rubbish bin. Also we’d need a cart to move the whole machine over to studio. 0. Progress 1. Project Image 2. Project Description This Kaleidoscope mirror allows the viewers to see themselves with various fun effect. By rotating the mirror frame. viewers can manipulate their face in realtime and even allows the time travel. 3. Hardware Issue (tricky!!) 1. I need good quality of projector (if poor quality, viewer will see the rectangular edge of the screen – like the projector that I am using for the prototype). 2. The first picture below explains how the mirror setup will be. The projector should be slanted and set up high so that viewer do not block the projected screen. For the exhibition, my idea is to hang the projector at the bottom of second floor hand-drill (yellow highlight). But I am sure about this idea. 4. Favorite Project 1. David Yen’s elastic type 2. Max’s Tesserae 3. Kuan’s Trees Cycle
http://golancourses.net/2010spring/category/final-project/
CC-MAIN-2017-17
en
refinedweb
Adding "like" to Dingus In making serious efforts to become a better tester I’ve tried to get better acquainted with testing tools. Mocking specifically is one technique that seems extremely helpful, especially when you have an application built upon many other systems that you’d want to test without having a huge test setup stage. The library I settled on was Dingus thanks to Gary’s screencasts and seeing first how how flexible it is in terms its ability to almost magically replace everything except that which is tested. At the moment the Dingus docs are rather sparse, but the basic idea is you use a Dingus object and then assert different things happened to the object. This primarily done by using a method called “calls”. Here is an example: from dingus import Dingus d = Dingus() d.foo('bar') assert d.calls('foo', ('bar')) Pretty easy so far. Dingus can also be used to replace all the objects in a test using a DingusTestCase. So, for example, you can do something like this: class TestSomeModel(DingusTestCase(MyModel)): def test_getting_a_thing(self): m = MyModel() m.get('foo') assert m.db.calls('query', (), {'name': 'foo'}) This makes sure the “db” object in the MyModel class calls its “query” function with the query argument “name” equal to “foo”. Again this is pretty simple and is really helpful because you verify that you are using an API correctly without having to spin up some external service. This all works really well when you’re looking at mocking something one level deep, but often times it would be beneficial to go a little deeper. In my specific situation, I’m using a MongoDB connection, so I want to verify that specific things happened when I created the object and when I query something. To do this, I have to go a bit deeper with the calls. Here is an example: from dragoman.testing import DingusTestCase from dragoman import storage class TestLanguageList(DingusTestCase(storage.LanguageList)): def test_get(self): result = storage.LanguageList.get('foo') db = storage.get_db.calls('()').one().return_value collection = db.calls('__getitem__', ('config.languages')).one().return_value assert collection.calls('find_one', ({'abbrev': 'foo'})) My project is a gettext-like service and the test is for testing getting a language. You can see in the above code I have to go into the calls list and pull out the return value of the “get_db” function and continue to traverse the graph of calls to make sure the query was correct. What I’d like to do is to describe what it should have looked like. Something like this: def test_get(self): result = storage.LanguageList.get('foo') expected = Dingus() expected()['config.languages'].find({'abbrev': 'foo'}).count() assert expected.like(storage.get_db) Instead of manually traversing I can just utilize a Dingus and compare it with the one that was called previously. This is really helpful for defining protocols of sorts. Often times applications have a set of business rules that go along with certain actions and it can be difficult to test that sort of thing. The idea here is that those sort of protocols can be defined clearly in the tests and there is a simple way to test them. Here is another example. Say you have a User object that when saved needs to create or update a datastore. Here is what the test might look like: class TestUser(DingusTestCase(User)): def test_create(self): u = User('eric') mock_db = Dingus() mock_db.find('user') mock_db.create('user_%s' % eric) mock_db.commit({'username': 'eric', 'type': 'user'}) assert mock_db.like(u.db) Based on the tests we verify that the create method will be using a “user_” prefix in its create method. If we assume the database is a document store like CouchDB, then we can glean from the tests that there is a “username” field that is needed for documents of that type. Likewise, it is clear that the protocol for adding a user involves checking if the user there, creating the user then committing the transaction. Here is the implementation along with a few tests: from dingus import Dingus class LikeDingus(dingus.Dingus): def _flatten_calls(self, d): calls = d.calls() def flattener(cl): calls = [] for call in cl: if call.return_value: calls.append((call.name, call.args, call.kwargs, flattener(call.return_value.calls()))) else: calls.append((call.name, call.args, call.kwarg)) return calls return flattener(calls) def like(self, other_dingus): return self._flatten_calls(self) == self._flatten_calls(other_dingus) ## tests class TestDingusLikeStmt(object): def test_singular(self): tested = LikeDingus() tested('foo')['bar'].find({'name': 'baz'}) expected = Dingus() expected('foo')['bar'].find({'name': 'baz'}) assert tested.like(expected) def test_multiple(self): tested = LikeDingus() tested('foo') tested('bar') expected = Dingus() expected('foo') expected('bar') assert tested.like(expected) def test_the_order(self): tested = LikeDingus() tested('foo') tested('bar') expected = Dingus() expected('bar') expected('foo') assert not tested.like(expected) Seeing as I’m still learning to be a more effective tester, I can’t say whether these kinds of assertions are extremely helpful or not. I do think that defining the expected assertions in terms of some other Dingus seems helpful in that you get to keep on mental model. Also I think the definition feels more natural than focusing entirely on the calls method of a Dingus. What do you think?
http://ionrock.org/2010/03/11/adding-like-to-dingus.html
CC-MAIN-2017-17
en
refinedweb
On Fri, 26 Mar 2004, Edgar Toernig wrote:> Sridhar Samudrala wrote:> >> > The following patch to 2.6.5-rc2 consolidates 6 different implementations> > of msecs to jiffies and 3 different implementation of jiffies to msecs.> > All of them now use the generic msecs_to_jiffies() and jiffies_to_msecs()> > that are added to include/linux/time.h> >[...]> > -#define MSECS(ms) (((ms)*HZ/1000)+1)> > -return (((ms)*HZ+999)/1000);> > +return (msecs / 1000) * HZ + (msecs % 1000) * HZ / 1000;>> Did you check that all users of the new version will work correctly> with your rounding? Explicit round-up of delays is often required,> especially when talking to hardware...I don't see any issues with the 2.6 default HZ value of 1000 as they becomeno-ops and there is no need for any rounding.I guess you are referring to cases when HZ < 1000(ex: 100) and msecs isless than 10. In those cases, the new version returns 0, whereas some of theolder versions return 1.If i am not mistaken, Jeff Garjik/David Miller are the maintainers for mostof the users of these routines and i have got an OK from them. drivers/block/carmel.c drivers/net/tulip/de204x.c include/linux/libata.h include/net/irda/irda.h drivers/atm/fore200e.c include/net/sctp/sctp.hThe only other place where the older version is different is drivers/char/watchdot/shwdt.cDave, Jeff Do you see any issues with the new generic versions of these routines?ThanksSridhar>> Ciao, ET.>-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2004/3/26/154
CC-MAIN-2017-17
en
refinedweb
SUMMARY 01 02 Baltic Forum in Svetlogorsk The 11th session of the Baltic Sea States Council took place on 5-6 March 2002 in Svetlogorsk of the Kaliningrad Region. For 10 years of its existence, the Council has turned into a large and influential organization not only in the Baltic Region, but also in the world. One of the urgent issues discussed at the session was a problem of the economic development of the Kaliningrad Region being surrounded by EU countries. Experts of Trade Chambers of the Baltic States presented the results of the project on "Recommendations for the expansion of trade and investment in the Kaliningrad Region", the major of which was: Kaliningrad attracts now only academic interest, regional economy is not important for foreign investors. Increase of Kaliningrad importance in Europe requires joint effort on the regional and federal level. EU is expected to support these efforts. Development of the commercial and fishery ports of Kaliningrad is another local problem. Ice-free port is no longer vital as soon as its turnover of goods is of just 4% of the total turnover of ports of St-Petersburg, Estonia, Latvia, and Lithuania. Some reasons hamper the development of the port: location of the port within the precincts of the town and a narrow dredged sea channel do not allow vessels with tonnage of more that 25,000 tons to enter the port. There are some plans to turn the navy base in Baltiysk into а commercial port, because it will bring port facilities closer to the open sea. However, it will require the development of automobile and railroads. Special terminal will be needed for processing of exported goods such as oil, gas, some raw materials, timber, and fertilizers. Import to Russia is formed by consumer goods transported in containers. The major part of these goods is destined to the main part of Russia and in this connection location of Kaliningrad is not convenient. Existing competition among the ports will be more intense and railroad transport via Poland and Byelorussia round the Kaliningrad Region may become more important. Kaliningrad has also lost its significance as a fishery centre and there is little hope for its revival. Economic zone of Kaliningrad in the Baltic Sea is quite small and maintaining of a large fleet becomes unprofitable. It is more reasonable to have a small well-equipped fleet the growth of which is prevented by decreasing fish stocks. The session of the Council was preceded by a significant work on elaboration of the Federal Target Programme for the development of the Kaliningrad Region. This programme supposes allocation to the region of 10% of Russian quota for cod catch which will allow annual building of two modern middle-sized vessels at YANTAR shipyard. Development of the transport complex implies conducting negotiations with Lithuania on securing unified tariffs for the transit through Lithuania; construction of a deep-sea port in Baltiysk including a ferry terminal for connecting the Kaliningrad Region with the main territory of Russia; construction of automobile roads for a more convenient transit through the Kaliningrad Region. Between the Baltic Sea and the European Union A. Kuznetsov, Kaliningrad For the last 10 years the Baltic Sea Region has undergone some significant changes: a confrontation of two political blocks came to an end, several independent states appeared — Lithuania, Latvia, and Estonia - which soon may become members of the European Union, NATO strengthens its position here. In the result of these changes, EU and NATO member states will surround the Kaliningrad Region - a Russian enclave — in the nearest future. Serious political transformations are reasons for major problems and conflicts. However, we managed to avoid any new confrontations in the Baltic region. The main part in that belonged to the Baltic Sea States Council (BSSC) which was founded on 5-6 March 1992 for the promotion of new democratic initiatives, economic co-operation, health and environment protection, energy, transport, communication, etc. In the frame of the Council there were created several special and working groups, and the Advisory Council for business. The BSSC became a coordinator of a multi-lateral co-operation with many regional authorities. The Council influences the situation in Europe and its importance is proved by the fact that some of the remote countries, such as Great Britain, The Netherlands, the USA, France, Italy, Slovakia are observing states for Council. One of the well-known European newspapers mentioned that owing to the Council, the Baltic region did not turn into the second Balkans. The Council can maintain its high position in case it is able to adequately respond to new challenges for the Baltic Region, Europe and the world. One of the main problems for the Baltic community is the position of the Kaliningrad Region under the condition of EU expansion. It is possible to say that already now the Council is looking for the acceptable solutions of this matter. On the first place, it assists preparation and implementation of projects for the region and with the region. The first example of such project is Eurofaculty, and other initiatives will be launched soon. Problems of the Kaliningrad Region were also reflected in the declaration passed by the XI Minister session. The Council supports regional efforts to develop and underlines the importance of the region for the development of co-operation between Russia and EU. Ambiguity of the Kaliningrad problem can negatively influence regional co-operation and give rise to some confrontation within the Council. Our common task is to prevent such a scenario in which the Kaliningrad Region will be between the devil and the deep blue sea. Kaliningrad region and wto: peclaities of regional and branch consequesnces of the Russian federation joining World Trade Organisation G. Dykhanov, Kaliningrad The Kaliningrad Region is an only enclave of the Russian Federation. In order to compensate its location, a Special Economic Zone (SEZ) was created on the territory of the region in 1998. Joining WTO obliges Russia to undertake certain duties which can lead to a significant change of economic conditions in the Kaliningrad Region. This problem should be discussed in two aspects: regional and branch. The regional aspect is reflected through the SEZ regime which is the basis of the regional economy functioning. The core of this regime is in providing a system of customs facilities to economic organizations of the region. These facilities break the basic principle of WTO — all members of the World Trade Organization should apply a most-favoured nation treatment — and may be considered as a discrimination of rights of some other organization members. Russia undertaking of obligations corresponding to the requirements of WTO is a real threat of the radical change of the SEZ functioning mechanism, and even of its liquidation. But liquidation of the SEZ can be avoided. WTO allows some abnormality for the regions under constant adverse conditions. This refers to the Kaliningrad Region as well; so maintaining a SEZ status is a foundation for further economic development after Russia having joined WTO. The diversity of branch problems can be brought to the matter of how changes in customs legislation, agricultural policy, and liberalisation of service markets influence regional economy. The present economic structure was formed without any well-thought development strategy. In the result there was a model of a lop-sided import substituting economy taking advantage of high tariffs and customs duties. Under the new conditions, the regional production will be non-competitive even in the Russian market because of high transport costs. Orientation to export production on the level of world standards is a way out of this situation. It requires a primary strategy for the regional development. Bringing Russian customs legislation in correspondence with the norms of WTO will take some 5-7 years. This time is enough for adaptation of the regional economy to the new conditions. Major part of the service market sectors is undeveloped. It caused the high level of their liberalization. In this case even financial sectors (banks, insurance companies, non-governmental retirement funds) protected from competition, are unable to fulfil their functions. In spite of the well-developed structure, the financial sector is one of the weakest points in the regional economy. One should not expect that foreign banks will come to Russia just after its joining WTO. The main obstacle for that is in the unfavourable investment climate in the Kaliningrad Region. However, increase of competition in the financial sector in the long-term perspective may become a factor contributing to the development of the regional economy. Agriculture in all WTO member-countries is protected by the state. On the contrary, expenses for the support of agriculture in Russia and in the Kaliningrad Region in particular are much lower than that in other countries. Limited budget means lead to the necessity to maintain regional quotas for imported agricultural production and to direct proceeds of quota sales to support regional agriculture. This is the qualitative assessment of measures capable of compensation unfavourable consequences of Russia joining WTO for the Kaliningrad Region. On the results of the fishery complex work in the Kaliningrad Region in 2001 For the last 50 years a unique fishery complex has formed in the Kaliningrad Region. It includes fishing and processing enterprises, modern servicing infrastructure, branch science, and organizations for training and retraining of fishery specialists. Fishing enterprises focused on the foreign countries' zones and open parts of the World Ocean. Within the period of 1998-2001 while overcoming crisis in the fishery industry, the situation of the deep-sea fishery was stabilized, the work on resources development in Russian economic zone of the Baltic Sea was perfected and the infrastructure which is the basis for the further development of the industry, was maintained. Fishery Committee of the Regional Administration participated in the elaboration of the Complex Target Programme for the development of the fishing industry in the North-west of Russia for the period up to 2010. It was built upon an earlier developed programme for the development of the fishing industry of the Kaliningrad region for the period up to 2010. There was also drafted a Law of the Kaliningrad Region "On the fishery industry in the Kaliningrad Region". In 2001 there was caught and processed 334.6 thousand tons of fish and seafood. Overall catch grew for 10.9% in comparison with the previous year and amounted to 9.1% of the total Russian catch of fish. Quotas for catch in the Baltic Sea and its gulfs are almost taken out. At the same time, the main part of the catch is of cheap fishes (Baltic herring, sprat) - 77%. In this case the enterprises do not have a possibility to renovate production means deterioration of which varies between 60 and 80%. A steady growth of production is observed in the fish processing industry. In 2001 there were produced 146 million conventional cans of canned fish. This figure is 22 million higher than that for 2000 and amounts to 32% of all-Russia production of canned fish. Almost every enterprise involved in the fishery industry, participates in the foreign trade. Every year they export some 3.5 thousand tons and import 65 thousand tons of frozen fish. Fishing enterprises proved for the workload of shipyards in the region. Thus, in 2001 they spent 222.4 mln. Roubles for ships repair. Production growth is the reason for the increased tax proceeds to the federal, regional, and municipal budgets. Fishing enterprises are regular taxpayers, because the Fishery Council does not allocate fishing quotas with no documents from the rating authorities. In spite of some factors containing the development of the regional fishery, among the tasks for 2002 it is planned to start implementation of the Complex Target Programme for the development of the fishing industry in the North-west of Russia for the period up to 2010. On the first stage it is necessary to determine the place of the fishing industry of the region in the programme for the development of fisheries in Russia and to create a mechanism for renovation of main production means, to strive for the inclusion of investments to the regional fishery into the federal Programme for the fishing industry development and to assign to the region a part of quotas for catch in various zones for a period of 10 years, to propose to Russian Government to place the state order at the fishing enterprises of the region and to determine measures for the financial support of enterprises. Quality investigation of the World Ocean resources — a pledge for their rational usage A. Alexeev, V. Ponomarenko, Moscow The authors attract attention to the urgent problems of Russian fishery and its scientific provision. Fishing industry of the country has formed for some 50 years and in its final form it consisted of survey organizations, fishery fleet, and a network of scientific and educational institutions. Joint work of all these functional structures provided for the effective catch in the World Ocean. Scientific data allowed determining a long-tern perspective for the further development of the industry. Within the process of reforms taking place now, the fishery industry is almost ruined. Survey institutions practically stopped functioning. Fishing organizations were privatized and those remaining of the state ownership are scientific and some other institutions. And the decrease of scientific staff and investigation vessels resulted in the restriction of the possibilities for the latter to successfully operate. Now their investigation work is mainly connected with the limits of the economic zone. All the above has significantly decreased scientific support to the fishing fleet of Russia. Till now there is no law on fishery and water bioresources protection. Federal fishery committee is always under reorganization, employing incompetent people for managing the branch. These reasons created conditions for uncontrolled catch and import of fish, as well as for excess of quotas and catch limits. Catch statistics is no longer objective. Activities of the Federal Government and Fishery committee show misunderstanding of the specific character of problems in the fishery industry. The main task of scientific organizations and Federal Committees the authors see in study of commercial bioresources, assessment of their stocks and forecast of their dynamics for identification of overall permissible catch (OPC). The main stage in OPC forecast which includes identification of the stocks and acquiring a real view of the actual amount of catch, requires reliable statistics. However, nowadays it cannot be considered as a reliable one as soon as it does not account excess of quotas and illegal import of a part of the catch. The next problem refers to the issue of regulating and rationalization of bioresources usage and change to balanced fishery. For this information on the stocks of bioresources and safe level of catch is required. But the economic situation in Russia does not allow creation of a numerous scientific fleet. Building several fast vessels equipped with modern devices allowing quick and accurate assessment of hydrocoles stocks and the environment can solve the problem. Fishery surveys are commercially valuable that is why scientific institutions are members of the market relations with their specific place and legal grounding. Fishing industry is one of the most science consuming and science dependent branches. That is why it requires timely and sufficient financing. With no revival to the science, fishing industry will not be able to develop in Russia. Monitoring of the internal fish products market in the Kaliningrad Region in the 4th quarter of 2001 V. Teplitsky, Kaliningrad The article presents the investigation of the retail and wholesale prices dynamics, as well as price formation on fish products in the trade shops of the Kaliningrad Region in the 4th quarter of 2001. The investigation was conducted on the basis of analysis of 47 retail outlets. The chosen period differs from the same period of the year 2000 by the fact that due to favourable weather conditions Baltic herring and sprat was caught for longer time. It brought to the increase of sales of chilled fish and decrease of sales of frozen fish. However, this situation resulted not in the decrease of prices, but in their increase, especially for the fish products being of the high demand of the population, its part of scanty means in particular. Dynamics of retail prices increase is defined by changes of wholesale prices which grew much quicker than inflation. The prices were also negatively influenced by continuous growth of prices for energy and export of the part of popular production to other regions of Russia thus leading to the demand of Kaliningrad to remain unsatisfied. Increased wholesale prices gave rise to the increase of extra charges at retail shops of various property forms. All the above lead to further decrease of purchase capacity of needy and middle class people, as well as for fish products from neighbouring countries to enter to the Kaliningrad market due to the their cheaper prices. Collagen containing raw materials V. Kiselev, Kaliningrad Any processing industry always faces problems of effective usage of raw materials, reduction of production waste, widening of assortment, and increase of the production quality. This is especially true for industries involved into processing of collagen containing raw materials of connective tissues of animals the basis of which is formed by collagen fibres. The latter contains a group of proteins under a common name of collagen. Connective tissue provides for the strength of the internal and external structures of an animal; it is rich in valuable mineral substances and contains many amino acids and bioactive substances. Raw materials containing collagen are classified according to their technological features, end use and their origin. Production uses the capability of collagen for swelling in water, as well as acid, alkali and saline solutions. Swelling of collagen leads to the increase of the volume and mass of the raw material. Content of collagen in different tissues varies: its highest amount is found in bones, skin, tendons, cartilages, and bowels' walls. Raw material with collagen is a basis for food, medical, fodder, and technical production which often requires collagen dilution products (CDP). CDP is acquired after caustic and saline or enzyme treatment of the raw material. Swollen collagen is then dissolved in some organic acid. Caustic and saline method is cheaper than enzyme that is why it attracts scientific and practical interest. Enzyme method is used for getting alpha- and beta-collagen for further production for medicine. Baltic sturgeon (Acipenser sturio) fate R. Kolman, V. Liutikov The authors present historical fate of sturgeons found in the waters of the temperate zone in the Northern hemisphere — a so-called Atlantic (or European) sturgeon. The only representative of sturgeons in the Baltic Sea basin was Baltic, or German, sturgeon which was one of sturgeon's biggest variety: it reached 3 m in length and 300 kg in weight. Their natural habitat included all Europe from the White Sea to the Black Sea with Norwegian, Northern, Baltic and Mediterranean Seas between them. Thoughtless catch of this fish during several centuries was very intensive: the share of sturgeons in the total catch was of 10-80% which certainly resulted in the sharp decrease of sturgeons population, as well as their age and proportions. Within the first decade of the XX century, the catch of Baltic sturgeon decreased 4 times and by the 50s it lost its commercial value. This was a result of a rapid development of water transport, active regulation of river-beds, and waterworks. Sturgeon has lost its spawning sites. There arose a real threat that sturgeon would disappear in the Baltic Sea basin. Various measures taken for the restoration of the sturgeon population: restriction and prohibition of catch, attempts to conduct artificial spawning, etc., did not lead to any positive result. In the second half of XX century there were some single catches of sturgeon, but every year there are much less chances to catch one. In this case fishermen should be informed of the highest value of every fish caught which must be kept alive according to the law and for the possibility to use them as source material for the restoration of one of the most valuable fishes - Baltic sturgeon. Ports and the environment: A case study of the Port of Southampton, UK D. Johnson and J. Matthews, Southampton, UK This paper looks briefly at the history of the port of Southampton in the UK, and then presents an overview of the environmental impacts of ports and key legislation, which is shaping the response of port operators in Western Europe. Southampton has been an important port from at least Saxon times and the modern port was commenced in 1838 by the Southampton Dock Company. Although the port today is Britain's premier cruise port it has also always been an important cargo liner port. As for the environmental impact, the need to provide ships with safe access and shelter has inevitably resulted in habitat destruction. This is compounded by hydraulic changes produced by the construction of breakwaters, sea defences and enclosed docks. Of particular significance is the level of land claim or reclamation associated with port expansion. Greater ship sizes have also required extensions to harbour and jetty structures, together with deeper access channels. For future development and port expansion the central conflict is currently between commercial interests and the interests of nature conservation. Within the European Union the latter interests are safeguarded by various directives. In response to increasing environmental legislation Associated British Ports, the owners and operators of the Port of Southampton, has implemented an holistic environmental policy in order that their ports can identify how their statutory duties can be integrated with environmental goals. Perhaps the greatest challenge for the Port of Southampton in the immediate future, however, is the need to expand to accommodate the next generation of container ships. Russian - Danish co-operation for environmental protection of the Baltic Sea against oil spills and ship waste pollution S. P. Petersen, Denmark Due to its semi-closed nature of the Baltic Sea, its marine environment is heavily influenced by human activities, resulting in a severe impact on the environment. In order to protect the environment, Baltic countries signed the Helsinki Convention for Environmental Protection of the Baltic Sea area in 1974. Some obligations were to establish the ability to combat oil spills on the sea and to ensure facilities for proper reception of ship-generated and cargo-related waste in all Baltic Sea ports. As a part of an international environment assistance programme, the Danish Cooperation for Environment in Eastern Europe (DANCEE) has financially supported a number of co-operation projects in contingency planning for marine oil spill preparedness and response and with ship waste management planning in Eastern Baltic Sea ports. Two of them were established as Russian-Danish co-operation projects. The increased activities related to ship-transport of oil and oil terminal import/export have raised concerns for the potential hazards posed to the environment in case spill incidents occur. As a consequence hereof, Russian responsible authorities have been in a process to review and update the regional capacity for marine spill incident response in the Baltic Sea area to protect the marine environment, its natural resources and amenity values against pollution with oil or other harmful substances. The State Marine Pollution Control Salvage & Rescue Administration (Gosmorspasslujba) under the Ministry of Transport of the Russian Federation has recognised the necessity of re-considering the Regional Oil Spill Contingency Plan for the Russian Baltic Sea Response Zone. In September 2001, the implementation phase of the on-going Danish-Russian co-operation project for updating of the Russian regional oil spill contingency plan was initiated. In a preceding feasibility study for the project, an assessment of the overall Russian marine spill contingency set-up in the Russian Baltic Sea response zones covering the St. Petersburg and Kaliningrad Region was prepared. The assessment included recommendations for spill response equipment to be purchased in the implementation part of the project. The purchased equipment is planned to be delivered in Spring 2002. The overall long-term objective of the project is to contribute to the protection and sustainable use of the natural resources and the recreational amenities at the coastal areas of the Eastern Baltic Sea by minimising the consequences of oil pollution. The major short-term objective of the projects is to provide assistance in fulfilling the requirements of the Helsinki Convention and the OPRC Convention [2,3] by increasing capacity for handling accidental or deliberate oil pollution incidents in the Russian regional response zone. The project is implemented in close coordination and contact with all involved authorities on regional and federal level. It is planned that the project will be finalised by the end of year 2002. The protection of the marine environment in the Baltic Sea area against the impact of pollution from various types of ship-generated waste is overall regulated by specific national legislation and the rules and recommendations in the two international conventions for marine environmental protection. Slop water, oily ballast water, heavily oil-contaminated bilge water, garbage and other wastes must not be pumped or thrown into the sea in the Baltic Sea Area. All these types of wastes must be kept on board until the ships reach the destination ports. In these ports, there should be adequate reception facilities where the waste can be handled in an environmentally acceptable way. Untreated sewage water must not be discharged into the sea less than 12 nautical miles from the nearest coast. Experience from a number of European ports has shown that a detailed planning process is needed in order to establish a ship waste management system in the port which will comply with the requirements of the MARPOL 73/78 Convention, be economically sustainable and not cause the ships any undue delay. DANCEE programme has supported a number of projects in the Eastern Baltic Sea area for development of ship waste management and handling plans. In these projects, a ship-waste handling and management plan has been developed and accordingly approved - integrating recommendations from the Helsinki Commission and also considering fully the requirements from the recently enforced EU Directive. The projects have included an assessment of existing reception and treatment facilities, if any. In some of the involved ports, the projects have also included planning for and construction of facilities for reception and treatment of ship waste. Issues of international co-operation in monitoring transfrontier water basins of the Kaliningrad Region S. Kondratenko, M. Durkin, Kaliningrad The Kaliningrad Region is among the first facing urgent water problems one of which is the issue of superficial transfrontier water stretching to the neighbouring countries of Poland and Lithuania. Major part of pollutants gets to this water already in these countries and later on the territory of the region. 60% of drinking water is supplied from open water basins. The more contaminated water in these basins, the more complicated and expensive its purification and the worse quality water we drink. Vast polder territories make water basins dependent on floods and run-ups. The above phenomena determine an urgent need for a close co-operation with the neighbouring countries which will primarily deal with the effective regional water monitoring and management. An exchange of up-to-date information on the transfrontier water basins is of the same importance. Some work in this direction has already been conducted with Lithuania for quite a long time. An Agreement on co-operation in environmental protection was signed. Under an additional agreement a Commission for environmental protection was created. An expert working team for drawing a plan of co-operation in monitoring of superficial, ground- and seawater was founded. Such a group in Kaliningrad was created on the basis of the regional administration; however, this group has not yet done anything significant. Moreover, the Kaliningrad party cannot independently solve organisational matters on joint realization of the ecological policy. Water monitoring in the Kaliningrad Region is conducted by three representations of federal authorities the work of which is co-ordinated by the Committee for Natural Resources of the Kaliningrad Region. At the same time this co-ordinating body does not practise water monitoring. There is almost no information exchange on ecological issues. Besides, those people who provide their foreign partners with such information are prosecuted. In order to solve the matter of information exchange under the present conditions, the administration of the Kaliningrad Region should undertake this responsibility which is quite real in connection with redistribution of competence determined by a new Federal Law "On environmental protection". Ecological aspects of development of the offshore oil field on Russian shelf of the Baltic Sea. Estimation of the ecological impact O. Pichuzhkina, Kaliningrad Offshore oil field "Kravtsovskoye" was discovered in 1983. It is located outside Russian territorial waters within 22.5 km from the coastline. It is supposed that the field will be developed from a sleetproof stationary platform (SSP). The project is based on the principle of minimization of damage caused to the environment at every stage of construction and operation. Accepted technology provides for the "zero discharge" of pollutants into the sea during the design mode of operation. The greatest influence on the marine biota is predicted in causing anxiety to fishes and birds. But this influence is estimated as a local, brief, and low-intensity one. Calculation of pollutants emission to air showed that onboard engines would produce the maximum pollution during construction of the SSP. This kind of volatile pollution will be observed within the distance of 5 km from the SSP, i.e. in Russian territorial waters. During well-boring and operating air pollution will be possible at a distance of 1.5-1.7 km from the platform, meaning that it will not reach neither the Curonian Spit nor the Russia-Lithuania frontier. Some pollution to the bottom layers could be caused by stratal water, but the technological process does not allow its emission to the sea. Sanitary wastewaters will not contaminate the bottom, as soon as their emission to the sea is also excluded. Ecological investment is calculated for some 5 million USD and compensation for fish stocks reproduction is put to 126.7 thousand USD. The structure of ecological expenses also provides for the fees for use of nature resources and environmental pollution in accordance with the established procedure. In order to assess the marine environment condition, a series of ecological studies was conducted in 1988-1997. Principles of ecological monitoring organization for the period of the field development were elaborated. The purpose of monitoring is in organization of regular observations of the influence level on the marine environment with the following analysis of the results obtained compared to the stated norms. Tests of bottom sediments and sea water, as well as monitoring of the hydrocoles in the area of the SSP will be conducted on terms approved by the environmental authorities. Accounting ecological aspects of the planned activities and an end-to-end solution to the problem of its negative influence minimization will allow successful developing of oil extraction on the continental shelf of the south-east Baltic Sea. Once again on the energetic safety of the Kaliningrad Region A. Vasiliskov, Kaliningrad There is a certain danger to the energetic supply of the Kaliningrad Region due to the reason that up to 98% of power the region receives from outside through Lithuania. Attempts of the Baltic States to synchronize their energy supply network with EU requirements will certainly influence the energetic safety of the Kaliningrad Region. And construction of the Heat Station-2 of 900 megawatt will not solve this problem. Energetic independence of the region from neighbouring countries can be achieved through diversification of the supply and development of the regional power base. A new powerful source of energy will significantly raise the regional supply and use if modern technologies will not cause any negative influence on the environment. Heating of all Kaliningrad will also be improved. For this positive outcome Heat Station-2 should be supplied with fuel. Russian power engineers insist on using one sort of fuel - natural gas. The present consumption of natural gas is of 600 mln. m3, the new station will require 1200 mln. m3. Use of local peat, oil, and brown coal is excluded. Gas would be delivered from Urengoy through a pipeline of Moscow-Vilnius-Kaliningrad. Its capacity is limited, gas pressure mainly depends on the takeouts of the intermediate consumers in Byelorussia and Lithuania and sometimes it drops to the critical level of 4-5 kg/cm2. In this case, gas supply of the Heat Station-2 becomes impossible. The second pipeline will not improve the situation either: regional life will be more dependent on gas; in the present state it is better to look for some other ways to solve the problem of the energetic safety of the region. Some measures suppose a change of approaches to the decision of the matter: there must be created a flexible fuel and energy complex independent from one type of fuel. Thus, the energetic safety of the region will be provided on the account of usage of local fuel requiring less financial expences. Perfection of starting devices of marine engines by use of overrunning clutch A. Vasilyev, O. Sharkov, Kaliningrad The present development of the fishing fleet in Russia is connected with use of boats and medium-size vessels. Almost 85% of Russian fleet consists of this type vessels. Such vessels' engines are put to operation with an electric starter the reliability of which depends on the regularity of the overrunning clutch (OC). There are mostly used a roller and ratchet OCs, but they are not considered 100% reliable due to some constructional disadvantages. Attempts to correct their design do not guarantee elimination of all drawbacks and make their production, operation and repair more expensive. The main line of developing OC's construction is to increase their output capacity maintaining the same dimensions and to provide contact free running of elements. Eccentric OCs meet these requirements completely. Experimental data proves that eccentric OCs are up to 4.3 times more rigid than standard roller OCs and, consequently, their reliability is also higher, and loss by friction in the operating mode is 2 times less than that of the roller OCs. Eccentric OC is also simpler in operation and maintenance. Increase of effectiveness of high-viscosity fuel usage on shipboard B. Zavgorodny, Odessa A tendency to decrease costs for purchase of fuel for onboard engines forces to use high-viscosity oil products which include fuel oil being the end product of oil processing. However, minimal content of hydrogen and increased amount of various contaminants influence calorific value of fuel oil and accelerates the engine's deterioration. Within a long-term storage of fuel oil, chemical reactions turn a part of fluid fractions into precipitating solids. The quality of fuel oil is also worsened by an excessive watering, and mixing old and new fuels or lots of different origins with an incompatible molecular structures leads to a quick loss of fuel stability. Use of unstable fuel causes a rapid engine's deterioration. These are the reasons to search for new technologies to increase effectiveness of fuel oil usage for onboard engines. Improvement of the fuel oil physical and chemical characteristics is achieved by use of various homogenizers one of which was offered by Centre of Experimental Technologies "Hydrotoplivo". Its hydrodynamic plant has been successfully used in fuel systems of onboard engines since 1985. The plant uses intensive ultrasonic vibrations for homogenisation of the blended fuel. In this case long hydrocarbon chains are broken and structural changes occur on the molecular level thus leading to the increase of the fuel oil viscosity for 20%, density - 2.5% and significant combustibility. Quality features acquired in the result of homogenisation remain for a long time. Use of hydrodynamic plants is determined by a need to increase dispersion of fuel oil and conversion of engines to a high-dispersion water-and-fuel emulsion. Experiments conducted proved that adding 5-10% of water to the fuel accelerates the combustion rate for some 5-6 times as soon as thermal dissociation of water to oxygen and hydrogen increases combustibility of hydrocarbons. Thanks to a more complete and quick combustion of fuel, parts of engines do not get dirty with combustion products and take less risk of abrasive deterioration. Less detrimental compounds are produced, fuel consumption decreases, and no failures of hydrodynamic equipment are observed. At the same time servicing of this equipment is very easy. KAMCHATKA IS GIVEN UP for LOST S. Vakhrin, Petropavlovsk-Kamchatsky The article attracts attention of the people and the government to economic problems of Kamchatka. The region is now under the threat of economic catastrophe. Fishery was always a priority in the economy of the region; it was titled "a fishing factory" of the country. However, the share of fisheries in the economy has recently decreased, but debts of enterprises increase even quicker. At the same time tax proceeds to budgets of all levels is cut. With its policy of quotas for fishing the Government pushes the region to bankruptcy because the major part of fish resources goes to foreign fishing companies either directly through auctions in which foreigners may participate with no preferences to Russians or indirectly by crediting Russian companies for participation in such auctions. The State Fishery Committee of Russia violates the procedure of quotas distribution for catch in the Far East. Finally, Kamchatka has lost a significant part of quotas for catch of valuable fishes and received a half of quota allocated for all the Far East for catch of not valuable fish as a replacement. The situation is aggravated by the preferences of the government to foreign vessels in the economic zone of Kamchatka. GEOGRAPHICAL DISCOVERIES OF THE DUTCH IN RUSSIAN ARCTIC ZONE: SEA EXPEDITIONS OF WILLEM BARENTS N. Vekhov, Moscow The period between XIII and XVII centuries in the human history is known as the epoch of Great Geographical Discoveries: the Portuguese, the Spanish, the Italians, and the English became outstanding seafarers. The Dutch discovered the part of the Arctic Zone known as Russian Arctic. They were attracted to this part of the World Ocean due to their wish to find a way to conduct trade with South-east Asia: they could not use southern shipping routes because the latter were under control of the English, the Portuguese, and the Spanish. And though the northern route to Asia has not been laid, the contribution of the Dutch to the investigation of the Arctic Zone is significant: they discovered several archipelagos and islands, gave the first description of the isles and the continental coast-line to the east from the Kola Peninsula, and charted them. The first Dutch expedition to the Arctic Zone took place in 1594: the Novaya Zemlya (The New Land) was described, and many strange animals met — polar bear, walrus, whale. The second expedition in 1595 included more ships than the first one, but from the very beginning of the sailing everything went wrong: accidents and unfavourable weather conditions prevented the ships from going further than the New Land. It was decided to stop here and study the lands discovered in 1594, in a more detail. The result of the second expedition disappointed Dutch merchants and the Government, so the next expedition of 1596 was less in quantity and equipment. And again hardships chased the travellers: the part of the expedition was bound in ice and it was decided to stay for wintering. It was the first and the hardest wintering in the history of developing the Arctic Zone — winterers had three major troubles: finding food, keeping warm and defence from polar bears attacks. Many people died, and in the end of the wintering Willem Barents fell ill. During the back sailing, on 20 June he died and took his final asylum in an ice grave. Within the course of time the sign of the grave disappeared and the searches conducted in 1995 and 1998 were unsuccessful. Translated by Anna ROMANOVA 1-97 / 1-98 / 2-98 / 3-98 / 4-98 / 1-99 / 2-99 / 3-99 / 4-99 / 1-01 / 2-01 / 3-01 / 1-02 / 2-02 / 3-02 / 1-03
http://mi32.narod.ru/engl/01-02.html
CC-MAIN-2017-17
en
refinedweb
Pisa and Reportlab pitfalls Generating PDFs with Django, Pisa and Reportlab and what to look out for About a week ago an entry about generating PDFs with Django was posted on the Uswaretech Blog. In particular this blog post talks about using Pisa, a html2pdf python library to generate complex PDFs from existing HTML pages. I now took the chance to finish the draft for this blog post you are reading right now, which was lying around for about 2 months, which I originally wrote to point out some pitfalls I ran into while using Pisa in a Django project. I'm using Reportlab for PDF-generation, which is a very powerfull open-source python library. Reportlab features both, a good low-level API for generating documents and an higher level abstraction with has an layout-engine, which knows where to do pagebreaks and such things. Some documents are easy to build using the reportlab API, especially documents which contain much text and are not so heavily styled. For documents which are heavily styled I added one more tool to do the heavy job, while I could concentrate on technologies, which I'm fluent in. The solution was to use pisa and write the documents in plain old HTML+CSS. Pisa is an open-source python library which uses html5lib to parse HTML (with CSS) documents and then creates a PDF using reportlab. The results are pretty good and pisa provides some vendor-specific CSS extensions, which allow styling pages with different templates and adding static headers and footers. Additionally there are some pisa-specific XML-tags, like <pdf:pagenumber />, which allow adding pagenumbers, pagebreaks etc. to the resulting PDF. Both, reportlab and pisa, have some documentation, but I want to document some gotchas I couldn't find in the docs, which took some time for me to figure out. (And I hope to save someone else some time figuring this stuff out.) Where are my pagenumbers? As said before pisa allows adding static headers and footers via CSS, this is documented very well, so will not repeat it here. One problem I ran into was adding a static footer to my pages, which contains a <pdf:pagenumber /> tag and should show the current pagenumber at the bottom of every page of the resulting PDF. The problem was, that every page just showed the number 0. The solution was very simple, but I was only able to figure it out after studiying the pisa source-code: You can only use the <pdf:pagenumber /> tag inside a paragraph, not (as I did) inside a table for example. Even parapgraphs inside tables don't work. It has to be a top-level paragraph. Wrong Pagebreaks The documents I was generating are starting with a headline, short introduction text (about 5 lines) and then follows another headline and a long table (more than one page). Pisa and reportlab know how to do pagebreaks in tables, but I had the problem, that everytime the table was longer than the remaining space on page one a pagebreak appeared directly after the introduction text, the table starts at the top of page two and was correctly split over the next few pages. The pagebreak was added by reportlabs layout-engine (platypus), which is rather smart, but I had to find out what was going on, before I could understand why this was happening. The layout-engine knows a concept of keep-with-next, which avoids orphaned elements on the bottom of a page. Pisa assignes a default keep-with-next attribute to all HTML headers (h1-h6), which is a good thing most of the time, but had the following consequences in my case: reportlab knows that the headline before the table and the table should be kept together, because both together don't fit on the remaining space on the first page, they are moved to second page. They don't fit on this page either, but now the pagebreak is done in the table, because nowhere in the document will be more space as on a new blank page. The solution to avoid this pagebreak and just have a normal break inside the table is to assign a css style of "-pdf-keep-with-next:false;" to the headline just before the table. This will tell pisa to tell reportlab not to use a keep-with-next around the headline and the table. Reportlab will put the headline on the first page, then the table and will notice that the table don't fit on the page and will add a pagebreak inside the table, just as one would have expected. Adding Pictures to the PDF This one is rather trivial and not really a pitfall, but as it fits nicely into this topic I'm going to write it down here. To be able to get pictures into the PDF, which are visible on the HTML page, you should define a link-callback function, which knows how to translate a src attribute from the HTML document to a local path to the image. If your are processing remote files, this callback could even fetch the image, but it has to return a path to image where reportlab can find it on the filesystem, not a file-like object or something else. A very simple link callback function which should work for most Django project could look like this: import os from django.conf import settings def fetch_resources(uri, rel): """ Callback to allow pisa/reportlab to retrieve Images,Stylesheets, etc. `uri` is the href attribute from the html link element. `rel` gives a relative path, but it's not used here. """ path = os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, "")) return path My company has try both reportlab and pisa but found none good enough. Then we tryied pyuno, the OpenOffice python api. You can do some interesting thinks using this, such as modify the resulting document exporting it to odt, import html into a document, use template based design for documents and so on. We found it the most interesting alternative for now. i encourage to give it a chance. Geschrieben von esauro 2 Stunden, 5 Minuten nach Veröffentlichung des Blog-Eintrags am 16. Okt. 2008, 12:44. Antworten pyuno looks interessting, but a bit more complex than pisa+reportlab. Thanks for pointing out, I will look into it the next time I need such functionality. Geschrieben von Arne 2 Stunden, 12 Minuten nach Veröffentlichung des Blog-Eintrags am 16. Okt. 2008, 12:51. Antworten Olive: I've done this already, but I think the solution with pisa is much easier, considering that the HTML page is already existent. Geschrieben von Arne 7 Stunden, 15 Minuten nach Veröffentlichung des Blog-Eintrags am 16. Okt. 2008, 17:54. Antworten Hi Arne, I am the author of Pisa. I like this article very much and will try to eliminate the described pitfalls as soon as possible. Thanks for using the tool. If anyone else traps into problems with Pisa do not hesitate to join the mailing list and I will try to help as soon as possible: Dirk Geschrieben von Dirk Holtwick 6 Tage, 1 Stunde nach Veröffentlichung des Blog-Eintrags am 22. Okt. 2008, 12:10. Antworten hi arne, i'm fairly new to django and not too fluent in python either. your approach to adding pictures to the pdf sounds quite interesting, but how do i actually use it in my code. i have rendered my template and have the html file i want to convert, but how do i actually change each path in the existing html? amy Geschrieben von amy 1 Monat, 3 Wochen nach Veröffentlichung des Blog-Eintrags am 7. Dez. 2008, 22:29. Antworten Amy, you don't have to change the paths in your HTML template, as long as the pictures show up correctly in a webbrowser. The code above is a function, which will be passed to the pisa function, which contructs the PDF. The purpose of the function is to translate the Image-URLs in the HTML page to filesystem paths, which can be used to embed the picture in the PDF document. Geschrieben von Arne 1 Monat, 3 Wochen nach Veröffentlichung des Blog-Eintrags am 8. Dez. 2008, 06:16. Antworten hello arne, when trying to handle pagebreaks IN tables with pisa i'm facing some problems: i'm not sure how/where to apply the "-pdf-page-break". as you pointed out in your article "Pisa and reportlab know how to do pagebreaks in tables" it would be great if you could give me a short example. thanks a lot, axel Geschrieben von axel 2 Monate, 3 Wochen nach Veröffentlichung des Blog-Eintrags am 7. Jan. 2009, 17:10. Antworten Hi axel, "Pisa and reportlab know how to do pagebreaks in tables" means that they will automatically break a table that is longer than one page at the end of the pages and continue the table on the next page. Applying -pdf-page-break should not be needed inside the table. I don't know if it would be even possible but I suspect not. Maybe you could ask for help at the google group for pisa: Geschrieben von Arne 2 Monate, 3 Wochen nach Veröffentlichung des Blog-Eintrags am 7. Jan. 2009, 17:35. Antworten hey folks! i found another little pitfall in the image-processing-part (that i could find neither in your information nor in the official documentation), so i'm going to post it here, maybe it saves somebody's valuable worktime. you _have_to_ use the height and width attribute in the <img /> tags of the images that should be placed in the pdf. if you don't have them, pisa will just ignore the images. arne, thanks for this site, it saved me a lot of time :) cheers, nitram Geschrieben von nitram 4 Monate, 1 Woche nach Veröffentlichung des Blog-Eintrags am 23. Feb. 2009, 14:24. Antworten @nitram: Thank You nitram!!!! Thanks a lot, really. You're _have_to_ use advice helped me alot. Geschrieben von Gaston Ingaramo 1 Jahr, 2 Monate nach Veröffentlichung des Blog-Eintrags am 7. Jan. 2010, 15:50. Antworten Hi, Thanks for your article. I am successfully converting an html page with multiple tables to PDF using tips from this article and the one at UsWareTech. However, the rendered PDF does not show any table styling. Not even simple table borders, only text. My html tables are styled with CSS, with lots of different styling for <th> <tr> etc. Is it possible to have that styling show up in the generated PDF? Geschrieben von S Kujur 1 Jahr, 6 Monate nach Veröffentlichung des Blog-Eintrags am 19. April 2010, 07:11. Antworten Hi, When I add <pdf:pagenumber/> to my file I get double page numbers e.g. 11 instead of 1 on the first page. Any thoughts? Geschrieben von Mike Gauthier 1 Jahr, 8 Monate nach Veröffentlichung des Blog-Eintrags am 17. Juni 2010, 01:39. Antworten It is bad form to user spaces in URLS. I was working on a site today, where the admin had uploaded some images with spaces in the names. While most browsers tolerate such bad URLS, the html to pdf program did not accept them. In Django I created a template filter to replace the spaces with %20 and now everything is working fine. Geschrieben von Paul Egges 1 Jahr, 8 Monate nach Veröffentlichung des Blog-Eintrags am 13. Juli 2010, 06:51. Antworten
http://www.arnebrodowski.de/blog/501-Pisa-and-Reportlab-pitfalls.html
CC-MAIN-2017-17
en
refinedweb
mastercoin MasterCoin quotientcoin_xqn.png 2014-11 161 XQN Quotient 2014-11-11 1% 61 1618033 1,618,033 coins (proof-of-work), 1,618% PoS interest rate, 618 coin cap on stake reward, 61 second block spacing, 161 blocks to maturity, Min Coin Age 16 hours, 1% Premine (16,180 coins) - so the developers are well fed ico-pow-engagement Dissemination via purchase, then by proof of work and then by giveaway ico-engagement Dissemination via ico, then media engagement profitcoin_pfc.png 2014-11 PFC ProfitCoin 2014-11-11 20% 7 300 premined 10000000 PFC is the only one cryptocurrency with guaranteed profit of 10 - 20% per month, Coins total: 10 000 000, 7 PFC per block, Block generation: 5 minutes, About 2 000 PFC generating every day, Premine 20%, We're planning to start accepting ProfitCoins for hashrate purchases in hashprofit.com service before the end of November (approximately 25th of November), Post address 121 Amathountos Avenue, Agios Tychonas 4532, Limassol, Cyprus. nist6 Folklore combiner: composed of 6 algorithms approved by the National Institute of Standards and Techology (BLAKE, BMW, Grøstl, JH, Keccak, Skein) c11 Chained 11 hash functions vanilla Vanillacoin protocol. momsha momentum plus a round of SHA2-512 ico-freebie Dissemination via purchase and by giveaway 3s Folklore combiner: composed of SHAvite, SIMD and Skein. unknown Dissemination scheme is not known or undeclared czarcoin_czr.png 2014-10 CZR Czarcoin 2014-10-21 100% 60 premined 100000000000 Algorithm: Scrypt, ~100% Mined with fair distribution to the public, 100,000,000,000 CZR Issued with a target of 1% annual inflation. commodity coin for czarparty fimkrypto FimKrypto, Whirlpool Whirlpool realpay Realpay 2nd gen realpay RealPay intercoin Intercoin 2nd gen, details opaque intercoin InterCoin upcoin_up.png 2014-04 UP UpCoin 2014-04-03 (0.1, 4.1) 60 300000 Algo: Fresh, Block Time: 120 seconds, Block diff retarget: every block, Block reward: , 1st block premine of 14100 coins, block 2 - 100 : 0 UP for fair launch, blocks 101 - 7200 : 500 UP, blocks 7200 - 14000 : 250 UP, blocks 1400 - 28000: 125 UP, Total PoW coins 7'050'000 UP + premine, Stake interest: 15%, Min Stake Age: 4 hours, Max stake age 12 hours. growcoin_grow.png 2014-11 GROW Growcoin 2014-11-09 1005000 60 2587800 JH NIST 2nd round candidate dividend Dividend paid proportionally to owners of online wallets lioncoin2_lion.png 1 block 2014-11 10 60 LION2 Lioncoin 2014-11-23 0.5% 60 6400000 X15 Algo., Abbreviation - Lion, Total coins - about 6.4 Million., 60 seconds block ., Difficulty adjusts every block., 10 confirmations for transactions., 60 confirmations for mined blocks., 7% POS Annual , POS starts at block 7 000, POW Block 7 000 (about 5 DAYS)., POW BLOCK REWARD Block 1-200 -- 1 COIN, Block 201-400 -- 10 COINS, Block 401-600 -- 100 COINS, Block 601-7000 -- 1 000 COINS , , P2P: 20010 RPC:20020, 0.5% Premine ripple Ripple, 2nd gen ripple Ripple thiamine Folklore combiner: (BLAKE-512, SHA2-512, BMW-512, CubeHash-384, Whirlpool Enhanced, Groestl-512, JH-512, SHA3-512, Skein-512, Luffa-512, Tiger, Tiger v2, RipeMD-160, Cubehash-512, Panama, Shavite-512, SIMD-512, Echo-512, Fugue-512, Hamsi-512, Shabal-512, HavalType5-256, Standard Whirlpool) poc-pow Proof of chain followed by Proof of Work. nfd NFD NXT Fair Distribution poc-registration Distribution by proof of chain and by registration. stellar Stellar protocol. NeoScrypt NeoScrypt x12 Chain of 12 hash functions dcrypt SHA256 made difficult to parallelise via using leapfrog hashing pob Proof-of-Work plus Proof-of-Stake augmented by Proof-of-Burn nostrum Nostrum engagement Dissemination via engagement arenacoin_arn.png 2014-11 ARN ArenaCoin 2014-11-08 100% 30 1000000 ZeroCoin implementation pure POS, 500% annual interest, encrypted messaging. 100% of the supply is available for the initial coin offer, the price will be determined at the end of the ico based on total investments. Initial coin offer will be closed Tuesday 11 / 11 / 2014, the Coin will be distributed to all investors the same day. roulette Folklore combiner: first block is hashed with SHA2, then 16 rounds of hashing are performed, each round with randomly chosen algorithm from the set of 16 hashing algorithms: (BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, SHA3, Luffa, SHA2, Shabal, SHAvite-3, SIMD, Skein, Whirlpool). x17 Folklore combiner: composed of BLAKE, BMW, Grøstl, JH, Keccak, Skein, Luffa, CubeHash, SHAvite, SIMD, ECHO, HAMSI, Fugue, Shabal, Whirlpool, SHA2big, Haval 5-pass nxt NXT 2nd gen electric_ele.png 2014-01 ELE Electric 2014-01-02 60 superseded 10000000000 bitbaycoin_bay.png 2014-11 10 50 BAY Bitbay 2014-11-07 100% 64 closed source 1000000000 Coin Specs (POS 2.0):, Block time: 64 seconds, Nominal stake interest: 1% annually, Min transaction fee: 0.0001 Bay, Confirmations: 10, maturity: 50, Min stake age: 8 hours, no max age, Total: 1 Billion, Ticker: BAY, Smart Contracts , Advanced Smart Contracts: Machine to Machine advanced contract fulfillment (IOT: Internet of Things), Multi-Sig, Joint Accounts, Muti-Coin Wallet Features, Decentralized marketplace (A new approach different from OpenBazaar), POS 2.0 (Theoretical “51% attack” is nearly impossible), Convert BitBay into a hedging coin (price will be pegged to RnB), Windows, Mac, Linux, and Mobile Staking Wallets (Android, iPhone, Windows), In-wallet resources (Chat, Block Explorer, Gaming, Community)., Proof of Developer (POD) and Crypto Certify ratings in progress. bitcoin2 Bitcoin2 protocol. airdrop-engagement Dissemination first via airdrop, then also media engagement cfc-pow Dissemination via crowdfund campaign and by proof of work scrypt-n Adaptive N-factor scrypt colbertcoin_clbc.png 2014-04 CC Colbertcoin 2014-04-12 unknown Protection scheme is not known or undeclared twe Chain of 11 hash algorithms faucet Dissemination via faucet poq Dissemination via performing in-game quests. Grøstl NIST 2nd round candidate engagement-pow Dissemination via engagement, then pow. glowshares_gsx.png 2014-11 50 GSX Glowshares 2014-11-10 60 1000000 Proof of Stake, Block Time: 1 Minute, Interest Rate: 7% annually, Minimum Coin Age: 8 hours, Maximum Coin age: 45 days, Block Maturity: 50 blocks yescrypt BOOST Y-scrypt node Node, rename of NXTL, NXT Lite adcoin_xad.png 2014-11 XAD Adcoin 2014-11-11 60 9800000 ADCoin(XAD) is a Scrypt based cryptocurrency with 60 seconds block time. As you can see from the title, ADCoin will be the game changer of cryptocurrency and online advertising industry. poc Dissemination via proof of chain boinc Based on research contribution to BOINC clearinghouse ClearingHouse por Protection solely by Proof-of-Resource boardcoin_board.png 2014-11 120 BOARD Boardcoin 2014-11-11 100% 60 12000000 We will have marketplaces on every forum where members can buy or sell stuff for Boardcoin. We attempt to back 1 boardcoin = $0.1 using our digital assets such as advertisements and classified listings. ico-freebie-pow Distribution via ICO, giveaway and Proof of Work. Dagger Dagger 2nd gen randpaulcoin_rpcd.png 2014-11 RPCD RandPaulCoin 2014-11-05 100% 20 Bitshares X copy 1000000 As per the BitShares community consensus, we will initialize the Genesis block with 10% AGS, 10% PTS — and as stated earlier, 80% for owners of Ron Paul Coin. dpos Protection solely by Divined Proof-of-Stake bitshares BitShares radix 2nd gen, no details, reportedly in development radix RADIX, in permadevelopment scrypt scrypt, based on tarsnap kind Goods or services in kind interest Interest pow-pos Proof-of-Stake, augmented by a fixed-duration Proof-of-Work phase hexioncoin_hxc.png 1 block 2014-11 HXC Hexion 2014-11-13 347.2 60 1000000 SHA256D, 1,000,000 Total PoW Coins, 347.2 HXC per block, 1 minute block intervals, 1 block difficulty adjustments, 2880 PoW Blocks ( 2 days ), PoS min age 1 hr - max age unlimited, PoS interest 10,000% Yearly entropybitcoin_ebt.png 2014-01 EBT Entropybit 2014-01-14 1% 1000 30 200000000 (see below). Scrypt-based cryptocoin with 1000 coins per block and 250 million total coins. ('h', 100000, 'b') nxthorizon Next Horizon noodlyappendagecoin_ndl.png 2014-01 NDL NoodlyAppendage scrypt-n-f Uses a fixed N-factor to avoid anticipated network issues fudcoin_fud.png 2014-11 60 FUD FUDcoin 2014-11-13 240 20001561 Ticker Symbol: FUD, Algo: SHA-256, Block Time: 4 minutes (240 seconds), Block Reward: Smoothly decreasing exponential (see chart below), Total PoW Blocks: 43,200 (~120 days), Money Supply: PoW 20,001,561 FUD, PoS Rate: 35%, PoS Min / Max Age: 6 days / 18 days, Mining/Minting Spendable: 60 blocks, Addresses begin with “F” novel Novel, singleton acro ACRO, in permadevelopment pow-ico Dissemination via proof of work and by purchase cfc Dissemination via crowdfund campaign pow Consensus mechanism obtained from Proof-of-Work x15 Folklore combiner. Chain of 15 hash functions nzh NHZ NXT New Horizon 2nd gen fico-ico-airdrop Dissemination by fixed ICO, then ICO, followed by airdrop bitcoin Bitcoin chance Gaming-specific meta-protcol doublecoin DoubleCoin, facetious clubbed Dissemination to membership Twister Twister poc-sale Dissemination via proof-of-chain and sale Luffa Luffa bcrypt blowfish-crypt scrypt-n-m Scrypt-n using M modifier, ASIC-hostile x11 Chain of 11 hash functions edgecoin_edge.png 25 blocks 2015-06 50 EDGE Edge 2015-06-07 0 100 60 1000000 EDGE: A I2P-Centric Gaming Crypto-Currency v.1.0 fixcoin_fix.png 2014-04 FIX Fixcoin block range coin rewards and 27 million total coins. ivugeoevolutioncoin_iec.png 2015-04 IEC IvugeoEvolutionCoin 2015-04-17 7.6 60 100000000 [IEC] IvugeoEvolutionCoin [SCRYPT][POW] - Gold Backed, Scrypt Algorithm, Proof of Work, Block time 1 minute, 7.6 coins per block, 100 million total coins. Every single coin has certain value in gold.To avoid reducing the value of our COIN it must always be balanced gold standard. Now we have 60% of the coins covered in gold(we have our gold in Gold wire - it's used in Electronics) with worth 36 million euros. Gold standard will be always increased with how many coins will be mined. Since we are trying to diversify risk, next increasing of gold standart will be in Troy ounces psilocybincoin_psy.png 2015-05 PSY Psilocybin 2015-05-27 60 2141400 [PSY] Psilocybin | SporeNet | No ICO/Premine | SHA256d | NINJA!, PoW Reward Structure:, Blocks 1 - 250 = 1500, Blocks 251 - 500 = 1000, Blocks 501 - 750 = 800, Blocks 751 - 1,000 = 600, Blocks 1001 - 1,250 = 400, Blocks 1251 - 1,500 = 400, Blocks 1501 - 1,750 = 600, Blocks 1751 - 2,000 = 800, Blocks 2001 - 2,250 = 1000, Blocks 2251 - 2,500 = 1500 positroncoin_tron.png 2015-04 100 TRON Positron 2015-04-11 0 90 2000000 Positron has a rapidly changing Proof of Work and Proof of Stake system, and it's first goal is to test an evolving POS system and observe it's effects on the blockchain and in the altcoin marketplace. Positron is built on the foundation of Bitcoin, PPCoin, Novacoin, and BitcoinDark, with a modified POS system., Short: TRON, Algorithm SHA256, 90 Seconds Per Block, 100 Blocks to Confirm, 20MB Blocksize, Proof of Work rejected after block 3400, Total approx potential coins from POW: 900,000, lower with mixed POS at block 2700. 3 days mining., Dynamic POS last approx one month. End Block 21,000, Total Approx Coins from first month of Dynamic POS: 614,000, Total Approx Total Coins 1.5m. After block 21,000 POS goes to approx 9% yearly. icecoin_ice.png 2013-06 ICE IceCoin 2013-06-11 DoA, relaunched by someone else and dead again. distrocoin_distro.png 1 block 2014-05 DISTRO Distrocoin 2014-05-12 0.7% 50 100000000 Stopped 5/15/2014. Sources removed. ('h', 1000000, 'b') bluechip_bch.png 2014-06 10 30 BCH Bluechip 2014-06-11 1% 60 15000000 BlueChip Specifications:, X13 algorithm, 15 million coins mined during the PoW phase, 1% Premine, 7 Day PoW, 15 % PoS, Confirmations - 10, Maturity - 30, PoW BlueChip Block Reward Structure:, 2-100: 1 BlueChip, 101-3100: 2,000 BlueChips (6mil ~2 days), 3101-8100: 1,000 BlueChips (5mil ~3.5 days), 8101-10,100: 2,000 BlueChips (4mil ~1.5 days mybroscoin_mbc.png 24 hours 2014-01 MBC Mybrocoin 2014-01-18 0 100 120 premined 100000000 re-target every 2016 blocks, 100 coins per block, and 100 million coins. Mario Bros Coin - MarioBrosCoin Abbreviation: MBC Algorithm: SCRYPT Date Founded: 1/18/2014 Total Coins: 100 Million Confirm Per Transaction: Re-Target Time: 24 Hours Block Time: 120 Seconds Block Reward: 100 Coins Halved 840k coins Diff Adjustment: Premine:187 Blocks. ('h', 840000, 'c') whycoin_why.png 2015-10 50 WHY Whycoin 2015-10-15 100% 90 50000000000 Type: Proof of Stake (PoS), Block Time: 90 seconds, Stake Interest: 5%, Minimum Coin Age: 8 hours, Block Maturity: 50 blocks, 50 Billion pre-mined with safety in mind that if all 50 Billion were to be staked at 20% interest, it would be a 12 billion rise yearly. This has been stated to not be the case, but on the off-chance it were consistently staked it would run through safely for a bit under 3 years. If later on this coin were to achieve widespread adoption, an upgrade and refactoring of the system could take place with a hard fork to push it beyond these limitations. firecoin_fec.png 2014-01 FEC Firecoin orangecoin_oc.png 2014-05 OC OrangeCoin re-target every 1 block, 5000 coins per block, and 200 million total coins. emiratescoin_uae.png 2014-05 UAE EmiratesCoin 651000000 930000000 1) About UAE United Arab Emirates - the land of great prospects. Bitcoin isn't banned here. Moreover, here are installed ATMs which give an opportunity to buy Bitcoins. Such possibility significantly promotes the development of cryptocurrency. The number of rich citizens and tourists in the country is great. National cryptocurrency can attract wealthy Arab Investors to the EmiratesCoin as well, as to the Bitcoin. It allows to strengthen and even increase the Bitcoin's rate. We are planing to distribute the coins to the population of the EmiratesCoin to catch Arab investor's attention. This operation will be carried out this summer. Our currency is created for direct exchange for Bitcoin. Trades will be able to exchange EmiratesCoin and vice versa right in our purse using exchange BTC/UAE, development of which we will begin in the near future. United Arab Emirates is a new state, which has a great stocks of natural resources, such as oil and natural gas. Every fifty-fifth country citizen, according to statistics, is a millionaire. Gross domestic product is about 384,196 billion dollars. So, citizens of this country are always ready to go on invests in perspective economic spheres, one of which is a cryptocurrency exchange. So Implementation of the national cryptocurrency could be a new breakthrough in the economic of UAE. 2) Specifications Total coins: 930,000,000 Premine: 651,000,000 IPO and Bounties: 23,250,000 For UAE population: 627,000,000 Avaible for mining: 279,000,000 PoW:X11 DGW 60 second block times Block Rewards 1-300,000 500UAE 300,001-600,000 300UAE 600,001-900,000 100UAE 900,001-1,000,000 90UAE 3) Airdrop AirDrop will be held in 6 stages. Each resident will receive 60 coins a half years. The first stage will be held the first of June and here will be given 15 coins. The following steps will be carried out everyfour months with reduction in the number of coins in each stage by 2. (1st stageâ15 coins, 2ndâ13..., 6th stageâ5 coins). Coins can get any UAE citizens that have Emirates Identity Card. We have links with some private companies in the UAE that have an opportunity to conduct verification by the identification card number of UAE citizen. This will allow us to avoid fraud by unscrupulous participants of our event. Here are five simple steps to getting EmiratesCoin for UAE residents: 1) Name card's holder should proceed to the section of our website, called AirDrop. 2) On the next page, the user must enter his identification number (ID). 3) If the ID, after checking carried us, belongs to the citizen of region Arab Emirates, the user will be transferred to the next page. 4) On the new page, the user will be offered a choice: to authenticate through a personal page on Facebook, or via cell phone number. At the appropriate place will be sent a confirmation code that should be entered in a special field on our web site. 5) Upon successful passing of the last stage, the user should enter his EmiratesCoin purse number in the window that appears. If this wallet doesn't exist then the new unique number wallet is created for user. In the near future purses released for mobile devices. At this stage the process of receiving the coins to the user is ending. 4)IPO Your investment will serve as a support for EmiratesCoins. All the money will go to the improving equipment, ensuring stable operation of the servers, advertisement, creating an online store and exchange. All of this will consolidate UAE coin on the cryptocurrency exchange as one of the most perspective and reliable coins. Total coins for IPO: 23,000,000 Price: 0.0000025 UAE/BTC IPO Wallet: How to invest? Click on the link: Step 1: send BTC to 1BzgQ3sRNABgSHcxairjEtXfLBwZQBKCQ4 Step 2: Enter your bitcointalk name(if you have it) in "Name". Step 3: Enter your REAL e-mail in "Email address". Step 4: Paste your TXID in "Transaction ID(TXID)". Step 5: Click "Send". 5) Extra Info Wallet preview: Video: Bounties: Spanish translation - 5,000 uae Chinise translation - 10,000 uae Russian translation - 10,000 uae German translation - 5,000 uae Italian translation - 5,000 uae Fisrt Exchange - 50,000 uae Second Exchange - 50,000 uae First Mining Pool - 35,000 uae Second Mining Pool - 35,000 uae Fisrt p2p Pool - 20,000 uae Second p2p Pool - 20,000 uae megcoin_meg.png 2014-05 MEG Megcoin 2014-05-03 -1 eticoin_etc.png 2014-03 ETC Eticoin 2014-03-01 60 13500000 Posted on March 1, 2014, issue special coins E is the total four-year total 13.5 million, of which 7.2 million the first year, 2,000 a day, the second year of 3.6 million, 1,000 a day, the first 1.8 million years, 500 a day, in the fourth year 900 000, 250 a day. The average daily total allocation for 24 blocks, each one hour to produce a block and equal number of each block of the mine pool. All mining machine operator force a mining exploration focus, this block is decrypted when open, then the block is automatically assigned to all mineral pools involved in mining ore machines which, based on how much is allocated to each respective mining machine the ability to run integrated computing competence. coinmarketscoin_jbs.png 1 block 2014-09 10 500 JBS Coinmarketscoin 2014-09-01 60 rebrand to jumbucks 7750000 Specifications,., Ticker symbol: JBS (1 Jebus. 0.00000001 JBS = 1 Baby Jebus. Plural is Jebii), Proof of work, Algo: scrypt, Block reward:, 1-120: 1 JBS (Fair launch), 120-31000: 250 JBS, no halving, Max height: 31000 (after this network will not accept PoW), Max supply after PoW ends: ~7.75 million JBS sproutcoin_sprt.png 2015-06 SPRT Sprouts 2015-06-30 150 -1 Sprouts - The Entry level Cryptocurrency., Hybrid - PoW/PoS, PoW Algo - SHA256D, PoS Reward - 5% 5Days, BlockReward - Depends on Difficulty, Blocktarget - 2.5 minutes sjwcoin_sjw.png 10 hours 2015-06 SJW SJWcoin 2015-06-20 5000 60 tacocoin clone 50000000 SJWcoin.com, you better check that privilege, 5K blocks every minute, 10 Hr Re-Target, Merge-Minable, Not Racist, Reddit tipbot (/u/sjwcointipbot), block-explorer , Hates misoginysts ('h', 50000, 'b') ponycoin_pony.png 1 hr 2015-07 PONY Ponycoin 2015-07-22 60 60 108000000 Ponycoin is built off of the Bitcoin 0.8 wallet. turbocoin_xtb.png 2014-01 XTB Turbocoin candlecoin_cd.png 2015-11 CD Candlecoin 2015-11-22 5% 60 30000000 CANDLECOIN, CD, Algo : SHA256d , PoS 30 % Annual , Blocktime - ~60 seconds , RPCPORT : 38876, P2PPORT : 38877 , Min. Stake Age : 3 Hours , Max. Stake Age : 90 Days, 30 Million CD honorcoin_xhc.png 2014-06 80 XHC Honorcoin 2014-06-04 2% distributed 1620 20 21000000 Algorithm: X11 | 100% PoS (After PoW), Total Coins [PoW]: 21,000,000, Block Times: 20 seconds, PoW Blocks: 12960 | 3 Days, Block Reward: 1620 coins, Blocks <101: 1 coin, Coin Maturity: 80 blocks, Annual Interest: 5%, Min Coin Age: 6 hours , Coin Age Maturity: Unlimited, Free Distribution: 2% | 420,000 coins axiocoin_axio.png 2014-06 4 50 AXIO Axiocoin 2014-06-29 60 100000000 soundbit_snd.png 1 block DGW 2014-07 SND Soundbit 2014-07-14 200000 120 -1 Super secure hashing algorithm: 15 rounds of scientific hashing functions (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo, hamsi, fugue, shabal, whirlpool), Block reward is controlled by moore's law: 2222222/(((Difficulty+2600)/9)^2), GPU/CPU only mining - no ASIC!!, Block generation: 2 minutes, Difficulty Retargets every block using Dark Gravity Wave, Est. ~7M Coins in 2015, ~13M in 2020, ~23M in 2030, Anonymous blockchain using DarkSend technology (Based on CoinJoin): Beta Testing, PREMINE 200k coins for IPO and network testing. This is less than one percent premine! shoecoin_shoe.png 2014-07 100 SHOE Shoecoin 2014-07-16 yes 500 60 closed source 2500000 Specifications, Algorithm: X15, Total coins: 2,500,000, Proof of Stake Reward: 11, Block Time: 60 seconds, Coin Maturity: 100 blocks, Min Coin Age: 3 day , Min Tx Fee: 0.00000001 SHOE, Premine: yes, Block Rewards, 500 SHOE, PoW: 5000 Blocks xenoncoin_xec.png 2014-09 XEC Xenoncoin 2014-09-11 0.5% 20 60 100000000 We are releasing XenonCoin as a Proof of Work only coin in the hopes of building a secure blockchain and giving something back to the miners who have invested big money in rigs only to see it all go to waste on PoS coins. Xenoncoin follows Myriad's implementation of Multi-PoW and we hope to work closely with not just the community but other Developers looking to make advancements with Multi-PoW coins. We do have further annoucements to make regarding ongoing development support and we will be looking to test some new features shortly after launch., Algorithim: Multi PoW. (Scrypt, SHA256D, Qubit, Skein or Myriad-Groestl), Total Coins: 100 Million, Starting Subsidy: 20 XeC Per Block, Minimum Subsidy: reward will never be lower than 1.25 XeC, Block Halving: Block rewards will halve every 2102400 blocks (around 2 years), Ticker: XeC, RPCPort: 16669, P2Port: 16668, Premine: 0.5% ('h', 2102400, 'b') gsccoin_gsc.png 2014-04 GSC GSCcoin Scrypt dime_dime.png 2013-12 DIME Dimecoin re-target every 65536 sec, 1024 coins per block, and 460 billion total coins. sapiencecoin_xai.png 2014-11 112 XAI Sapience 2014-11-19 60 1123581 The total number of coins in Sapience is 1,123,580, based on the Fibonacci sequence., Proof-of-stake interest rate is 11% per year., Coins mature after 112 blocks. florincoin_flo.png 90 blocks 2013-06 FLO FlorinCoin 2013-06-17 0 100 40 160000000 First coin with transaction comments Florin Coin - FlorinCoin Abbreviation: FLO Algorithm: SCRYPT Date Founded: 6/17/2013 Total Coins: 160 Million Confirm Per Transaction: Re-Target Time: 90 Blocks Block Time: 40 Seconds Block Reward: 100 FLO, halving every 800,000 blocks Diff Adjustment: Premine: None. ('h', 800000, 'b') luckysevenscoin_l7s.png 7 minutes 2015-08 77 L7S LuckySevens 2015-08-10 22.85% 77 77777777 Algo: Scrypt, Pre-Mine: 17 777 777 to cover investments, Total Coins: 77 777 777, Block Time: 77 secounds, Coin Maturity: 77 Blocks, Retargeting: 7 minutes fuzzballcoin_fuzz.png 2015-09 FUZZ Fuzzballs 2015-09-20 0 500 30 1000000 No premine, no ICO., Specs, Algo: Scrypt, Ticker: FUZZ, Block Target: 64 seconds, POW:, Block > 500 FUZZ, Halves every 50000 blocks ('h', 50000, 'b') sunnysideupcoin_ssu.png KGW 2015-11 SSU Sunnysideupcoin 2015-11-26 10% 50000 600 1000000000 Coin Name: sunnysideupcoin , Coin Abbreviation: SSU , Coin Type: Pure PoW , Hashing Algorithm: Scrypt , Dificulty Retargeting Algorithm: Kimoto Gravity Well, Time Between Blocks (in seconds): 600 , Block Reward Type: Simple , Block Reward: 50,000 , The block reward will cut in half every 2.28 Months , Premine: Yes , Premine Amount: 10% , Total Coins: 1,000,000,000 ('h', 2.28, 'm') netbitscoin_nbs.png 2015-04 NBS Netbits 2015-04-27 1.9% 4000 120 1500000000 SHA256, POW, TOTAL COINS: 1,500,000,000, Time per block: 2Min, Reward per block: 4,000, block halves every 200,000 blocks, premine: 1.9% ('h', 200000, 'b') rublebitcoin_rubit.png 10 2015-11 120 RUBIT RubleBit 2015-11-10 4% 40 60 100000000 NAME: RubleBit, Short Name: RUBIT, algorythm: Scrypt, coin supply: 100000000, Developers safe: 4000000, coin mining available: 100000000, block generation time: 60 seconds, half reward: 1 000 000 blocks, block max size: 8 Mb, block reward: 40 RUBIT, start difficulty: 0.00024414, change difficulty: 10 blocks, coinbase maturity: 120 blocks ('h', 1000000, 'b') capitalcoin_cptl.png 2014-05 CPTL CapitalCoin 0 100000000 Launch Date May 01 2014 11:59 PM EST Total Coins 100 000 000 CPTL Premine 0 Blocks Block Reward 1 000 CPTL Block Time 30 Seconds Transaction Time 120 Seconds Difficulty Retarget Every Block Algorithm Scrypt PoS Annual Interest 10% Minimum Stake Age 12 Hours Maximum Stage Age 2 000 Days electronicbenefittransfercoin_ebt.png 2014-01 90 EBT ElectronicBenefitTransfer 2014-01-28 1% 1000 30 250000000 Welcome to the FUTURE of Electronic Benefit Transfer! EBT Coin! Do you think you missed out on the BTC train? Well think again! Now there is assistance for the crypto impaired, EBT is here for you less fortunate crypto holders so get on board! For most of its history, the Food Stamp Program used paper denominated stamps or coupons worth US$1 (brown), $5 (blue), and $10 (green). In the late 1990s, the food-stamp program was revamped, and.” What if in the future “card” is replaced with “coin”? Could cryptographic currency be the future of electronic benefit programs?, Coin Information: Scrypt, 250 million coins, 30 second block time, PoW Reward: 1,000 halving every 100k blocks - final subsidy of 1 coin after block 1,100,000, PoS Reward: Begins on day 30, mature at day 90, 1% (2.5m) pre-mine for giveaways/bounties (see below) ('h', 100000, 'b') sexcoin_sxc.png 8 hours 2013-05 6 SXC Sexcoin 2013-05-28 0 100 60 premined 250000000 JunkCoin clone, overtaken. Sex Coin - SexCoin Abbreviation: SXC Algorithm: SCRYPT Date Founded: 5/28/2013 Total Coins: 250 Million Confirm Per Transaction: 6 Per Transaction Re-Target Time: 8 Hours Block Time: 1 Minute Block Reward: 100 coins per block, halved every 600,000 block Diff Adjustment: Premine: Premined first 2 blocks for bounties. ('h', 600000, 'b') euphoriacoin_eup.png 2 blocks 2014-03 EUP Euphoriacoin 2014-03-13 5% 48000 60 11000000000 Algorithm - euphoria(modified scrypt-n) - gives great performance with nvidia cards too., Max Coins - 11 Billion, Initial Block Reward - 48000, Block Reward Halving - Every 80,000 Blocks, Block Time - 60 seconds., Difficulty Retarget - 2 Blocks, Premine - 5% for the IPO ('h', 80000, 'b') hardforkcoin_hfc.png 1 block 2014-08 HFC Hardforkcoin 2014-08-01 0.25% 237 30 50000000 Instead of deploying complicated, CPU-friendly algorithms, we use use simple GPU and ASIC friendly algorithms. And as soon as an ASIC is on the verge of being made, we simply change the algorithm by automated hard forking, cause it's so easy to implement it for us regardless of how popular the coin is. Specs: 30 Seconds block Block reward 237 Difficulty retarget every block. Premined 0.25% (address will be posted soon). Blake Constant 237 coins per block 500 million coins. sovereign_sov.png 2014-05 SOV Sovereign 6750000000 jixiangcoin_jxc.png 2014-01 JXC Jixiangcoin 2014-01-15 mozzsharecoin_mls.png 48 hours 2014-08 MLS Mozzshare 2014-08-05 60 210000000 Algorithm: HEFTY1, Proof: POW, Total coins: 210 million, Block time: 60 seconds, Ripening time of latest block: 48h brothercoin_brc.png 2014-06 BRC BrotherCoin 2014-06-10 60 100000000 X11 Algo, Progressive Difficulty Adjustment, TOTAL AMOUNT one hundred million, BLOCK REWARDS, 60 seconds block time , TOTAL one hundred million aryacoin_arya.png 2014-06 ARYA Aryacoin 2014-06-05 100 60 7500000 PROOF OF WORK:, Block Reward: 100 , Block 1: IPO/SHARE coins + 3 shares for bounties, Block 2 till 150:0 coins per block, (Block 1 - 100 will be mined for test), Block 151: 100 coins , Block 7000: 70 coins, Block 12000: 40 coins, Coins from PoW: 2.2/2.5 million approximately, Block Reward: 100 , Block Time: 60 sec, Diff retarget: Every block, PoW Distribution: 10 days, PROOF OF STAKE:, PoS interest: 20%, PoS block: 12000, Supply: 5 million, Minimum age: 24 hours, Max age: 30 days, Coin maturity: 50 block covencoin_cov.png 30 minutes 2015-04 10 COV Coven 2015-04-18 90 33000 Algrithm SHA256, Retarget 90s, POS/POW, Min Stake age 1h, Max Stake age 30d, Maturity 10 confirmations, Specs:, blocks 1-100 : 0, Blocks 101-200: 50, blocks 200-400:45, blocks 400-600 :40, blocks 600-800 30, blocks 800-1000 20, blocks 1000-1200 :10, Supply 33k currencyoflightcoin_cura.png 2015-11 CURA CurrencyofLight 2015-11-17 100% 0 60 closed source 444444444444444 CURA are issued at a fixed rate of 671 million units per day (671,000,000 miles per hour is the speed of light). CURA uses a proof of labour algorithm where people are rewarded for positive contribution to society. CURA uses Multichain Bitcoin privilege based technology for zero hour accountability. multichain Multichain protocol. cinnicoin_cinni.png 2014-04 CINNI Cinnicoin 868 coins per block and 15 million total coins. cryptospotscoin_cs.png 10 mins 2015-02 5 CS CryptoSpots 2015-02-28 100% 673 60 service layer 60000000 Virtual PoW, ~673 CS per block, ~1000000 CS monthly distributed , ~12000000 CS distributed in 1 year(total supply for PoW), POS is 7%, POS Maturity is 1 hour bernankoin_bek.png 2013-12 BEK Bernankoin 2013-12-17 satire -1 itcoin_xit.png 1 block 2015-05 120 XIT ITcoin 2015-05-15 16.5% 60 13000000 POW & POS, Algorithm: X11, Ticker: XIT, Total coin supply: 13 000 000, Block time: 60 seconds, Coinbase maturity: 120 blocks, Difficulty retargeting after every block , POW, Blocks 1-200 - reward 0 coins to guarantee a fair launch for all miners, Blocks 200-43400 - reward 100 coins - approximately 30 days of mining, Blocks 43400-86600 - reward 50 coins - approximately 30 days of mining, Total POW coins: 6 480 000 , POW phase will end at block 86600 and POS will begin., POS, Blocks 86600-129800 - reward 100 coins - approximately 30 days, Blocks 129800-655400 - reward 0.1 coins - approximately 1 year, Total POW and POS coins combined: 10 852 560, Premine: 2 147 440 = ~16,5 % of the coinbase., 0,5% of the coinbase will used for promotions and rewards., 1% of the coinbase will be distributed for free to the community. 0.2% of the coins will be given out every, month through our website. You will need to register to our forums to participate. Everyone will be identified, with the registration, e-mail address, ip-address and finally in our chat in the wallet. This way we will ensure a fair distribution., The registration for the giveaway will start on the first day of the month (june) and coins will be distributed on the last day of the month., This process will be repeated during the next 5 months., 15% of the coinbase will be sold publicly to investors starting now and ending next thursday 18.00 GMT. sapientacoin_sap.png 1 block 2014-06 SAP Sapienta 2014-06-23 2% 30 180 premature 12000000 X11 - POW, 180 secs x block, 30 coin x block, Retarget every block with KGW, 12,000,000 Total Supply, Premine 2% rootcoin_root.png 2014-07 ROOT Rootcoin 2014-07-28 99 300 3000000 algorithm: scrypt (community voting), blockreward: 99 ROOT, total coins: 3 million coins, blocktime: 5 minutes, PoS reward: 3%, rpcport: 27377, p2pport:28388, qrcode support, POW-CUTOFF: block 3700, , +100 blocks were instamined at the offical pool., about 380 blocks were instamined through solominers AFTER launch, and before the official pool started hashing. thats a total of 38k ROOT. , The codebase of ROOT was fork from another scrypt-coin, and the GetNextTargetRequired(), the function which settles the difficully,, reset the first 450 blocks with the starting diff. leading the first 450 blocks, to be mined superfast. - as this was topic for some FUD/troll posts, i just, added it to the OP to make it public. pleiadescoin_leia.png 2014-08 LEIA Pleiadescoin 2014-08-12 0 60 20000000 Tickername: LEIA, PoW Algorithm: SHA2-384, Mining TX Reward: 20 LEIA, The first 10000 mining tx as a fee of 2 LEIA, miner gets only 18 LEIA, Fees: 0.1% transaction fee, min 0.00001 LEIA, Total coins: 20,000,000 LEIA, Private message system, BuildIn CPU miner smartchipscoin_sc.png 2015-09 4 SC Smartchips 2015-09-14 1000000 60 rebrand of IsotopeCoin 1826000 Currency name: Smartchips, Currency symbol: SC, Algo: sha256d, Total coins in PoW: 130000 SC, Total coins in DPOS: 280000 SC circa, Coins for Initial Boost Crowdfunding: 1 Million, Total coins after IBC + PoW + PoS: 1826000 SC (circa), Confirmations for block maturity: 100, Block time: 60 seconds, Confirmations for TX: 4 , RPC port: 25391 , P2P port: 29398 , Minimum stake age: 4 hours, From old XTP thread. octocoin_888.png 888 seconds 2014-04 888 OctoCoin 88888888 Scrypt POW. 88,888,888 Total Coins. 88 second block targets, Re-targets every 888 seconds. Re-target calculated using a 40 block sample and .25 damping imperialcoin_imp.png 2014-03 IMP ImperialCoin 1020000000 SHA256 algo 1,024,000,000 coins to be issued Halves every 120400 blocks INITIAL 1024 block reward RPC Port 10241 Port 10240 Block Explorer is UP - Check Forum! wearesatoshicoin_was.png 30 seconds / 2 blocks 2014-04 WAS Wearesatoshi 100000 social comment 40000000000 ('r') euphoriacoin_euph.png 2014-11 100 EUPH Euphoriacoin 2014-11-20 2% 60 10000000 Coin Name: EUPHORIA, Coin Ticker: EUPH, Total Coins: 10 Mil, Algo: X13 POW / POS, Premine: 2%, Block Time: 1 min, POS 4%, Block Rewards, 0-100 -- Premine and 1 reward. , 101-3001 -- 500 EUPH, 3001 -- 6000 -- 1000 EUPH, 6001 -- 9000 -- 250 EUPH, 9001 -- 12000 -- 125 EUPH, 12001 -- 1500 -- 500 EUPH, Coin Maturity -- 100 Blocks ohmygodcoin_omg.png 2014-12 90 OMG OhMyGod 2014-12-03 1.4% 60 2150000 TICKER: OMG, ALGO: X11 (enjoy low gpu temperatures), PoW+PoS, PoS interest: 55% (YES, 55% annual interest!), PoS will start at block 9560, 50 blocks before end of PoW to ensure smooth transaction, Coin Maturity: 90 blocks with 2 hours Min Age, MAX SUPPLY: 2.150.000 , Pre-mine: 30k OMG ( 1,4 % Of total supply), 10k of pre-mine will be distribuited as bounties ( see bounties section), Wanna instamine? Not really, first 60 blocks got ZERO rewards!, First 26 block are already mined,first 10 for pre-mine plus another 26 with ZERO reward to check networ stability., Block structure will change every 1k blocks after block 60: 400-300-200-150-120-100-80-100-150-250, PoW will end in about 2 weeks. redoakcoin_roc.png 1.5 days 2014-09 ROC RedOakCoin 2014-09-14 129.53 150 72000000 RedOakCoin, Standard Scrypt Algorithm, 2.5 Min., Transaction Time, 1.5 Day(s) Re-Target Time, 72 Million Max Subsidy, 129.53 *ROC Reward (Initial Inflationary Term) nerdycoin_nerdc.png SafeGW 2015-05 6 40 NERDC Nerdycoin 2015-05-14 0 0.6 180 clone of URO 10000000 NERDC] NERDYCOIN, Currency Specifications, Algorithm: X11 PoW, Target Block Time: 3 minutes, Block Reward: 0.6 NERDC, Difficulty Retargeting Algorithm: Safe Gravity Well, Genesis Block Premine: 0 Uro, Standard Mined Block Maturity: 40 Confirmations, Recommended Transaction Confirmations: 6, Default RPC Port: 36348, Release date from current UROCOIN Block 175000 neocoin_nec.png 2013-07 NEC Neocoin 新币 2013-07-26 0 21 100 80000000 NVC-like, with messages. Multiple failed relaunches until finaly ok. Neo Coin - NeoCoin Abbreviation: NEC Algorithm: SCRYPT Date Founded: 7/26/2013 Total Coins: 80 Million Confirm Per Transaction: Re-Target Time: Block Time: 1.67 Minutes Block Reward: 21 Coins Diff Adjustment: Premine:. 1% NVCS, SA 30/90, Coins 3.8 secondscoin_sec.png 2013-12 SEC Secondscoin re-target every 1 hour, 60 coins per block, and infinity total coins. Mixed hashing clone. mlkcoin_mlk.png 2014-05 MLK MLKCoin 3.5 coins per block and 2 million total coins. baobeicoin_bbc.png 2013-12 BBC Baobeicoin 2013-12-20 0 96000000 12 coins per block and 96 million total coins. Baby Coin - BabyCoin Abbreviation: BBC Algorithm: SCRYPT Date Founded: 12/20/2013 Total Coins: 96 Million Confirm Per Transaction: Re-Target Time: Block Time: Block Reward: Diff Adjustment: Premine:. nuggets_nug.png 2013-07 NUG Nuggets 2013-07-15 yes re-target every 21 blocks and 49 coins per block. Premined LTC clone with random superblocks and constant base reward. pentablake Folklore combiner: five rounds of BLAKE 512. cloudtokenscoin_cloud.png 2014-11 CLOUD Cloudtokens 2014-11-19 100% 60 2000000 File hosting and cloud platform. 100% PoS via Escrowed ICO, Total Coins: 2 Millions CloudTokens, Min Stake Age: 8 hours, Max Stake age: 12 hours, Variable PoS Stake interest:, First Year 15%, After first year 5.5% basecoin_bac.png 2013-09 BAC Basecoin re-target every block, POW / POS, 1 coins per block, and 6 million total coins. Specifications:, POS+POW, Each normal block has 1 Basecoin, 1 minute block time, Daily generation is 1440 coins, basecoin is really a rare coin., Difficulty retargets every block, mining payout will be stable all over the lifecycle of Basecoin, Expected total mined coins will be 6000000 Basecoins, 4 confirmations for transaction, 50 confirmations for minted blocks, Support transaction message, Diff start at 0.01, The default ports are 21206(Connect) and 21207(RPC). perfectcoin_perc.png 3 days 2014-02 PERC Perfectcoin 40 100 100000000 memorycoin_mmc.png 2013-12 120 MMC MemoryCoin 记忆币 2013-12-25 variable 120 10000000 Algorithm: Momentum + SHA512 Block Time: 6 minutes Port: 1968 Codebase: ProtoShares 0.8.6 (Bitcoin 0.8.5) Block reward: 280 MMC, 5% reduction every 1680 blocks Total Coins: 10 Million coins in the first 2 years, 2% inflation thereafter Difficulty Retargeting: Every block with the Kimoto “Gravity Well” xencoin_xnc.png 9 hours 2013-06 6 XNC XenCoin 2013-06-19 0 (200, (10000, 10000, 2000)) 20 2100000000 Premined LTC clone Xen Coin - XenCoin Abbreviation: XNC Algorithm: SCRYPT Date Founded: 6/19/2013 Total Coins: 2.1 Billion Confirm Per Transaction: 6 Blocks Re-Target Time: 9 Hours Block Time: 20 Seconds Block Reward: First 10,000 blocks reward is 2000. After 10,001 block - 200 coins per block, halves every 3 years (4,665,600 blocks) Diff Adjustment: 9 Hours Premine:. ('h', 3, 'y', 4665600, 'b') talkcoin_tac.png 2014-05 TAC Talkcoin 2014-05-17 experimental Uses coins as address mechanism doubloon_dbl.png 2013-05 DBL Doubloon 2013-05-15 8000000 6.77 coins per block and 8 million total coins. LTC clone. 0 starting diff. NEW! - Chest Version 0.9.0 Out now! SOURCE : Windows Binaries: WEBSITE: BLOCK EXPLORER: BLOCK CRAWLER: CHAT: GAMBLING: Blackjack! FAUCET: POOLS:. lgbtqoincoin_lgbtq.png KGW 2015-08 LGBTQ LGBTQoin 2015-08-18 25000 50 300 50000000 LGBTQoin [LGBTQ], SPECIFICATIONS, Algorithm: X11, Coin Type: Pure PoW, Total Coins: 50,000,000, Block Time: 5 mins, Block Reward: 50, Difficulty Retargeting Algorithm: KGW, Block Halving Rate: 500,000 ('h', 500000, 'b') genstakecoin_gen.png 2015-06 10 20 GEN Genstake 2015-06-03 1000 1 60 15000000 Block time: 60 seconds, Min transaction fee: 0.0001, Confirmations: 10, maturity: 20, Max Coins: 15,000,000, Premine: 1000 GEN , P2P port: 9341, RPC port: 9342, addnodes (hardcoded):, node1.genstake.com, node2.genstake.com, Proof-Of-Work, Algo: Scrypt, Block Reward: 1 GEN, Halves Annually, Max Height: None, Proof-Of-Stake, Block Reward: 20 GEN, Halves Annually, Min age: 1 hour, Max age: 30 days ('h', 1, 'y') bitchips_chp.png 2011-11 CHP BitChips 2011-11-23 groupcoin_grp.png 2014-03 GRP GroupCoin 2014-03-15 experimental Experimental precursor to DevCoin Groupcoin is the precursor to devcoin, in which we tested some ideas to fall back on in case we could not do what we wanted devcoin to do. daffybuckscoin_dfy.png 2015-04 DFY DaffyBucks 2015-04-27 1951937 1000 600 4151937 DaffyBucks is scrypt with a coin total of 4,171,937, 1,900,000 of which will be sold through an ICO to fund the project, and 2.2 million which will be rewarded to miners through a long Proof of Work (PoW) mining period. The ICO funds will be used to fund development on the project, and then also allow non-miners a way to join the community. To further secure the network and reward the community for their support, all mined and purchased coins will can be staked at a 20% annual rate. , Name: DaffyBucks, Ticker: DFY, Coin Type: PoW/PoS Hybrid, Algorithm: Scrypt, Block time: 600 seconds, Block Reward: 1000 DFY, Halving Rate: 1100 blocks, Premine Amount: 1951937, Total Coins: 4151937, ICO Total: 1900000, Reserved for giveaways and bounties:51937, Yearly Interest %: 20.00, Minimum Stake Age (in days): 15, Maximum Stake Age (in days): 90 (h,1100,b) noahcoin_noah.png 2014-07 10 80 NOAH NOAHcoin 2014-07-10 1% varying 120 40000000 Algo: X13, Total coin: 40 Million , Block time: 120 seconds, Confirmations on Transactions: 10, Maturity: 80, 1% Premine, 3% IPO, POW REWARD, Block 1: Premine + IPO, Block 2-100: 1 NOAH, Block 101 - 10000: 100 NOAH, Block 10001 - 20000: 150 NOAH, Block 20001 - 30000: 200 NOAH, Block 30001 - 40000: 150 NOAH, Block 40001 - 50000: 100 NOAH, Block 50001 -100000: 200 NOAH, POS : 10% after block 100 000 POS yacoin_yac.png 2013-05 YAC Yacoin 雅币 2013-05-05 0 25 60 2000000000 Multi-hash based cryptocoin with dynamic re-targeting, 25 coins per block, and 2 billion total coins. Your Alternative Currency. NovaCoin fork, with modified scrypt hashing Ya Coin - YaCoin Abbreviation: YAC Algorithm: SCRYPT Date Founded:5/6/2013 Total Coins: 2 Billion Confirm Per Transaction: Re-Target Time: Block Time: 1 Minute Block Reward: 25 Coins Diff Adjustment: Premine: None. Novacoin-based. 5% yearly (0.4%), SA 30/90 joincoin_j.png 1 block Digishield 2014-08 J Joincoin 2014-08-13 1400000 45 2800000 Joincoin, Exchange symbol: (J), Total coins: 2,800,000, Coin distribution: 1,400,000 through mining/ 1,400,000 through Pre-sale, Proof of Work, Block Time: 45 seconds , Difficulty Adjustment using: Digishield , Anonymous Blockchain: Provided by ToR (already implemented) goldbar_gdb.png 2014-04 GDB GoldBars 400000.0 racecoin_race.png 2 minutes 2014-08 10 30 RACE Racecoin 2014-08-11 1% 30 9679800 Racecoin is a PoW/PoS-based cryptocurrency, Algorithm: X11, PoW/PoS, Symbol: RACE, Total from POW Mining: 9 679 800, From To Reward, 1 100 200, 101 300 500, 301 1,000 800, 1,001 2,000 2,000, 2,001 3,000 3,500, 3,001 4,000 2,000, 4,001 5,000 1,500, Time per Block: 30 s, Proof-of-Stake interest will begin being generated after block 500., Premine: 1%, Stake interest: 5% per year, Proof-of-Stake Minimum Age: 2 hours, Proof-of-Stake Maximum Age: No Limit, Difficulty readjusts every 2 min, Confirmations on Transactions: 10, CoinBase Maturity: 30 sambacoin_smb.png 2014-03 SMB Sambacoin 892000000.0 re-target every 4 hours, 400 coins per block, and 891 million total coins. skeincoin_skc.png 2013-11 SKC Skeincoin 2013-11-01 0 32 120 17000000 Skein-SHA2 based cryptocoin with POW, 32 coins per block, and 17 million total coins. Skein Coin - Skeincoin Abbreviation: SKN Algorithm: SHA-256 Date Founded: 11/1/2013 Total Coins: 17 Million Confirm Per Transaction: Re-Target Time: Block Time: 2 Minutes Block Reward: 32 SKC, halving every year Diff Adjustment: Premine: None. ('h') newbiecoin_newb.png 2014-06 NEWB Newbiecoin 2014-06-01 -1 Name: NewbieCoin, Symbol: NEWB, Launched time : 2014-06-01 06:45:46 (UTC), Newbiecoin (NEWB) is a protocol, coin, and client used to run decentralized applications such as creating and backing decentralized crowdfunding projects , betting on dice rolls and other games in a decentralized casino, etc. The protocol is built on top of the Bitcoin blockchain. Coins were created by burning Bitcoins during a creative “Twin Proof of Burn” period., Newbiecoin is modified from ChanceCoin's open source with new features such as TWIN-POB,POS,new bet rule and new decentralized applications such as decentralized crowdfunding . pyongyangcoin_xpg.png 2015-01 XPG PyongyangCoin 2015-01-11 0 600 500 This is clearly the best coin of 2015! Let's keep it running for the lulz. Max Coins: 500, Specs: Same as BTC hamstercoin_ham.png 2013-12 HAM Hamstercoin 1000 60 diodecoin_dio.png 2014-11 DIO Diode 2014-11-20 1% 1500 60 4500000 Diode has a simple, effective, easy and fast distribution that spans over 3000 Blocks. Hashing will be done with the X13 Algorithm, and PoS will start at block 2800., Algorithm: X13, Block Reward: 1500 DIO, Block Time: 60 Seconds, Max Diode: 4,500,000 DIO, PoW Phase: 3000 Blocks, PoS Starts: Block 2800, PoS Interest: 5%, Pre-mine: 1% (45k DIO) alibacoin_abc.png 2014-02 5 30 ABC Alibacoin 2014-02-07 30 20000000 Scrypt Algo, Total Coins: 20000000, Block Times: 30 Seconds, Confirmations Mined Blocks: 30, Transaction Conformations: 5, BLOCK REWARDS, 0-80640 100 Coins, 80641-161280 50, 161281-241920 25 Coins, 241921-322560 10 Coins, 322561-403200 5 Coins, 403201-483840 2 Coins, 483841-564480 1 Coins, 564481-645120 0.5 Coins, 645121- 725760 0.2 Coins, 725761+ 0.1 Coin, reducing chicoin_chi.png 2013-12 CHI Chicoin a re-target every 5,472, 50 coins per block, and 77 million total coins. x11coin_xc.png 2014-05 XC X11coin 2014-05-08 90 33300000 POS interest: 3.33% per year, PoSmin 8 hours, PoSmax 30 Days. Abandoned and re-announced as Xcurrency. analcoin_anal.png 1 block 2015-03 ANAL AnaLCoin 2015-03-13 6 40 600000 AnaLCoin - The Coin for Analysts !, Algo: SHA256, Abbreviation: ANAL , Max number of coins ( Including POS phase) : 600,000, Timing of block (in seconds) : 40, Difficulty Retarget every block, Coins per Block (during pow phase) :6, Total POW: 300,000 coins, Block number when POW ends : 50000, POS interest per year : 2,25%, Min stake age : 24 hours, Max stake age : 30 days, In-Wallet Trading ! bitleu_btl.png 2014-04 BTL Bitleu 30 blocks premined 5000000000 Bitleu is intended to be ASIC and multipool resistant using a unique combination of hashing and mixing functions, Scrypt Jane. Low block rewards that slowly increase over the first year and an increasing of the N-factory will provide a more attractive mining environment for a wider variety of participants and discourage the larger farm operations to over mine and flood the exchanges with Bitleu. There will be 5,000,000,000 Bitleu created (5 billion). with 50% premined. The 50% of premined coins will be distributed as follows: 0.1% of total coins to the coin founder to offset expenses incurred and to pay for continued expenses including paymens for advertising, coding/application developement, consultations with lawyers and other related expenses. 4.9% of total coins to the Bitleu Foundation to cover operating expenses and used for investments and promoting the Bitleu ideals. A full transparency policy will ensure fair play and promote community participation. 45% of total coins will be distributed to Romanians as free grants to encourage their participation in the Bitleu community. Any of the remaining Bitleu will be surrendered to the Bitleu Foundation and used for operating expenses, investments and developing the Bitleu network. 5,000,000,000 Billion total coins 2.5 Billion coins Premined 2.5 Billion coins for mining in 16 years Block Time: 60 Seconds Confirmations Mined Blocks: 30 Transaction Confirmations: 5 Port: 22641 R bitcoinfastcoin_bcf.png 2014-12 60 BCF BitCoinFast 2014-12-04 1000000 10 30 premined 33000000 Algorithm: Scrypt [POW/POS], Total initial supply: 33 Million, POS=25% Annually, Block time: 30 seconds, 10 Coins per block | **(100 Coins per Block for first 24 HRS for early adopter)**, Block Maturity: 60 blocks, Pre-mine= 1 Million with 300k going to Cryptex Exchange for having it listed at launch., POS INFORMATION, Network-Stake- 25% year, Min Coin Age: 1 day (24h) instaminenuggetscoin_mine.png 33 hours 2015-02 3 MINE InstaMineNuggets 2015-02-25 3% 30 180 21649485 Coin Type: Litecoin Scrypt, Halving: 350000 Blocks, Initial Coins Per Block: 30 Coins, Target Spacing: 3 Minutes, Target Timespan: 33 Hours, Coinbase Maturity: 3 Blocks, Pre-Mine: 3% = 649485 Coins, Max Coinbase: 21000000 + 649485 = 21649485 Coins ('h', 350000, 'b') infinium8_inf8.png 2014-07 INF8 Infinium-8coin 2014-07-18 0 90 -1 LAUNCHED 7/18/2014 12:00 UTC, CryptoNote-based digital currency,. , Features, Untraceable payments, Unlinkable transactions, Blockchain analysis resistance, Adaptive parameters, Details in the CryptoNote white paper, Specifications, PoW algorithm: CryptoNight [1], Max supply: Infinite, Block reward: increasing with difficulty growth [2], Block time: 90 seconds, Difficulty: Retargets at every block, [1] CPU + GPU mining (about 1:1 performance for now). Memory-bound by design using AES encryption and several SHA-3 candidates. , [2]). Both practices are counterproductive. , Infinium-8 s the cryptocoin with blockreward increasing together with difficulty. , Block reward formula is a fixed point implementation of log2(difficulty). This means that block reward increases much slower than difficulty: for 1 coin to be added to block reward a difficulty must be increased twice. This rate looks to be reasonable, Block_reward = log2(difficulty) * 2^40(difficulty), Downloads , Win64: , Linux: , MacOS: , Source: , How to start mining, 1. Download or compile binaries, 2. Start infiniumd, 3. Wait until network is synchronized. You will be notified with “SYNCHRONIZED OK” message, 4. Run simplewallet, 5. Create a new wallet or open the existing one, 6. Type start_mining %number_of_threads%. , Example: start_mining 3, 6.* You can start mining from the daemon with start_mining %wallet_address% %number_of_threads% command. , Example: start_mining inf8U2kpouwiAr2cMLHsTqVijc 6, Pools , Xmining Mining Pool , - Only 1% commission;, - Most stable and powerful servers;, - Fast and responsive support;, - Based on the updated node-cryptonote-pool, Extreme Pool, - Fast Payouts directly to your private wallet., - Address Validation, - Variable Difficulty based on your individual CPU performance, - Verified Secure, - Servers hosted in our own Data Center, not hosted by an online 3rd party provider, Hashhot Pool ethereum_eth.png 2014-01 ETH Ethereum 2014-01-23 bitcoin125coin_btc12.png 1 block 2015-10 12 200 BTC12 Bitcoin12.5 2015-10-12 1250000 12.5 125 12500000 POW/POS Bitcoin12.5 (BTC12), Algo: QUARK, Reward: 12.5 Coins, Confirmations: 12 , maturity: 200, Diff algo: Digishield Retarget: Every block, Max amount: 12.500.000 BTC12.5, Premine for ICO 1250000 , Price Buy 0.000019 BTC 23,75 BTC, Unsold coin will be destroyed, Block time: 125 seconds, Last block PoW 15000, transaction fee: 0.0001, PoS Min. 7 Hour , Pos Max. Unlimited, Min stake confirmations: 450 nyancoin_nyan.png 2014-01 NYAN NyanCoin 337000000 thcoin_th.png 2014-06 10 30 TH THcoin 2014-06-06 0 30 10000000 Algorithm: X11 POW+POS, Coin name:bitcoin Th/s(THcoin), Short name: TH, Total coin: 10,000,000, POW REWARD, 500 - 4000, 2000 - 2000, 5000 - 1000, 10000 - 500, POW LAST BLOCK: 10000, Block time: 30 seconds, POS Min age: 10 minute, POS Max age: unlimited, Confirmations: 10, Maturity: 30, Stake interest 10%, PORT 12233, RPC 22233, NO IPO, NO PREMINE friendshipcoin_fsc.png 2014-03 FSC Friendshipcoin 37400000000.0 jennycoin_jny.png 1 block 2014-04 JNY JennyCoin yes 867.5309 60 868000000 Total Coins - 867530900 1 Minute Blocks Difficulty will change every block Algo - Scrypt Block Reward 867.5309 Blocks 1-10 1% Premine total of 8,675,309JNY Blocks 11-60 0 coins mined before launch to mature premine Blocks 61-100 10 coin block reward Blocks 101-500 50 coin block reward Blocks 501-1000 100 coin block reward Block 1001 867.5309 coin block reward bitgem_btg.png 2013-05 BTG Bitgem 2013-05-16 1500000 re-target each block, POW / POS, and less then 1 coins per block. NVC scarce clone. 0 starting diff Novacoin-based. 3% NVCS (0.2%), SA 30/90, Coins 0.023 quantum2_qtm2.png 2014-07 10 QTM2 Quantum2 2014-07-09 0 60 10000000 QTM2, Specifications, Pure Proof-of-Stake (PoS), Max coins: 10 million QTM, Transaction fee: 0 QTM, Min stake age: 30 minutes, no maximum age, Stake interest: 0%, Confirmations: 10 androidtokens_adt.png 2013-08 ADT Android token 66000000000 re-target every block, POS, 524,288 coins per block, and 180 billion total coins. Just a clone. Android Token is based off of Infinitecoin using scrypt as a proof of work scheme. Android Token also known as ADT is one of the fastest if not the fastest confirm rate coin out there. Features Proof of Stake 0.1%, 30 days/60 Max Weight Tx Fee = 0.1% 180,000,000,000 Max Coins Ever 524,000 Coins Per Block, Halving every 100,000 Blocks 30 Second Blocks Retarget every block, with 70 Confirms per block 6 Confirms per TX. worldtopcoin_wtc.png 2014-02 WTC WorldTopCoin 2014-02-05 60 CN-focused 30000000 jerkycoin_jky.png 2014-01 JKY Jerkycoin re-target every 300 blocks, 144 coins per block, and 76 million total coins. thinkandgrowrichcoin_tagr.png 1 block 2015-06 TAGR Thinkandgrowrich 2015-06-18 20000000 60 53000000 Algorithm: x15, Block time: 60 seconds, Difficulty retargets each block, Block Reward: , Block 1-2800 = 10,000, 2800-5600 = 5,000, 5600-8400 = 2,500, 8400-10,200 = 1,250, PoS interest: 7%, minimum age 8 hours, max unlimited, One coin is divisible down to 8 decimal places, First 20 million coins are for ICO, first 2000 blocks, Total POW coins: 32,000,000 coins, 32.0 million, Total coins ICO and PoW: 53.0 million chakracoin_ckc.png 1 block 2014-05 3 50 CKC Chakracoin 2014-05-06 60 2500000000 Pure PoS Coin. PoS Minimum Coin Age 1 day. Variable interest, stake (annualized rate) years: 1 - 30%, 2 - 20%, 3 - 10%, 4 - 5%, 5 - 2%, 6+- 1% annual stake amerocoin_amx.png 2014-02 AMX Amerocoin re-target every 2016 blocks, 25,000 coins per block, and 21 million total coins. leaguecoin_lol.png 2014-05 LOL LeagueCoin lioncoin_lion.png DGS 2014-06 3 200 LION LionCoin 2014-06-15 15 60 10000000 Algorithm: X11, Total Money Supply: 10,000,000 Million LION, Block Time: 60 seconds, Block Reward: 15 LION ( 2-200 1 LION reward), Difficulty Retarget: DigiShield, Coin Maturity: 200 blocks, Confirms: 3, NO PRIMINE & NO IPO softwarecoin_soft.png 2014-04 SOFT Softwarecoin truthcoin_csh.png 2015-08 CSH Truthcoin 2015-08-10 600 sidechained 21000000 simicoin_simi.png 2014-03 SIMI Simicoin 2014-03-19 50 300 21000000 cnote_cnote.png 2014-01 CNOTE C-Note 2014-01-08 re-target every 100 blocks, 100 coins per block, and no maximum total coins. chichicoin_uuc.png 2014-02 UUC ChiChi Coin 2014-02-12 83000000 flexiblecoin_flex.png 2014-08 FLEX Flexiblecoin 2014-08-25 1% 60 1000000 blackdragoncoin_bdc.png 2014-05 BDC Blackdragoncoin 2014-05-05 geistgeld_gg.png 2011-09 GG GeistGeld 2011-09-09 Experiment on BTC algorithm, how fast can it go? just for science. glaricoin_glc.png 2014-02 GLC Glaricoin 2014-02-14 100 60 11000000000 SHA256D, 11000000000 GLC Total, 100 GLC/Block, Halving after 100000 Blocks ('h', 100000, 'b') paimaibicoin_pmb.png 2014-04 3 30 PMB PaiMaiBi 2014-04-16 80000000 32 30 580000000 Auction currency, the English name PaiMaiBi, referred to PMB, Xi'an shares another company to build a network heavyweight virtual currency, coins and its users, C shares trading network go hand in hand. PMB total of 580 million, the official pre-dug 80 million for the subscription, the total output of the first year of mining in which the first month 11,040,000 2,760,000 1,380,000 rest of the second month after the 690,000 per month until 59 years later, exhausted . PMB using the now popular POS + POW mechanism, PMB generated every 30 seconds a confirmation block, mining 30 confirmed mature, trading three confirmed mature. 101-1000 blocks of ore pool test blocks, each containing two coins. 1001 After each containing 32 PMB, every 30 days production decreased 50%, each production was lower than 8 PMB, automatic correction for 8PMB, until all exhausted after 59 years ('r', 0.5, '30d') cleverhashcoin_chash.png 2014-10 30 CHASH CleverHash 2014-10-17 100% 0 60 123995 All (HASH) tokens have already been produced in the genesis block and are secured using multiple rounds of 13 different hashes, making (HASH) one of the securest and most sophisticated crypto-currency equivalents available today. Cleverhash “CHASH” the X13 PoW/PoS Cryptocurrency., Algorithm: X13 POW/POS, Ticker: CHASH, Current Version “3.0.0.0”, Max Coins: 123,995 CHASH, RPC Port: 28195, 30 Blocks to mature, Block Reward Schedule:, Block 1 is 123,995 CHASH, Block 2-500,000 are 0 CHASH trickycoin_trick.png 1 block 2015-03 10 30 TRICK TrickyCoin 2015-03-27 60 50000000 Name: TrickyCoin, Ticker: TRICK, Pow / Pos, x11, BlockTime: 60 Seconds, Retarget: Every Block, ICO: None, Pre-mine: 0%, Confirmations on Transactions: 10, CoinBase Maturity: 30 ledgercoin_ldr.png 2015-10 7 LDR Ledgercoin 2015-10-18 1000 0.0001 60 1000000 Type: POW/POS, Algo: Sha256d, Pow Rewards:, Block 1 = 1000, Block 2 Onwards = 0.0001 Per Block, PosMin Stake = 1.5 hours, Max Stake = 7 Days, Pos Rewards:Block 1 - 10,000 = 1500%, Block 10k - 20k = 150%, block 20k onwards = 15%, every 50k blocks there will be a 50coin reward until block 1mil , Annual POS = 15%, Confirmations: 7 givebackcoin_giv.png 2014-05 GIV Givebackcoin 2014-05-15 Scrypt/SHA256D/Qubit/Skein/Groestl Algos 611coin_sil.png 48 blocks 2015-11 SIL 611coin 2015-11-03 0.611 60 611000 Forked from Namecoin the Bitcoin protocol is augmented with 611 operations, to reserve, register and update names. In addition to DNS like entries, arbitrary name/value pairs are allowed and multiple namespaces will be available. This will include a personal handle namespace mapping handles to public keys and personal address data., algo sha256, 0.611 coins per block; but with constant reduction: on average half the coin value every 2^18 blocks., re-target every 48 blocks, merged mined with Bitcoin, proof-of-work ('h', 262144) girlcoin_girl.png 2014-04 GIRL GirlCoin <5% 10000000000 Block Time: 30 Seconds Difficulty Retarget Time: 1 hour Premine: under 5% for Miley Cyrus Rewards Block 1 â 200,000 : 1-50,000 GIRL Reward Block 200,001 â 300,000 : 1-40,000 GIRL Reward Block 300,001 â 400,000 : 1-20,000 GIRL Reward Block 400,001 â 500,000 : 1-10,000 GIRL Reward Block 500,001 â 500,000 : 1-5,000 GIRL Reward Block 600,001 â 700,000 : 1-2,500 GIRL Reward Block 700,001 â 800,000 : 1-1,250 GIRL Reward Block 800,001+ : 500 GIRL Reward eccoin_ecc.png 2014-04 ECC ECCoin 1000000000.0 praycoin_josh.png 1 block 2015-02 16 150 JOSH Praycoin 2015-02-13 0 400 30 satire 64000000 Algo: SHA256, Ticker: JOSH, Diff Retarget: Every Block, POW/POS 10k Blocks POW, then POS, Max number of Coins : 64,000,000, 30 Second Blocks, Coins per Block :400, Block number when PoW ends : 160,000, PoS Reward per year : 15%, Max stake age : No max, Min stake age : 24 hrs, 150 confirmations required for mined blocks, 16 confirmations required for transactions, No premine eurocoin_euc.png 400 blocks 2015-07 20 EUC Eurocoin 2015-07-01 0 50 120 20000000 Mining will start July 1st 2015 at Noon 12:00 UTC, No premine, Based on Bitcoin 0.9.3 SHA256 source, Block target: 2 minutes, Difficulty retargets every 400 blocks, Block reward: 50 EUC, halving every 200,000 blocks, Total coin supply: 20,000,000 EUC, Coinbase maturity: 20 blocks ('h', 200000, 'b') scrasic_scr.png 2014-03 SCR Scrasic 2014-03-20 p7coin_p7c.png KGW 2015-04 P7C P7Coin 2015-04-11 0 270 30 270000000 P7COIN P7C Algorithm: Scrypt, POW, Kimoto Gravity Well, Reward: 270 P7C, Block Time: 30 sec, Havling every : 500000 blocks, Total coins: 270000000 P7C, No pre/ipo/ico! ('h', 500000, 'b') caixcoin_caix.png 2014-05 CAIX CAIx 2014-05-09 15 relaunch 50400000 Startup Money Supply: ~ 1.6 Million CAIx, Stake Interest: 5% Annually, Confirmations: 10, Maturity: 500, Minimum Stake Age: 8 hours, Ports: RPC=1510 P2P=1511 diamondbackcoin_dbk.png 2014-08 DBK DiamondBack 2014-08-16 384 1040000 Symbol: DBK, Algorithm: CryptoNight, Block time: 384 seconds. Difficulty retargets each block. Total coins 1. 04 Million cryptofocuscoin_fcn.png 2015-05 3 10 FCN CryptoFocus 2015-05-07 1000000 60 220000 Algorithm: X11, Ticker: FCS, Block time: 60 sec, PoS Interest: 10%, min TX fee: 0.001, Tx confirmations:3, Confirmations:10, Total supply: XXX, Ico total: 1M chingelingcoin_chl.png DGW 1 block 2015-04 20 CHL Chingeling 2015-04-28 100% 60 1000000 a Merchant-based crypto-currency with a $1 Buy-Back-Guarantee & Internal Market-place in wallet. retrocoin83_r83.png 2015-08 8 R83 Retrocoin83 2015-08-31 25000000 83 60 swindle, incomplete source 83000000 RETROCOIN83 [SCRYPT] R83 viacoin_via.png 1 block 2014-07 VIA Viacoin 维尔币 2014-07-18 0 5 24 92000000 Welcome to Viacoin!,., We do not consider it safe or viable long-term, to build out potentially billion dollar ecosystems that rely on an upstream service we don’t have control over. The current attempts at embedding services in the Bitcoin protocol are constantly having to design workarounds as Bitcoin plays “whack-a-mole” with them., Viacoin is designed from the ground up to be both a digital currency and provide the backbone of ClearingHouse protocol, there is a 100% guarantee that ClearingHouse will always remain compatible with Viacoin., Furthermore Bitcoin is just too slow for these kinds of applications with 10 minute confirmation times which are often much longer (up to 1 hour). For decentralized exchange or shopping, that latency is simply not viable., Viacoin is blazingly fast, utilizing block targets of 24 seconds, it is 25 times faster than Bitcoin., UPDATE: You can learn more about the story of Viacoin and ClearingHouse here, Specifications:, Name: Viacoin, Symbol: VIA, Total supply: ~92,000,000 (92MM), Algorithm: scrypt (POW), Block reward: 5, Block time: 24 seconds, Difficulty retarget: every block, NO premine. NO instamine. Presale available (read about the presale below)., Viacoin presale is now closed! fastcoin_fst.png 2013-05 FST Fastcoin 2013-05-29 32 12 premined 165888000 re-target every hour, 32 coins per block, and 165.8 million total coins. LTC clone. 12 sec blocks. 0 starting diff. Fast Coin - FastCoin Abbreviation: FST Algorithm: SCRYPT Date Founded: 5/29/2013 Total Coins: 165.888 Million Confirm Per Transaction: Re-Target Time: Block Time: .2 Minutes Block Reward: 32 Coins Diff Adjustment. sydpakcoin_sdp.png 2 blocks 2015-08 5 SDP Sydpakcoin 2015-08-09 25000 180 -1 Sydpakcoin, SDP, Specifications, Algo : X13, Blocktime : 180 Seconds, Diff Adjust : every 2 Blocks, Proof of Stake : 2 % Per Year, Proof of Stake Min.Coin Age : 120 Minutes, Proof of Stake Max.Coin Age : Unlimited, Premine = 25000 Coins, Block 1 - 2400 = 50 Coins Reward , Block 2401 - 4800 = 25 Coins Reward , Block 4801 - 7200 = 12 Coins Reward , Block 7201 - 9600 = 6 Coins Reward , Block 9601 - 12000 = 3 Coins Reward , Last PoW Block : 12000, Total Block Reward during Proof of Work : 255400 gotfomocoin_gtfo.png 1 block 2015-05 GTFO Gotfomo? 2015-05-28 100000 60 200000 GotFomo? A Coin to Change The Universe Sick of giving your money away to people you don’t know? GET GOTFOMO?, SHA256 Algorithm, 60s block times, retarget every block, 100k Premine on block 2 (fair launch), 100-1000 blocks pays 100, 1000-2000 pays 50, 2k-3k pays 25, 3-4k pays 12, 4-5k pays 5, This will make ROUGHLY 200k , HIPOS will work like this:, It picks a random number 1-10, then between blocks, 0-1k it takes the random number x 100, 1-2k is random number x 25, 2-3k is random number x 12, 3-4k random number x 6 , 4-5k is random number x 3 , Then a variable rate based on the netweight after block 5k nigeriacoin_ngc.png 1 block Digishield 2014-03 NGC Nigeria Coin 47% burn 41900 120 locality 88000000000 Skein algorithm that runs cooler on GPUs and is 30% more energy efficient compared to Scrypt. It also dodges the Scrypt ASICs coming out for fairer initial distribution 2 minute block target (because anything less will have too many orphan blocks) 41900 coins per block (as suggested. it's just too funny to pass up. thank you otila) 88 billion total coins (we Nigerian's love the Chinese and the Chinese love 8's) Difficulty retargets every block (looking at a modified KGW algo) 3% dev premine for IPO, bounties, giveaways, development, support and maintenance, new feature developments etc. 47% inflation adjustment premine (NEW CONCEPT, we will destroy this premine via Proof of Burn to adjust for inflation created by PoW mining) orobit_oro.png 2014-02 ORO Orobit 50 coins per block, and 21 million total coins. ambercoin_amber.png 1 block 2014-08 7 128 AMBER Ambercoin 2014-08-17 50 120 premined 1000000000 Algorithm: X11, Total Coins: 1,000,000,000 (2,550,000 until the end of PoW, the rest in PoW/PoS hybrid age), PoW until Block 50,000, PoW/PoS hybrid after Block 50,000, Block Time: 2 minutes, Difficulty Retarget: every block, Confirmations on Transactions: 7, Confirmations on Mined Blocks: 128, Secure and Open Source, QR code support, POS INFORMATION, Stake Interest: 5% annually, Minimum Stake Age: 2 days, BLOCK REWARDS, Blocks 1 - 1000 = 100 AMBER, Blocks 1001+ = 50 AMBER minicoin_mini.png 2014-02 MINI Minicoin 1000000000.0 Quark-based cryptocoin with re-target every 20 blocks, block range coin rewards, and 1 billion total coins. cinnamoncoin_cin.png 2013-09 CIN Cinnamoncoin 0 27000000 a linear block re-target, POW / POS, 64 coins per block, and 27 million total coins. SPECIFICATIONS Scrypt Proof of Stake 5 Minute Block Target Linear Diff Retarget Block reward < 300 = 1 Coin to mitigate premine/insta-mine 1% proof of stake after 60 days of coin age 64 Coins Per Block, Halving every 2 years ~27,000,000 Million Proof of Work Coins over 10 years .01 Set Tx Fee 40 Confirms for Minted Coins to be spendable POW mining for 14 years 3 block transaction confirmation Public Key Bit: 28 PORTS RPC PORT: 19125 (test net 29125) P2PPORT: 19126 (test net 29126) . 1% yearly, SA 30/90 bigbullioncoin_big.png 2014-07 6 15 BIG BigBullioncoin 2014-07-29 1% 36 600 10000000 Specifications:, SHA-256, Ticker - BIG, 36 coins per block, Halving: 147,000 blocks, 10 minutes block targets, Block Maturity - 15, Transaction Confirmation - 6, 10 million total coins, 1% DEV Private PREMINE! - will be used for Promotion, Developing Tools Services and more ('h', 147000, 'b', ) predatorcoin_prdt.png 6 hours 2014-11 5 PRDT Predatorcoin 2014-11-14 1% 8250 60 1024650000 PREDATORCOIN is a rare, Scrypt Proof of Work only cryptocurrency. The maximum supply is 1.024.650.000 PRDT (coins) ever. Block target time is 1 minutes, which makes it very fast. PRDT uses KGW and a progressive difficulty adjustment to protect miners.Wallet Clone Scrypt code was used with the Litecoin., Coin specification :, Code clone from LitecoinD, PREDATORCOIN (PRDT) is only Scrypt, Max coin 1.024.650.000 PRDT , Halving 62100 blocks, reward block PRDT 8250 coins for Block, Target spacing 1 min, Target timespan 6 h, Diff retarget, Premine 1 %, Confirmation-maturity 5 blocks ('h', 62100, 'b') deafdollarscoin_deaf.png 168 hours 2014-11 10 DEAF DeafDollars 2014-11-16 10% 1 600 466667 Coin Type: Litecoin Scrypt, Halving: 210000 blocks, Initial Coins Per Block: 1 Coins, Target Spacing: 10 Min, Target Timespan: 168 H, Coinbase Maturity: 10 Blocks, Pre-Mine: 10 % Pre-Mine Wallet, Max Coinbase: 420000 + 46666.7 (10% Pre-Mine) ('h', 21000, 'b') adamantiumcoin_adm.png 20 minutes 2015-07 ADM Adamantium 2015-07-12 60 product support 1000000 Specifications: X11, POW TO POS, TOTAL POW: 10 000 Blocks, POW Reward: 100 ADM, POS Reward: 1 ADM, Block timing: 60 seconds, Difficulty retargeting: 20 Blocks, Stake age: 8 hours, BOOK ICO premine: 125k. ADM is a X11 POW to POS coin with a 1 week (10 000 x 60 second blocks) POW period of 100 coins per block. This gives a total of 1 million coins MAX since POS kicks in during the POW period and POS blocks are counted towards the POW period blocks. POS block reward is 1 ADM. darkdarkcoin_dark.png 2015-03 60 DARK Darkcoin 2015-03-25 0 240 rebrand of fudcoin 12908369 Ticker Symbol: DARK, Algo: SHA-256, Block Time: 4 minutes (240 seconds), Block Reward: Smoothly decreasing exponential (see chart below), Total PoW Blocks: 43,200 (~120 days), Money Supply: 12.8 M DARK, PoS Rate: 35%, PoS Min / Max Age: 6 days / 18 days, Mining/Minting Spendable: 60 blocks, Addresses begin with ìFî, Ability to create Stealth Addresses decreasing exponential riskcoin_risk.png 2015-02 RISK Riskcoin 2015-02-14 60 1000000 Blocks: 11520 BLOCKS, Block Time: 60 sec, Rewards: Block 1 : 75,000 RISK ICO COINS, Rewards: Block 2 - 2880: 10 RISK, Rewards: Block 2881 - 5760: 20 RISK, Rewards: Block 5761 - 11520: 15 RISK, Rewards: POS: 50% atomcoin_ato.png 2013-12 ATO Atomcoin re-target based on a moving average over 80 blocks, 1024 coins per block, and 100 million total coins. bastion Folklore combiner of 8 hashfns, one round each of (ECHO, Luffa, Fugue, Whirlpool, Shabal, Skein and Hamsi) including three random rounds plus an additional round of HEFTY-1. torrentcoin_tc.png 2014-06 TC Torrentcoin 2014-06-07 0.5% 150 8400000 Total Coins:84 Million Coins, Super secure hashing algorithm: 13 rounds of scientific hashing functions (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo,hamsi,fugue), Block reward is controlled by: 2222222/(((Difficulty+2600)/9)^2), CPU/GPU mining, Block generation: 2.5 minutes, Difficulty Retargets using Dark Gravity Wave, Premine:0.5% africanblackdiamond_abd.png 1 block 2014-05 ABD African Black Diamond 53000 1000000000 Scrypt, re-target 1 block, 53000 block reward, 1000000000 total coins. olympiccoin_oly.png 2014-02 OLY OlympicCoin 323000000.0 brigadecoin_brig.png 1 block 2014-06 4 30 BRIG Brigadecoin 2014-06-19 0 1000 30 2000000000 Brigadecoin is from a Team of Coin Developers which are Involved in the Development of lot of Cryptocoins.Brigadecoin is POW/POS Hybrid coin with X11 Algorithm and coin control feature, X11 Algo uses eleven hashing functions from the Blake algorithm to the Keccak algorithm making it very secure which really is needed for coins that do so well for CPU’s., Specifications:, Max Coins:2000000000 , Block time: 30sec, Block reward:1000 BRIG, Last POW block 500000 (300 Days of mining), Total POW coins: 500000000, Block retarget: every block, POS interest: 10% per year, Coins stake after 10 hours, Confirmations per transaction 4, Coins mature after 30 blocks, Premine: 0 (0%) daycoin_dac.png 2014-01 DAC Daycoin re-target every 2016 blocks, 50 coins per block, and 19.7 million total coins. spacebucks_sbx.png 2014-06 SBX Spacebucks 2014-06-12 0 1 180 1000000 3 Minute Block Times, 1 Coin Per Block, 1,000,000 Coins Total x15coin_x15coin.png DGW 2014-06 10 80 X15COIN X15Coin 2014-06-04 1% 1000 60 10000000 X15COIN uses the latest X15 Algorithm. X15 is the latest algorithm ,15 ROUNDS OF SCIENTIFIC HASHING (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo, hamsi, fugue, shabal, whirlpool). CPU or GPU mine X15COIN., Algo: X15, Total coin: 10 Million , Block time: 60 seconds, Difficulty Retargets:DGW, Decentralized MasterNode Network, Confirmations on Transactions: 10, Maturity: 80, 1% Premine -- For bounties, development., POW REWARD, Total Block: 10 000, Block 1- 10 000: 1 000 X15Coin, After block 10 000 POS will start ., 1% Annual PoS. bones_bones.png 2014-03 BONES Bones 3800000.0 re-target every 6 hours, 7-10 coins per block + difficulty bonus rewards, and 3.8 million total coins. buycoin_buy.png 2014-08 BUY Buycoin 2014-08-17 0.99% 99 99 99000000 esportscoin_esc.png 2014-07 ESC ESportsCoin 2014-07-23 0 30 1200000 26th July - First halving, PoW reward is now 42, 23th July - ESC launched successfully and smoothly, detailed developments time frame will be announced this weekend!, ESportsCoin, The ultimate solution of cryptocurrency for E-Sports industry, Fast Transactions & Secure & Anonymity & Dedicated & Rare, Website:, Follow us:, Chat:, ESend beta Whitepaper, Coin Specs, Algo: X11, 7 days PoW, PoW height: 20160, Total coins after PoW: ~1.2M, Block time: 30 seconds, Block rewards: 84 initially, halving at block 6720, 13440, (1 reward for first 150 blocks), PoS starts block: 15000, PoS interest: 5.9%, PoS Min & Max ages: 24 hours & 30 days, No Premine except IPO, P2P: 14528 RPC: 14529 worldfootballcoin_wfc.png Digishield(0) 2014-05 10 WFC World Football coin 1 block 39 60 3200000 World Football Coin is created for football fans around the world joining crypto currency miners together in football competition using there miner software to compete. Initial Public Coin Offering (IPCO) is 10% of total coins and will be sold to early investors from the World Football Coin store. No premine except 1 block reserved for IPCO as explained in block reward calculation table. boycoin_boy.png 2014-04 BOY BoyCoin >5% 10000000000 Block Time: 30 Seconds Difficulty Retarget Time: 1 hour Premine: over 5% for Justin Bieber Rewards Block 1 â 200,000 : 1-50,000 BOY Reward Block 200,001 â 300,000 : 1-40,000 BOY Reward Block 300,001 â 400,000 : 1-20,000 BOY Reward Block 400,001 â 500,000 : 1-10,000 BOY Reward Block 500,001 â 500,000 : 1-5,000 BOY Reward Block 600,001 â 700,000 : 1-2,500 BOY Reward Block 700,001 â 800,000 : 1-1,250 BOY Reward Block 800,001+ : 500 BOY Reward quitdoughcoin_quit.png 1 block 2015-01 QUIT QuitDough 2015-01-09 10% 120 30000000 TickerSymbol: QUIT, Algorithm:X13, Type:POW/POS, TotalCoins: 30,000,000, then 1-150 (1 Block ) , then 50 Coins per Block, BlockTime: 120, Difficulty Re-target: Every Block, Pre allocation:10% diocoin_dio.png KGW 2014-03 DIO Diocoin 2014-03-29 50 60 21000000 indocoin_idc.png 1 block 2014-05 IDC IndoCoin 2014-05-21 60 blocks 200000 60 194500000000 IndoCoin is aimed towards Indonesians. Rewards are soft capped at 194,500,000,000. After that every block generate 5000 IDC. 1-500,000 = 200,000 IDC 500,001-1,000,000 = 100,000 IDC 1,000,001-1,500,000 = 50,000 IDC 1,500,001-2,000,000 = 25,000 IDC 2,000,001-2,500,000 = 14,000 IDC 2,500,000+ = 5000 IDC ('c', 500000, 'b') navajocoin_nav.png 1 block 2014-07 60 NAV Navajocoin 纳瓦霍币 2014-07-06 30 rebrand of summercoin2 50000000 Symbol NAV, Algorithm X13 PoW/PoS (PoW ended, now PoS), Block Time 30 seconds, Coins swapped 25,232,976 (amount of coins on the SummerCoinV1 chain, swapped at 1:1 ratio), Coins not swapped 260,586.00121295 (donated to the foundation), RPC Port 33333, P2P Port 33330, Proof of Work (ended) , Total Coins (by PoW) 50,000,000 NAV, Difficulty Retarget Block 0 to 250: each 25 blocks, Block 251 and on: adjusted each block, PoW Max Block Height 14000 (in 7 days of PoW), Block Reward 1500, Block Halving Block 14,000 (150 NAV reward), Block 50,000 (15 NAV reward), Block 100,000 (end of PoW), Proof of Stake , Min Age 4 days, Max Age unlimited, Minted Blocks Maturity 60, Stake Interest 20% for the first year, 10% for the second year, 5% for every following year clamscoin_gcs.png 2014-06 120 GCS CLAMScoin 2014-06-10 4.9% 90 100000000 Algorithm: X11 Scrypt- CPU/GPU, ASIC Resistant, Max CLAMS: 100 Million, Block Rewards, month 1 - 3: 60.2 , month 4 - 6: 28.9 , month 7 - 12: 14.5 , month 12 - 36: 7, months 37+: 7, Block time: 90 seconds, Difficulty: Re-Targeting with Dark Gravity Wave DGW, 120 confirmations for blocks to mature, PREMINE: 4.9% reducing overseaschinesecoin_occ.png 2014-03 OCC Overseas Chinese coin 2014-03-01 fuckcoin_fuck.png 2014-01 FUCK Fuckcoin worldtradefundcoin_xwt.png 60 blocks 2014-10 XWT WorldTradeFund 2014-10-20 0 90 rebrand of WTFcoin 10000000 Algorithm: x15, PoW / PoS, TOTAL Coin Count: aprox 10,000,000 XWT, PoW Stage: 3 Days Mining (~90% of Total Supply), Main PoW phase ends with Block #2880, At Block #2881 PoW will continue with 1 XWT per block., PoS starts at Block #2881 with 1% Annual Interest, Block Time: 90 Seconds, Difficulty re-target every 60 blocks, Block Rewards:, Block 1 - 960: 5869 XWT, Block 961 - 1920: 2350 XWT, Block 1921 - 2880: 1180 XWT, Block 2881 - : 1 XWT, Block 70000 FULL POS, No IPO No Pre-mine poopcoin_poo.png 2014-06 POO Poopcoin 2014-06-12 5% 60 1200000 x11, 1.2 million coins max, Hybrid PoW+PoS (PoW for 7 days), 5% Premine to cover cost bounties, operating cost, and getting coin added to exchanges., will release rest of the specs later. giftcardcoin_gift.png 2015-03 50 GIFT GiftCardCoin 2015-03-22 60 7750000 Name: GiftCardCoin , Symbol: GIFT , X13 PoW/PoS Hybrid, Block time: 60 secs , Total Coins: 7,750,000, Confirmations: 50 blocks , PoW Total Blocks: 3,500 , PoS Interest: 92% , Min Stake Age: 6 hrs , Max Stake Age : 12 hrs , RPC Port: 39887 drachmacoin_drac.png DGW 2014-05 DRAC DRACHMacoin 2014-05-07 43 90 65500000 Proof of Work based. Mine using any of the 7 algorithms : sha256d(default), scrypt all the way through x11, x13, and x15., Difficulty is retargeted every block., 90 second block target per algorithm (30 second average across 7 algorithms)., 65.5 million total coins, 43+ coins per block. , Random superblock with reward., Up front super mining rewards., DarkGravityWave stackedcoin_stkc.png 2014-01 STKC Stackedcoin re-target every 2106 blocks, 50 coin block reward, and 21 million total coins. Stock Coin. watcoin_wat.png 2014-06 WAT Watcoin 2014-06-13 1% 99 199997800002 secure hashing algorithm X11 + KGW, making the coin 51% attack resistant and ASIC mining resistant., 99 second Block Targets, 9 Hour Difficulty Readjustments reducing quantumcoin_qtc.png 2013-06 QTC Quantumcoin 2013-06-06 Dead and relaunched scrypt-based cryptocoin (Litecoin clone) with a dynamic re-target, variable then 10 coins per block, and 256 million total coins. nautiluscoin_naut.png DS 2014-05 NAUT Nautiluscoin 8000 16100000 re-target using DigiShield, 8000 coins per block, and 16.1 million total coins. koalacoin_koala.png 2014-05 KOALA Koalacoin 2014-05-19 violetcoin_vlt.png 2014-04 VLT Violetcoin 50 10000000 50 coins per block and 10 million total coins. bankcoin_bank.png DGW 2014-05 10 50 BANK Bankcoin 60 100000000 pure PoS coin. After of 7 day as a PoW, the coin has been transfered to a pure PoS coin. POS generate after 8000 blck. Stake interest: 10%/year POS Min age: 3 day POS Max age: unlimited POW last: 10000 - 7 days emperorcoin_epc.png 2014-02 EPC Emperor coin 2014-02-25 80 240 58880000 kakacoin_kkc.png 2014-02 KKC KaKacoin 30000000.0 wcgcoin_wcg.png 2014-03 WCG WCG Coin 2014-03-11 secondary 100000000 x13coin_x13c.png 2014-06 110 X13C X13Coin 2014-01-06 201 60 50000000 tenfivecoin_10-5.png 4 hours 2014-03 10-5 TenFive Coin 2014-03-05 90 10500000 ASIC resistant and multi-pool protected implementation of a blend of scrypt Adaptive-N and kimotos gravity well. The progressive designed rewards scheme will provide flat rewards for the duration of the mining process. cpu2coin_cp2.png 2013-08 CP2 CPU2Coin yes premined mammothcoin_mamm.png 2014-06 MAMM Mammothcoin 2014-06-02 45 1000000 X13 PoW/PoS, Anonymous Wallet (to come), True random superblocks, PoW/PoS independent, thus very stable, Extreme fast transactions (average 45 seconds), Limited time PoW mining (about 2 months), then will be pure PoS, Coin required held only 1 day before PoS interests generated jfkcoin_jfk.png 2014-03 JFK JFKcoin ferengicoin_fer.png 2014-03 FER Ferengicoin 10000000000.0 re-target every 6 hours, 10,000 coins per block, and 1 billion total coins. Coin details, SHA-256 algorithm (just like Bitcoin to mine with ASIC, GPU or CPU), Block Rate (in seconds) 180, Initial value per block 10,000 coins, Block halving rate 50,000 blocks (3 months or less), Maximum coins: 1,000,000,000 coins (1 billion coins), Target timespan (adjustment of difficulty) re-target every 120 blocks (6 hours approx.), Coinbase maturity 200 blocks (newly miined coins will not become spendable until 200 blocks found to avoid double spending), 1% pre-mined tilecoin_xtc.png 2014-08 XTC 物联币 TileCoin 2014-08-29 100% 600 100000000 Ticker Symbol: XTC, Initial Coins: 100,000,000 of Tilecoin @ 0.00000542 BTC/EACH, Percentage of TileCoin will be burned for transactions., Consensus Algorithm: Counterparty (BTC Blockchain/VIA Blockchain - future), Additional Algorithms: Ed25519 signatures, AES-128, TBA, Coin Distribution: All Coins will be sold via BTER.com @ 0.00000542 BTC/EACH TileCoin (XTC), Block Interval: 10 Minutes (Bitcoin)/24 Seconds (Viacoin - future) galtcoin_glt.png 2014-02 6 GLT Galtcoin 2014-01-31 0 50 premined 42000000 Galt Coin - GaltCoin Abbreviation: GLT Algorithm: SCRYPT Date Founded: 1/31/2014 Total Coins: 42 Million Confirm Per Transaction: 6 Blocks Re-Target Time: Block Time: Block Reward: 50 Coins Diff Adjustment: Premine: 50,000 Coins. enigmacoin_enc.png 2015-04 ENC EnigmaCoin 2015-04-23 0 100 60 1000000 Name: EnigmaCoin, Ticker: ENC, POW+POS (X13), Block reward: 100 ENC (Block 1-100 - reward 1 ENC), Distribution: POW+POS, Money Supply: 1,000,000 ENC , POS starts at 7,000 blocks, POW ends at block height: 10,000, Total POW coins ~ 850,000, Block time: 60 seconds, POS interest: 5% per year gaiacoin_gaia.png 1 block 2014-10 10 240 GAIA GAIAcoin 2014-10-02 100% 60 node.js 24000000 Ticker: GAIA, Total coins: ~24 Million*, Algo: 100% POS, Annual Interest: 5%, Block time: 60 seconds, Min transaction fee: 0.0001 GAIA, Confirmations: 10, maturity: 240, Min stake age: 4 hours, no max age., Difficulty retarget: every block eaglecoin_ea.png 2015-06 EA Eagle 2015-06-30 120 40000000 Name: EAGLE, Ticker: EA, Algorithm: SHA256 POW, Difficulty Retargetting Algorithm: D.G.W. , Time Between Blocks: 120 sec., Block Reward 200, Block Reward Halving Rate 100000, Total Coins: 40000000 ('h', 100000, 'b') altcoin_atc.png 2014-02 ATC Altcoin SHA256 based cryptocoin with 512 coins per block and 268 million total coins. bticoin_bti.png 2015-04 BTI BTIcoin 2015-04-18 100% 60 product support vehicle 6000000 Name: BTI , Total Coins: 6,000,000 , Mining: 100% mined, Staking: 3% POS per year, This is not the traditional cryptocurrency model that is out there now. Our sole use for this crypto is to fund our development and provide people with a way to get our products at a discount. Our currency will be solely used to purchase physical hardware or software from our company at a discount than using fiat or BTC. spaincoin_spa.png 2014-03 SPA SpainCoin premined 50000000.0 scrypt bytecoin_bte.png 2013-04 BTE Bytecoin 2013-04-02 50 600 premined 21000000000 re-target every 2016 blocks, 25 coins per block, and 21 million total coins. The 1:1 bitcoin copy. Relaunched. Bytecoin - Byte Coin Abbreviation: BTE Algorithm: SHA-256 Date Founded: 4/2/2013 Total Coins: 21 Billion Confirm Per Transaction: Re-Target Time: Block Time: 10 Minutes Block Reward: 50 Coins per block Diff Adjustment 2,016 Blocks. ('h', 3153600, 'b') globalcurrencyreservecoin_gcr.png 2015-08 50 GCR GlobalCurrencyReserve 2015-08-19 100% 90 99000000 GCR is the the first home-based business opportunity with its own cryptocurrency coin and immediate opportunities for wealth-building and personal success., Type: Proof of Stake (PoS), Block Time: 90 seconds, Stake Interest: 5%, Block Maturity: 50 blocks, Minimum Coin Age: 8 hours socialnetworkcoin_snc.png 2014-07 10 50 SNC SocialNetworkcoin 2014-07-16 0 60 60000000000 INTRO, SNCoin is the first crypto-currency based on Bitcoin Theory rewards social media popularity., Proof of Popularity(PoP) and Proof of Work(PoW) both works in SNC System., SNCoin is designed with energy conservation and fair distribution in mind., Shared Weight from Friends Later Mechanism(SWFLM) made possible to quantify the popularities of social media users., Avoiding Botnet Attack while being friendly to Key Opinions Leaders., Mining with SCRYPT algorithm., SPECIFICATIONS, Block time : 1 minute, Difficulty Retarget : every block, Nominal Stake Interest : 1% annually, Min Transaction Fee : 0.0001 SNC, Fees are paid to miners, Confirmations : 10, Maturity : 500, Min Stake Age : 8 hours, no max age, P2P Port : 15712, RPC Port : 15713, Proof of Work : 59940000000 SNC, Algorithm : SCRYPT, Block Reward : 6000000 SNC, no halving, Height : 11 - 10000, MECHANISM, Weight = Max(6, Friends * (1 + Min(3, Followers / Followings))), SharedWeight = Weight / Followings, PV = Sum(SharedWeight)/ 6 + Weight, DISTRIBUTION SCHEDULE, In total, the amount of SNCoin is 60,000,000,000 (60 billion)., 20 billion coins will be fairly distributed among users according to their individual Proof of Popularity(PoP) PVs calculated in according to SWFLM and SNCoin Distribution Mechanism., 20 billion coins will be mined by miners on the basis of Proof of Works(PoW) and Mining Mechanism., 6 billion coins are scheduled for First IPO of SNCoin., 12 billion coins are scheduled for Second IPO of SNCoin., 2 billion coins rest are reserved for R&D and Business Operation teams of SNCoin., 6 billion SNCoins will be distributed in First IPO, BTC Address 1NDZLAfndv5rMj2gbgysC9fZaPM2Ta8TDo, LTC Address LastFZppJJPyeZE2535hw7eAHuWrK578VZ, The volume of BTC approach in this stage of IPO is 3 billion SNCoins., The volume of LTC approach in this stage of IPO is 3 billion SNCoins., (more IPO details referencing the WHITE PAPER), [ First IPO Period ] 00:00:00 on July 17th, 2014 ~ 00:00:00 July 24rd, 2014, DISTRIBUTION START-TIME, Upon finishing the development of SNCoin QT Wallet, SNCoin Official Website and Official SNCoin Distribution Platform., APPROACH, See more informations in our WHITE PAPER :, CONTACT US, Email : contact@sncoin.org, PS: For better network promotion of SNCoin, we need AGENT TEAMS all over the world (for different social-medias) who could share specific quota of reserved coins. Just email us if you are interested in. cashcoin_cash.png 2014-02 CASH Cash coin 47433600 variable re-target, variable block reward, and 47.4 million total coins. 10% yearly (0.6%), SA 30/90, Coins 2.26 fitcoin_fit.png 1 minute 2014-05 120 FIT Fitcoin 2014-05-14 0 {'gpu': '2222222/(((Difficulty+2600)/9)^2)', 'cpu': '11111.0 / (pow((dDiff+51.0)/6.0,2.0))'} 60 90000000 Fitcoin is the first digital currency that provides to its community the chance to have a healthier life. ('h', 1, 'y') solcoin_sol.png 2014-01 SOL Solcoin rabbitcoin_rbbt.png 2014-03 RBBT RabbitCoin 100000000000.0 supernetasset_token.png 2014-09 TOKEN SuperNET 2014-09-06 asset:15641806960898178066 SuperNET There is no limit on the amount of TOKEN available overall, though there may be short-term restrictions on TOKEN available at a given price if volumes are very high. If a large number are sold, this does not dilute other buyers' holdings, since SuperNET is backed by the cryptocurrency paid: SuperNET is not a cryptocurrency and this is not like a typical coin offering. (SuperNET is a basket of cryptocurrencies and revenue-generating assets than can be considered more like a closed-ended mutual fund.) The fundraiser will start at 14:00 GMT Saturday 6 September. It will continue for 2-4 weeks in total, for as long as there is sufficient interest. At the end, all TOKEN will be converted to UNITY, which can be withdrawn to a NXT wallet or traded on the AE, BTER and other supporting exchanges. bancorcoin_bncr.png 2014-01 BNCR Bancorcoin 2014-01-31 128 60 281600000 unitecoin_uni.png 1440 blocks 2013-12 UNI UniteCoin 2013-12-01 0 50 100000000 Unite Coin - UniteCoin Abbreviation: UNI Algorithm: SCRYPT Date Founded: 12/1/2013 Total Coins: 100 Million Confirm Per Transaction: Re-Target Time: 1440 Blocks Block Time: Block Reward: 50 Coins Diff Adjustment: Premine:. rastacoin_rtc.png 2014-01 RTC Rastacoin 353000000.0 civilisationcoin_civ.png 20 minutes 2014-09 10 50 CIV Civilisationcoin 2014-09-26 1% 1000 45 resuscitated 14000000 This coin was brought to my attention by it's developer (civilization.team) sometime back when allegedly according to him his first launch failed due to the coder then mined the pre-mine and ran off leaving him without anything , I do not know how true this or cannot comment on the same , anyhow he wanted me to change the port and regenerate the genesis which I did and sent back the source and wallet but then never heard of him. I had this lying around for quite a while and thought I'd share with your for better or worse. cybercoin_cc.png 2015-03 10 61 CC CyberCoin 2015-03-31 0 64 -1 CyberCoin PoW/vPoS 2.0, Release date: 9 PM GMT Tuesday March 31st, 2015/ No premine No ICO, Ticker: CC, Hashing algorithm: Scrypt, Interest: vPos 2.0 variable, Block generation: 64 seconds, Transaction confirmations: 10, Min age: 1 hour, no max age, Maturity: 61 confirmations standardcoin_std.png 2014-03 STD StandardCoin 400000000 mugatucoin_muga.png 2014-05 MUGA Mugatucoin 2014-05-31 0 30 50000000 PoW/PoS-based cryptocurrency with NO IPO and NO Premine. The purpose of MugatuCoin is to take over the world, by any means necessary. bellacoin_bela.png 2014-02 BELA Bellacoin re-target every 1000 blocks, 50 coins per block, and 54 million total coins. litecoinnew_ltn.png 6 hours 2015-01 120 LTN LitecoinNEW 2015-01-12 0 50 60 84000000 Start difficulty : 0.00024414, Premine 0%, Initial public offering (IPO) 0%,, Coin properties, Coin type Litecoin (Scrypt), Halving 840.000 blocks, Initial coins per block 50 coins, Target spacing 1 min, Target timespan 6 h, Coinbase maturity 120 blocks, Max coinbase 84.000.000 coins ('h', 840000, 'b') chaincoin_chain.png 2014-05 CHAIN ChainCoin pimpcashcoin_pimp.png 2014-11 10 69 PIMP Pimpcash 2014-11-23 3.9% 2000 69 69000000 69,000,000 Total PoW supply, 2,000 Coins per PoW block, PoW Algorithm: Scrypt, PoW + PoS Hybrid, PoS interest 69% Annually, PoS Min Stake Time: 8 hr, Pos Max Stake Time: Unlimited, 69 sec block target, 69 confirmations for blocks to mature, PoW End block: 33,186, transactions = 10 confirmations, rpcport=6969, port=6970, test ports = ( RPCport 7979 ) ( Port 7980 ), Blocks, 1 : 3.9% for Dev Fund ( 2,691,000 PIMP ), 2 - 30 : 1 PIMP (Anti Instamine), 31 - 33,186 : 2000 PIMP quedoscoin_qdos.png 2015-11 QDOS Quedos 2015-11-11 38400000 50 60 96000000 Type: Pure POW, Algorithm: X11, Blocksize: 8 MB, Total Supply: 96,000,000, Blocktime: 60 Seconds, Block Reward: 50 QDOS ('h', 576000, 'b') flirtcoin_flirt.png 1 second 2014-11 FLIRT Flirtcoin 2014-11-07 10000 300 1800000000 Flirtcoin is the official digital currency of Flirt Life. Flirtcoin provides adults with a new and exciting way to show their interest in each other “Flirting”!, Flirtcoin Technical Specifications:, Algorithm: SHA-256, Block Reward (initial): 10,000 Flirtcoins, Block Time: 5 minutes (300 seconds), Difficulty Retarget: 1 second (each block), Halving Interval: 75,000 blocks, Maximum Supply: 1,800,000,000 Flirtcoins (1.8 Billion) ('h', 75000, 'b') macdcoin_macd.png 2014-06 288 30 MACD MACDcoin 2014-06-12 0 variable 60 12000000 X13 algorithm, 12 Million Coin Proof-of-Work (PoW) Maximum*, 0% Premine, 7 Day Maximum Proof-of-Work (PoW) Mining Phase*, Difficulty retargets every block - nothing silly to interfere with the Proof-of-Stake (PoS) implementation, Confirmations - 288, Maturity - 30, 12% Proof-of-Stake (PoS)*, Min stake age - 1 Hour, Max stake age - 1 Day, , BLOCK REWARDS, 1-100: 10 MACDCoins, 101-500: 1,250 MACDCoins, 501-1500: 1,500 MACDCoins, 1501-3000: 1,250 MACDCoins, 3001-7000: 1,000 MACDCoins, 7001-8500: 1,250 MACDCoins, 8501-10000: 1,500 MACDCoins dokdocoin_ddc.png 2014-03 DDC Dokdocoin 50000000.0 re-target using Kimoto's Gravity Well, 100 coins per block, and 50 million total coins. bitzcoin_bitz.png 10 - 20 minutes 2015-02 BITZ Bitz 2015-02-13 60 20000000 Algo: X11 Pow/PoS hybrid, PoW has ended at block 100,000: ~ 2,000,000 coins, Time per block: ~ 1 minute (both PoW and PoS), Difficulty Re-target: 10-20 minutes, PoS 10% per year from block 0, As there will only be a finite amount of BITZ in existence the value will be determined by the demand for the currency. It has now been decided with consent in the community that the PoW phase will end at block 100,000 which is 2,000,000 BITZ. It will then be a pure PoS currency with an annual interest rate of 10%. This will keep BITZ in the realm of relatively rare when widely adopted but not deflationary like Bitcoin. bamitcoin_bam.png 2015-06 80 BAM Bamit 2015-06-27 1% 60 -1 Name: Bamit, Ticker: BAM, Algo: X11, Block time: 60 seconds, Block reward: 101-116, PoW Supply: 947307 BAM, Last PoW block: 8640 6 Days PoW, PoS Interest: 23% per year, RPC Port: 9988, Coinbase Maturity: 80, 1% Premine (Block 1), Fair Start (No reward Blocks 2-10), Reward Blocks begin on block 11 surgecoin_srg.png 2014-04 SRG Surge Coin 50000000 PoW Algorithm: Scrypt Difficulty Re-target: KGW Block Time: 60 seconds Initial Block Reward: 50 Coins per block Max Supply: 50-Million Surge Coins horsepowercoin_hsp.png 2 mins 2015-09 120 120 HSP HorsePowerCoin 2015-09-20 6% 60 1000000 Name: HorsePower, Symbol: HSP, Algo: Scrypt, PoW, Block time: 1 min, Retargeting: 2 mins, Maturity: 120 blocks, Confirmations: 120, P2P Port: 32321, RPC Port: 32322, Premine: 6%, horsepowercoin.conf, Blocks 2 - 1000 = 100 Coins, Blocks 1001 - 3000 = 200 Coins, Blocks 3001 - 6000 = 300 Coins, Blocks 6001+ = 250 Coins arpacoin_arp.png 2015-06 200 ARP ArpaCoin 2015-06-03 60 400000000 Sha256D, Dynamic rewards , Block Spacing: 60 Seconds , Stake Minimum Age: 5 Hours, POS 2.0, Port: 56631 RPC Port: 57631 , Max POW blocks: 5000, Anti-instamine of 30 blocks., 100 coins per block from block 31-5000, 50 coins per block from block 5001-10000, 25 coins per block from block 10001-15000, 12.5 coins per block from block 15001-20000, 10 coins per block from block 20001 and beyond., Our Arpanodes will get 60% of pos rewards hobonickels_hbn.png 2013-07 HBN HoboNickels 2013-07-24 0 5 30 120000000 Fast NVC clone. Extreme, over 100% ROI. 0 starting diff. Hobo Nickels Coin - HoboNickelsCoin Abbreviation: HBN Algorithm: SCRYPT Date Founded: Total Coins: 120 Million Confirm Per Transaction: Re-Target Time: Block Time: 30 Seconds Block Reward: 5 Coins Novacoin-based. None 100% NVCS (1.9%), SA 10/30, Coins 5.71 entrustcoin_etrust.png 2015-04 125 ETRUST Entrustcoin 2015-04-04 20000 10 60 experimental 1000000 Welcome to eTRUST, the only coin with a system designed to give a guaranteed 10% return every day. The blocks will pay out as follows: 10 Total, 2 eTrust Coins rewarded to a miner and 8 eTrust Coins deposited into the eTrust payout address (publicly listed) If you send money in to the address, your payout will be 110% with the funds accumulated from miners within a particular duration. Everyone will be able to receive the 110% without ever having to lose due to the skimmed funds always being vested into the fund account.By utilizing this particular 4:1 ratio we will be able to pay back EVERYONE. Premine will be used for giveaways and promotions and future developments., Ticker: eTRUST, Distribution: PoW - Fountain/Investing System, Algorithm: Scrypt, Block Time: 60 Seconds, Premine: 20,000 eTRUST, Maturity: 125 Confirmations, Block Reward 10, 8 to eTRUST funds account, 2 to the miner cthulhucoin_off.png 20 blocks 2013-09 30 OFF Cthulhu 2013-09-14 0 5 60 premined Quark-based cryptocoin with re-target every 20 blocks and 5 coins per block. Cthulhu Coin - Cthulhucoin Abbreviation: OFF Algorithm: SCRYPT Date Founded: 9/14/2013 Total Coins: Confirm Per Transaction: 30 Blocks Re-Target Time: 20 Blocks Block Time: 1 Minute Block Reward: 5 Coins Per Block, halving after 6 months Diff Adjustment: 20 Blocks Premine: 10k. ('h', 6, 'm') evilcoin_evil.png 2015-12 5 EVIL Evilcoin 2015-12-01 0 60 21024000 Coin Name : Evil coin, Ticker : EVIL, Algo : x11, POS or POW : Both, PoS Min Age : 1 Hour, PoS Max Age : 720 Hours, Mature : 5 Blocks, Max Coin Supply : 21024000, PoS : 2 %, Block Time : 60 seconds, Last PoW Block : 525600, PoS Start after : 525000, Premine : 0 libertyreservecoin_lr.png 2014-12 LR LibertyReserve 2014-12-20 10000000 60 25000000 25mil total coins (10 mil pre mined) 100% pos Last pow block 5000 (all blocks of 0 reward) nonamecoin_nonc.png 1 block 2014-11 7 120 NONC NoNameCoin 2014-11-17 5% 450 60 12000000 opalcoin_opal.png 2014-09 OPAL Opalcoin 2014-09-11 0 1000 90 relaunch 15000000 Opal – X13 CryptoCurrency, Algorithm: X13 POW/POS starts on block 15,000,, Opal is a re-brand, and entire re-release of OnyxCoin V2. The first OnyxCoin (V1) was, unfortunately, a scam. The original developer had coded a hidden premine and sold these coins when Onyx initially hit an exchange – the coin effectively died at this point. Shortly after this another developer decided to try and resurrect OnyxCoin by launching a new version of the coin (named OnyxCoin V2, surprisingly!) after making some changes to the PoW schedule, and removing the hidden premine. OnyxCoin V2 was a completely new coin – 100% detached from the original OnyxCoin – in everything except the name. The launch was successful, as was the mining period – and also the transition to PoS. Sadly, the Onyx name was too far tarnished by the original scam, and interest in the coin has faded. It is at this point that Opal was born. Opal uses the OnyxCoin V2 blockchain, which has always been a fresh new chain, but has been completely re-branded and is now backed by a team of committed members who wish to see the coin succeed. We liked what we saw in the codebase, and the wallet, and felt this was a great foundation from which we could build on - the first task being to detach it completely from the original name., RPC Port: 51990, P2P Port: 50990 informationcoin_itc.png 1 block 2014-05 ITC Informationcoin 70000000 re-target every 1 block, 0 coins per block (POS only), and 70 million total coins. solacecoin_solis.png 2015-08 SOLIS Solace 2015-08-30 0 <5 60 8800000 Solace has a very low emission rate that will produce no more than 8.8m coins [no premine]. Solace algorithm is proof-of-work forked from cryptonote. Block reward is <5 coins, where as other coins block rewards are 25+ coins. Solace overall goal is to build curiosity within the crypto world and challenge the potential users of Solace to understand how cryptonote algo works and what minor changes were made to Solace. blackcatcoin_bcat.png 2014-06 BCAT Blackcatcoin 2014-06-02 0 30 1100000 POW LAST BLOCK: 10000, Block time: 30 seconds, POS Min age: 15 minute, POS Max age: unlimited, Confirmations: 10, Maturity: 400, Stake interest 5%, PREMINE: 0% chavezcoin_chvz.png 2015-06 30 days CHVZ Chavezcoin 2015-06-17 100% 60 16382264 Algorithm: X11, Intial coins: 8.191.132, total coins: 16.382.264, BlockTime: 60 sec, Stake: 5% anual, Minimum Stake time: 7 days, Coin Maturity: 30 days airdrop-faucet Dissemination via airdrop initially, then by faucet. sheckelcoin_sks.png 2014-10 160 SKS Sheckel 2014-10-07 4% 24 300000000 Encrypted Messaging, Stealth Addressing, POW MAX Height: 50 million (Roughly 15 years), Block Times 24s, Premine 4%, MAX COIN: 300 Million, Est Supply: 250M, 160 confirms for maturity, 2% Interest, Staking starts 1hr after mining begins, Payout initially is 10 SKS, Every block, payouts are reduced by 20 satoshis stronghandscoin_shnd.png 2015-09 SHND StrongHands 2015-09-30 150 1000000 Coin Stats, name - StrongHands, PoW Algo - Sha256D, PoS - min stake age - 30 days, PoS - Reward 100%, Blocktime - 2.5 mins ecocoin_eco.png 2.25 minutes 2013-08 5 20 ECO Ecocoin 2013-08-26 0 45 10200000 Scrypt-based Eco Coin - EcoCoin Abbreviation: EGC Algorithm: SCRYPT Date Founded: 8/26/2013 Total Coins: 10.2 Million Confirm Per Transaction: 5 for tx 20 for mint Re-Target Time: 2.25 Minutes Block Time: 45 Seconds Block Reward: 7 Coins - Varies Diff Adjustment: Premine:. (7, 'v') indexcoin_idc.png 2014-01 IDC Indexcoin Scrypt storjcoinx_sjcx.png 2014-07 SJCX Storjcoin X 2014-07-18 asset: 500000000. firecoin2_fc2.png 2014-05 FC2 Firecoin2 2014-05-16 retarget every 2016 blocks, 200 coins per block, and 336 million total coins. gamecoin_gme.png 30 minutes / 12 blocks 2013-05 GME GameCoin 2013-05-12 0 1000 150 1670000000 Unfinished FTC clone , killed on launch. Failed to revive once, now finally revived. Game Coins - GameCoins Abbreviation: GME Algorithm: SCRYPT Date Founded: 5/12/2013 Total Coins: 1.67 Billion Confirm Per Transaction: Re-Target Time: 30 Minutes Block Time: 2.5 Minutes Block Reward: 1000 Coins Diff Adjustment: 12 Blocks Premine:. mastertradercoin_mtr.png 2015-02 3 50 MTR MasterTradercoin 2015-02-13 10000000 60 trading system 10110000 MasterTrader Coin is new cryptocurrency aimed at developing social connections with avid Crypto enthusiasts seeking professional insight, analysis, and ideologies revealing the art of trading. ascentcoin_asc.png 1 block 2014-05 4 30 ASCE Ascentcoin 2014-05-16 2% 150 30 25000000 50000 POW blocks Ascentcoin is deisgned by using X11 Algorith , this will lower your heat and power usage while mining so it will cost you less and damage your hardware less and you can hold more! PoW Max coins: 75,000,00 PoS interest 15%, Minimum Coin Age: 8 Hours PoW Algorithm: X11, PoW + PoS, Symbol: ASCE, PoS interest 15%, , Minimum Coin Age: 8 Hours, 30 second block target, 30 confirmations for blocks to mature!, Retarget difficulty each block, PoW Total blocks: 50,000 POW blocks, PoW Payout: 150 per block, PoW Max coins: 7.5 Million, PoW Confirmations: 4, NO IPO, 2% Premine, ('h', 50000, 'b') edwardsnowdencoin_esc.png 4 blocks 2015-05 6 200 ESC EdwardSnowdenCoin 2015-05-13 0 10000 60 2628000 Edward Snowden Coin 0.0.1 [ESC] Node Hardcoded, Proof of Work SHA256D Cryptocurrency based on Unobtanium 0.9.x (Bitcoin 0.8.99), Kimoto's Gravity Well, EventHorizonDeviation = 1 + (0.7084 * pow((double(PastBlocksMass)/double(28.2)), -1.228)), BlocksTargetSpacing = 1 * 60, PastSecondsMin = TimeDaySeconds * 0.23, PastSecondsMax = TimeDaySeconds * 1, Block Reward Halving = Every Day, Blockchain, Constant block reward: 10000, Maximum coins: Much More, 200 blocks to mature, 6 transactions to confirm, Minimum subsidy of 0.00000010 for all transactions, Estimated PoW lifespan: 5 years ('h', 1, 'd') roseonlycoin_rsc.png 2014-05 RSC Roseonlycoin unattaniumcoin_unat.png 4 blocks 2014-09 6 UNAT Unattaniumcoin 2014-09-10 0.16 8 1000000 Algo: SHA-256, Reward: 0.16 UNAT, Spacing: 8 seconds, Retarget: Every 4 blocks, Target block time will be 8 seconds after block 52, down from 2.5 hours, Block Reward has been decreased to 0.16 UNAT per block, down from 200, Difficulty will readjust every 4 block, up from 2, The daily mining yield remains unchanged. Roughly 1920 UNAT is the target mining reward across the entire network, both pre and post fork, The ratio of networkseconds:UNAT per day remains unchanged, Updated seednodes, New icons and logos, 6 confirmations will now take only 48 seconds, down from 128 days, Network hashrate now available in Windows and OSX via getnetworkhashps, Network hashrate based on the last 5 blocks, down from 120, nopecoin_nope.png 2014-10 10 10 NOPE NopeCoin 2014-10-18 10% 60 20259990 X-11 algo 20259990 coins, POS 13% starts at Block 500, 60 Sec Block Time, 10 Confirms on TXs, 10 Confirms on mined blocks, Stake Age Min=1hr, Max=10days, Premine - ~10% leprocoin_lpc.png KGW 2014-01 LPC Leprocoin 2014-01-05 0 4 15 42100000 Lepro Coin - LeproCoin Abbreviation: LPC Algorithm: SCRYPT Date Founded: 1/5/2014 Total Coins: 42.1 Million Confirm Per Transaction: Re-Target Time: Kimoto Gravity Well Block Time: 15 Seconds Block Reward: 4 Coins Diff Adjustment: Kimoto Gravity Well Premine: None. volumecoin2_vol.png KGW 2015-08 VOL Volume 2015-08-15 1600 16 60 relaunch 10000000 ('h', 50000, 'b') airdrop Dissemination to denizens huntercoin_huc.png 1 block 2016 lookback 2014-01 HUC HunterCoin 2014-01-27 0 10 60 entertainment 42000000 Merge-mineable SHA256 / scrypt cryptocoin with re-target every block, 1 coins per block, and 42 total coins. Coin that you can aquire by playing simple game. Hunter Coin - HunterCoin Abbreviation: HUC Algorithm: SHA256 + Scrypt Date Founded: 1/27/2014 Total Coins: 42 Million Confirm Per Transaction: Re-Target Time: Every Block Based On Last 2016 Blocks Block Time: 1 Minute Block Reward: 10 Coins per block Diff Adjustment: Every Block Based On Last 2016 Blocks Premine:. litebar_ltb.png 24 hours 2014-02 4 LTB LiteBar 2014-02-12 0 (1, 5) 180 1350000 Lite Bar - Litebar Abbreviation: LTB Algorithm: SCRYPT Date Founded: 2/12/2014 Total Coins: 1.35 Million Confirm Per Transaction: 4 Per TX Re-Target Time: 24 Hours Block Time: 3 Minutes Block Reward: 1-5 Bars Diff Adjustment: Premine: None. ravencoin_rvn.png 2015-04 RVN Raven 2015-04-16 100% 60 760000 Raven [RVN], Algorithm: Scrypt, Block Time: 60 seconds 6% POS, Supply: 760,000 RVN, Stake Min Age: 1 Hours, ABOUT RAVEN Raven is an entirely proof of stake cryptocurrency, to be distributed through a sale of the total pre-stake supply at a price of 0.00002500 BTC per RVN. To prevent a staking monopoly, the maximum puchase of RVN will be 0.5 BTC per user. The minimum purchase is 0.001 BTC. To prevent sockpuppet accounts, buyers must provide a platform to show their active participation in the cryptocurrency community (Reddit, Twitter, Bitcointalk, etc). The supply sale will last for 14 days and any unsold RVN will be burned., We, as the development team, will retain 0.25% of the supply to stake until the network is stable, at which point these coins will then be burned also., Proof-of-work is about securing the network, but mining costs on alternative cryptocurrencies are so small that this proof-of-work system only gives the illusion of security. We have decided to distribute through a small premine sale in order to raise funds for the core developer to be able to dedicate himself to securing the network on an ongoing basis, run full nodes and provide core services for the Raven network., Raven currently includes stealth addressing and encrypted messaging, with a primary goal to implement a modified version of DarkSend and MeshNodes. To purchase RVN in the supply sale, you must send between 0.001 BTC and 0.5 BTC to the following development address: 1Cw8kuwcNFgejkdY93nJmNgq6gRC5AEnrv. Then, please message this account with your transaction ID, corresponding RVN address, and active cryptocurrency community account (if it is not your Bitcointalk account), your Raven will be delivered to the RVN address after you have passed the sockpuppet checklist. Otherwise, your BTC will be refunded. turroncoin_tur.png 2015-08 TUR Turroncoin 2015-08-31 100000 60 1000000 TURRON, TUR, POS, 8% A Year, Min Stake Age is 2 hours, Max Stake Age is 7 days, Minting is 50 blocks, Tx Confs is 2, TX Fee is 0.0001 TUR, 100,000 POW Supply in block 1 for the TIO servxcoin_xsx.png 2014-08 XSX ServX 2014-08-21 75000 12 60 12000000 Coin Name: ServX, Symbol: XSX, Algorithm: Scrypt Pure PoW, Total Coins: 12 Million, Block time: 60 Seconds, Block Reward: 12 Coins (No halving), Premine: 75,000 chncoin_cnc.png 2013-11 CNC CHNcoin 中国币 2013-11-03 88 60 premined 462500000 re-target every block, 88 coins per block, and 462.5 million total coins. LTC clone, announced on chinese forum first.0 starting diff Chinacoin - China Coin Abbreviation: CNC Algorithm: SCRYPT Date Founded: 11/3/2013 Total Coins: 462.5 Million Confirm Per Transaction: Re-Target Time: Block Time: 1 Minute Block Reward: 88 Coins Diff Adjustment. bitburst_btb.png KGW 2014-05 BTB Bitburst 2.6% 0.0035 15000 We have developed a new wallet with a new Graphic User Interface with many additional built-in functions: real-time price of major exchanges, webchat, etc. dashcoin_dash.png 2014-01 DASH 达世币 Dash 2014-01-18 84000000.0 re-target using Kimoto's Gravity Well, block reward controlled by moores law, and 84 total coins. mapcoin_mapc.png 2015-08 MAPC Mapcoin 2015-08-20 100% 60 3000000 Total supply: ~3,000,000 (3MM), Premine: 100% Premine, Devs Premine: 1%, Algorithm: x11 (POS), PoS: 2% a year latium_lat.png 2014-05 LAT Latium 2014-05-15 swansoncoin_ron.png 2014-03 RON SwansonCoin 742000.0 re-target using Kimoto's Gravity Well, random block reward, and 741,776 total coins. fakecoin_fkc.png 2014-03 FKC FakeCoin 900000000 - scrypt-based cryptocurrency - 6 confirmations for transactions - 50 coins per block - 899999900 total coins - 2016 blocks for changing difficulty frogcoin_fgc.png 2015-03 30 FGC Frogcoin 2015-03-20 250 25 200000000 Algorithm: Scrypt, Reward: 250 FGC, Block Time: 25 sec, Maturity: 30 blocks, Havling every : 400000 blocks, Total coins: 200000000 ('h', 400000, 'b') gaycoin_gay.png 2014-03 GAY Gaycoin 15600000.0 1560 coins per block and 15.6 million total coins. pandacoin_pnd.png 2014-02 PND Pandacoin 100000000.0 osmosiscoin_osmo.png 2014-03 OSMO Osmosis 1% 15500000 Block time: 2 Min Retarget up: Every 5 blocks (max 100%) Retarget down: Every block (max 200%) Block reward: 0.1 OSM Total supply: 15,482,880 + 1% PA Premine: 100% monkeycoin_mky.png 2014-02 MKY Monkeycoin 2014-02-21 antarctic_acc.png 2014-05 120 ACC Antarctic ((1, 250), 1000) 60 120000000 kimdotcoin_dot.png 2014-03 DOT Kimdotcoin 500 890000000 500 coins per block and 890 million total coins. quantradilbrycoin_q1.png 2015-07 Q1 Quantradilbry 2015-07-28 60 1000000 Name: Quantradilbry, Ticker: Q1, Algo: Srypt, Block Time: 1min waccoingold_wacg.png KGW 2014-06 WACG WaccoinGold 2014-06-21 60 600000000 Algoritm Hashing: Scrypt POW , Max Coin: 600 Millones, Block Reward: The reward of waccoin was specially designed to guarantee the value of the coin. The system was designed under the advice of experts in macro and micro economy to be a stable and profitable coin in short and long therm. When the difficulty grows the reward grows as well following the function described in the graphic: , Excel Block Reward and profit calculator:, Block time: 60 segundos , Dificulty retarget: Kimoto Gravity Well (KGW = 1 + (0,7084 * (PastBlocksMass/144)^(-1,228)), Cloud Seed Nodes (Faster conections) , Built-in DNS Seed computercoin_cmpt.png 2014-07 6 120 CMPT Computercoin 2014-07-10 3,000,000 CMPT(20%) will pre-mine for IPO 60 14580000 Computercoin., Computercoin is a cryptocurrency with X13 built in., Exchange:, Coin-ga:, Vote:, Vote on Cryptoine!, Vote on Bter!, Logo desgin:, First:, Shareholder list:,, About:, The computer is a great initiative, it changed the human world, changed people's lives. Wherever you are, whatever you do, you must use it. This is a miracle., Computer linked to the people around the world, whether rich or poor, beautiful or ugly, we can equal exchange., n this world, no computer can not do a thing. Shopping, travel, cooking, live, learn, work, any industry, any corner will have his presence. Computercoin will achieve all of the above functions. Do not believe? Let us take a closer look at it. , Specifications:, Algorithm: X13 + POS, Total: 14,580,000, Difficulty retarget : every block, Mined Block Confirmation: 120, Transaction Confirmation: 6, Block Rewards:, 0: (4million for IPO), Block 1-200: 0.1 block rewards, After 201 block there will be 2000 rewards, until 7 days later.(8200 block), Pow:14,580,000, PORT:, Port: 9461, rpcport=9462, Wallet:, Windows:, V1.0.0.1, Mac:, Source:, POS Interest:, Min stake age: 1 day, Max stake age: 1000 days, 18% in the first year, 15% the second year, 13% in the third year, 10% in the fourth year, Stay at 5% final, IPO:, IPO is necessary to run the project. Part of the funds will be used to stabilize the market., IPO will last 10 days, do not miss a chance, otherwise you will regret., IPO total coins: 4 million(20%).Total shares:100, Up Investment: 10 shares,every share contains 40000 CMPT., Each stage ended, Then go to the next stage., New shares:, Stage 1: 0.05 BTC (25/25 shares) supply 1,000,000CMPT, Stage 2: 0.08 BTC (7/35 shares) supply 1,600,000CMPT, Stage 3: 0.14 BTC (0/20 shares) supply 1,000,000CMPT, Bounties and Giveaways: 20 shares supply 800,000CMPT, My BTC address:1awRRT56LyLpdHtBzj5owqNXpFfndNWjq, Escrow by cooldgamer;u=79720, Fee:1%. It is paid by buyer,it means if you join 0.05BTC,you need paid 0.05*0.01+0.05=0.0505BTC., Round 1:1Ggy8KkkMq3pohBpLYqGWfhh2Jp9zRv1FT, Round 2:12ibX1dMf8BUxTi4izDEc5pPFtb7Q5kcp7, Don't forget send me and escrow a private message:, 1) The amount you're investing., 2) Txid, Pool:, Hashharder:, 0% fee,, IRC: ,, Block Explorer:, First:, Second:, Faucet:, , Game: , , Translation: , Chinese:. by Ribuce.J 10000CMPT, German: by DrMotz 10000CMPT, Portuguese: by Mxrider420z, Spanish: by alexvillas, Romanian: by kencoles, Italian: by Anon39, Bouties:: , Multipool: 2 shares., Mac wallet:2 share., Logo design:1 share., Android wallet:1 share., Translation:10,000CMPT, First Block Explorer:1 share, Second Block Explorer:0.8 share, Faucet:1share., Game: 1 share., FAQ: , 1,Why IPO? , Some people have not hash to mine, but they want join in. Do you agree?, 2,How much will pre-mine coins?, 3,000,000 CMPT(20%) will pre-mine for IPO. Do you agree?, 3,When the pre-mine coins distribution? , Within one hour after launch wallet, download it and PM me ,or you can send mail to me Do you agree?, Have any questions? PM me or send e-mail to me., E-mail:computercoindev@gmail.com, Twitter: polishcoin_pcc.png 2014-03 PCC PolishCoin 150000000.0 v8coin_v8.png 2014-07 20 V8 V8coin 2014-07-01 1% 500 120 10000000 Total Number of POW coins : 10,000,000, Block rewards : 500, Number of blocks : 20000, Block Time : 120 sec, Confirmation : 20, POS : 1% yearly, Premine : 1% = 100,000 V8, IPO : 3%= 300,000 V8, First 50 blocks reward will be 1 V8 kingdomcoin_king.png 1 block 2014-11 KING Kingdomcoin 2014-11-18 30% 60 2417600 Algorithm: X13 -(blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo, hamsi, fugue), Ticker: KING, Total supply: 2.417.600 KING , Max POW Height: 20160 ( 14 days in total ), Block Time: 60s, PoS Interest: 2.5% per year, Min stake Age: 2.5 hours, ICO premine is 30%, 750000 coins will be up for sale., Diff retarget every block (it’s the safest and best option for smooth mining), Premine will be located in the first block and we will set a 0 block reward for the initial 100 blocks. Hope this is fair for everyone. insanitycoin2_ins.png 2014-12 1 INS Insanitycoin 2014-12-15 5% 1 43200 1000000 Specifications:, Max total coins: 239234, Block time: 12 hrs, Coins per block: 1, Maturity after: 1 block, Premine: 5% (11961 coins), Points of note 1) Insanely difficult to mine and very scarce 2) May or may not be a joke (depends on how I feel and whether people want this to continue) 3) MASSIVE PREMINE - this will take 16.4 years to be matched by mining!!! 4) Novelty 5) Fun (for me) 6) THIS WILL MAKE YOU RICH (warning: for legal reasons this will probably not make you rich), About the premine - a tribute to Nakamoto's huge instamine of Bitcoin and the massive instamine of Darkcoin - although I probably won't dump this early (it's insane not stupid) I like the power of having it as a kill switch in case of establishment takeover - the government/NSA/CIA/Walmart/the guy who owns the shop down the street are not going to get anything useful - if it ever has any value I will auction limited quantities (to reduce the risk of devaluation) in order to fund further (real) technical development - may be used to build a giant golden goose and buy the Spruce Moose flowercoin_flow.png 2014-07 FLOW FlowerCoin 2014-07-30 0 60 500000 English name:Flower Coin, DEV team:a team with worldwide teamers, Our team have 16 workers., Born Time:2014.7.30, Total amount:500000, Premine:ofc no, Flower coin is a X11 coin Roll Eyes Roll Eyes, Wallet:, Sourcecode:, :::::::::: Features :::::::::::, - 60 seconds block time, Which means Fast confirming transactions, - 20000 Blocks, 1-5000 50, 5000-10000 25, 10000-15000 15, 15000-20000 10, - Every trade needs 6 confirminations., :::::::::: MINING POOLS ::::::::::,,,, , we want to make a honest and successful altcoin,and we promise TLC is what we want to make!, :::::::::: BLOCK EXPLORERS :::::::::::,, :::::::::: Something we prefer to do :::::::::::, Flower coin not only represents trend of the future, and is the pioneer of techniques, ideas, and vision., one of Altcoin's first vision is “to remove the concentration”,so with the increasing of altcoins' price,ASIC was borned., ASIC has changed the original beauty, resulting in the altcoin can only be hold in a few people, including the development of ASIC vendors, or early buyers of the ASIC,and “decentralization” disappear since then., so Flower coin to be a poineer to realize 'fairness' and fight against ASIC., we make it impossible to mine Flower coin by ASIC with Adaptive N-Factor, and other Technology:Kimotos Gravity Well,will also be used in Flower coin to ensure the investors' interest., We take new technology to ensure announce the wallet and source code.And the wallet can be used automatically after the end of countdown, so that we can promise nobody can premine. anarchistprimecoin_acp.png 8 hours 2015-03 ACP AnarchistPrime 2015-03-24 0 32 180 53760000 Bitcoin (SHA256) , Halving: 840,000 blocks , Initial coins per block: 32 coins , Target block spacing: 3 min , Diff retarg: 8hrs, Premine: 0, Max ACP minted: 53,760,000 coins ('h', 840000, 'b') coloradocoin_colc.png 2014-09 COLC Coloradocoin 2014-09-05 45 303720000 Algorythm - SCRYPT, Type - Proof of Work/Proof of Stake(6% Annually), PoW Phase - 20,000 blocks, Block Reward - 0.0001 - 1.000(reward raises with difficulty), Time Per Block - 45 seconds, Max Coins - 303,720,000, Pre-Mine - None(Unless IPO then only IPO amount), IPO - TBD(We probably wont be aren't fully decided) gcoin_gcn.png 2014-04 GCN Gcoin 2014-04-08 3% 81 33000000 Gcoin is a Philanthropy based cryptocoin developed by Freemasons for the "love of humanity" Please note: This is not the official cryptocoin of any recognized Masonic body in the world at this time. This coin has been developed by Freemasons, It was developed for charitable purposes only. Scrypt, Kimoto Gravity Well, 2 minute blocks, 150,000 reward per block, Subsidy halves every 533333 blocks, Total Coins: 200 billion., Pre Mine: 40 billion. spktrcoin_spktr.png 2015-06 60 SPKTR Spktr 2015-06-23 60 22700000 Ticker: SPKTR, Algorithm: SHA256, RPC Port: 24511, P2P Port: 24514, 60 Seconds Per Block, 60 Blocks to Confirm, 20MB Blocksize, POW/POS, Min. Staking Age: 3 Hours, Est. Total Supply:1,370,000 POW, 900,000 POS magicoin_magic.png 2014-04 MAGIC Magicoin re-target every 2016 blocks, 50 coins per block, and 84 million total coins. scarcecoin_sec.png 1 block 2014-11 SEC ScarceCoin 2014-11-17 20 0.1 60 26289.99843358 ScarceCoin specs., mining algo: scrypt, 1 minute block targets, 0.1 coins per block, subsidy halves in 131400blocks (~3 Month), First year mined 24647,5 SEC, 26289,99843358 total coins after 5 years and 9 month, 1 blocks to retarget difficulty, Lowered fee: 0.00001000 SEC minimum ('h', 131400, 'b') 7coin_7.png 2014-01 7 7coin 0.00000700 77 7 bitcashcoin_bcsh.png 2014-08 4 120 BCSH Bitcash 2014-08-23 60 5000000 Ticker: BCSH, Distribution: PoW/PoS, Algorithm: x14, Total coins: 5'000'000 in POW, Block Reward:, (first 14400 blocks during ICO phase has 0.01 BCSH reward to mantain the blockchain working and test it safely. 144 BCSH will be produced within this ICO period and will be free distributed to users with giveaways and promotions!), 1-7 Block Days : 100 BCSH, 7 - 14 Block Days: 50 BCSH, 14 - 21 Block Days: 25 BCSH, 21 - 28 Block Days: 12,5 BCSH, 28 - 42 Block Days: 6 BCSH, 42 - 733 Block Days: 3 BCSH, After a total of 733 days of PoW mining BitCash will switch to pure PoS., Total coins to be mined in PoW mining 5 Millions, Block Time: 60 Seconds, TX Confirmation: 4 blocks, Annual interest: 5%, 120 minted block confirmations nicecoin_nic.png 2014-07 4 30 NIC Nicecoin 2014-07-23 100% 60 65000000 As the name signifies, it the nicest crypto currency to provide essential elements to grow the crypto ecosystem with its path breaking strategies., This is the first Coin to promote a new concept called “POHE (TM) - Proof Of Human Effort” instead of machine efforts. You get paid for using NiceCoin in transactions & services related to it., The Coin is a 100% POS and will be distributed very fairly using Marketing Bounty Programs and POHE. The users of Nice Coin will receive coins on engaging with the services using nicecoin, both free and paid services. That means more the engagement, more the earnings!!!,, The coins are called “NIC” and NICs for more than 1 coin. For Decimal places upto 8, we have given name as Nickels., Due to our team marketing efforts, the NiceCoin will open ample new business verticals for accepting crypto currency as an important mode of transacting. We will keep sharing some good news in coming days., Welcome to Nice Days!, Coin Specifications are as follows:, - Total coins: 6.5 Billion NIC, - Algorithm: Scrypt, - 100% PoS, - Symbol: NIC, - PoS interest 5%, - Minimum Coin Age: 1day, - 60 second block target, - 30 confirmations for blocks to mature!, - Retarget difficulty each block, - Coin Confirmations: 4, - NO IPO, - 100% Premine bitsiscoin_bss.png 2015-04 2 250 BSS Bitsis 2015-04-13 0.0018% 25 600 28000000 Abrv: BSs, Algo: Sha256d, Type: Proof Of Work, Coins Per Block: 25, Halving: 105,000 blocks roughly 2 years, Block Time: 10 minutes, Max Coins: 28,000,000, Coinbase Maturity: 250, Confirmations 2, TXFee = 0.01, Expceted Blocks Per Day: 144 Blocks, Expected coins per day: 3600 (BSs), Expected year mining ends: Year 2036, Premine: 0.0018% ('h', 105000, 'b') solocoin_solo.png 2015-08 9 SOLO Solocoin 2015-08-01 4% 120 closed source, revoked ico 312500 Specs SOLOCOIN -SCRYPT-POW Abbreviation -SOLO- Max Money 312500 Total coin POW 300000 Premine 4 % 0-10 Genesis Block BOUNTY,EXCHANGE,DEVELOPMENT,ETC difficulty Launch : 0.00131203 0-150000 1 Coins 150000-300000 0.5 Coins 300000-450000 0.25 Coins 450000-600000 0.125 Coins Block Time 120 Seconds maturity 9 blocks kushcoin_khc.png 2014-01 KHC KushCoin 2014-01-11 4200 420 42000000 Grow KushCoins Solo or start a Grow-Op!!, 4,200 coins per crop, 420 seconds to grow each crop, 420,000,000 total coins, SCRYPT growing. klingondarsek_ked.png 2013-08 KED Klingon Empire Darsek POW / POS and 3.5 coins per block. StarTrek-insipred coin. californiacoin_cac.png 2014-04 CAC CaliforniaCoin 10,000 coins per block and 16.8 billion total coins. node_node.png 2014-05 NODE NodeCoin 节点币 2014-04-30 100% 60 infrastructure, hybrid coin and payment system 600000000 Node.js cryptocurrency implementation exabytecoin_exb.png 1 block 2015-05 6 120 EXB ExaByte 2015-05-25 non 1 30 500000000 ExaByte is mineable with three algorithms and random rewards., Specifications: Proof of Work based. Mine using any of the 3 algorithms : sha256, scrypt or groestl., Default algorithm is sha256, Difficulty is retargeted every block. (4% up ,2 % down), 30 seconds, Max coin will be 500,000,000 coin ever, Random Coins per Block Based on Probability, 48% 5, 25% 10, 15% 25, 11% 50, 1% 100, After block 1,000,000 reward remand at 1 coin.~ ( 1 year ), Confirmations: Confirmations for mined blocks are 120, Transactions require 6 confirmations to become valid. random rewards norrencoin_xnc.png 2015-10 6 21 XNC Norrencoin 2015-10-20 600 10 60 closed source, no blockchain 1000000 The abbreviation is XNC, Proof of Work, 10 XNC of reward each block, Halving every 210000 blocks, Small premine of around 600 XNC, No blockchain, nobody can see your transactions, Coinbase maturity: 21 blocks, Number of confirmations: 6 ('h', 210000, 'b') scrolls_scq.png 1 block 2014-06 3 50 SCQ Scrolls 2014-06-02 1 42 7000000 Block Reward: 1 SCQ, Block Time: 42 sec, Difficulty Re-target: every block w/ KGW, Confirmations: 3, Mined Block Maturity: 50 blocks, Block Halving Rate: Approx. every 4 years, Total Coins: 7,000,000 SCQ. Scrolls is a new, rare, fast cryptocurrency that utilizes the popular Scrypt algorithm. Scrolls also includes Kimoto's Gravity Well (KGW) for efficient difficulty re-targeting after every block making multipool issues a thing of the past. The Scrolls Foundation, which will be implemented shortly, will rely only on donations to support the YAC, Young Archaeologist Club. skynetcoin_snet.png 1 block 2014-08 45 SNET Skynetcoin 2014-08-29 2000000 60 1000000 netcoin2_net2.png 2014-02 NET2 Netcoin2 re-target every 10 blocks, 50 coins per block, and 50 total coins. zeitcoin_zeit.png 1 block 2014-02 4 50 ZEIT Zeitcoin 2014-02-14 0 10000 30 99000000000 re-target every 1 block, 1,000,000 - 250,000 coins per block, and 99 billion total coins. Zeit Coin - ZeitCoin Abbreviation: ZEIT Algorithm: SCRYPT Date Founded: 2/14/2014 Total Coins: 99 Billion Confirm Per Transaction: 4 For TX and 50 For Confirms Re-Target Time: Every Block Block Time: 30 Seconds Block Reward: 10k Coins Diff Adjustment: Premine:. qcoincoin2_qtc.png 2015-10 QRT Qcoin 2015-10-06 91800000 100 64 102000000 Scrypt, POS/POW Hybrid, POW ~102,000,000 QTC, Premine: 91,800,000, 100 Qcoin per block, 1% per year, Block time? 64 seconds, Ports: 19784 for P2P, 19785 for RPC bitokcoin_bok.png 2014-12 BOK BITOK 2014-12-01 100% 60 RU-focused 100000000 Coin BITOK: This POS / POW coin. The number of coins already generated 100 million BITOK Growth of 1% per year growth will continue for about 50 years for the promotion of this coin was collected cash pool of $ 120,000 ue in October 2014, ie 2 months ago. One of the goals of the company - is to bring this to the forefront of a coin, for use in international money transfers, as well as to pay for online stores. silverbulletcoin_svb.png 2015-05 SVB SilverBullet 2015-05-14 0 60 569400 SilverBullet: algo X13, premine: none, ico: none, mining will stop after 13000 blocks. Block time 60s. PoS 6%, Minimum age 1 hour, maximum age 8 hours diraccoin_xdq.png 20 blocks 2014-05 XDQ Diraccoin 2014-05-19 180 multimining 2272800 ISO 4217 Trading Symbol : XDQ, Monetary Symbol :, Target block time : 180 seconds, Difficulty retarget : 20 blocks (every hour), Maximum Coins: 2,272,800, Starting block reward : 8, First reward reduction @ 43201 : 1.25, Second reward reduction @ 744001 : 0.75, Third reward reduction @ 1448001) : 0.5, Fourth reward reduction @ 2145601 : 0.25, Fifth reward reduction (inflation mode) @ 2846401 : 0.01, yolocoin_yolo.png 5 blocks 2014-05 YOLO Yolocoin (1000, 1) 120 7775000 a PoW cryptocurrency with a low inflation rate mooncoin_moon.png 8 hours 2013-12 MOON MoonCoin 2013-12-28 0 29531 90 384400000000 Moon Coin - MoonCoin Abbreviation: MOON Algorithm: SCRYPT Date Founded: 12/28/2013 Total Coins: 384.4 Billion Confirm Per Transaction: Re-Target Time: 8 Hours Block Time: 90 Seconds Block Reward: 29531 MOON Diff Adjustment: 8 Hours Premine:. inkcoin_ink.png 2014-03 INK INKcoin 11000000.0 SHAvite-3 based cryptocoin with re-target every 7 days, 50 coins per block, and 11 million total coins. samsaracoin_smsr.png 2015-06 SMSR Samsaracoin 2015-06-16 66% 75 60000000 Algo: Qubit, Ticker: SMSR, Total supply: 60 million, POW/POS, block time: 75 seconds , minimum stake age: 6 hours , maximum: unlimited , 40m coins will be sold at 150 sats each. POW ends 16100. POS will start on block 16000 so we have smooth transition from pow to pos instaminenuggetsacoin_minew.png 3 hours 2015-02 3 MINEW InstaMineNuggetsA 2015-02-27 750000 50 180 1500000 InstaMineNuggetsCLASSA $MINEW, Coin Type: Litecoin Scrypt, Halving: 7500 Blocks, Initial Coins Per Block: 50 Coins, Target Spacing: 3 Minutes, Target Timespan: 33 Hours, Coinbase Maturity: 3 Blocks, Pre-Mine: 50%=750000 Coins, Max Coinbase: 750000 + 750000=1500000 Coins. $MINE has launched 1.5 million ($MINEW) (CLASS A CRYPTO COIN CONTRACTS) on 02-27-15. $MINEW will be convertible into $MINE at 2.00 $USD per coin with a 2 year Expiration Date from purchase or can be bought/sold freely like $MINE once exchange listed. $MINEW is a seperate 1.5 million capped “Crypto Coin Contract” blockchain based cryptocurrency that will trade freely alongside $MINE once listed. $MINEW can be bought and sold with “no lockup period”. ('h', 7500, 'b') aeromecoin_am.png DGW 1 block 2014-12 AM AeroME 2014-12-08 100% 60 asset issuance instrument, ipo cancelled 12000000 AeroME (AM), X13, 60 second block time, PoW + PoS Hybrid, 12 Million coins Premine in first few blocks, 100% Premine Block 1 = Premine (12 million coins for ICO), Block 2 > 100,000 = 0 coin rewards, in-case PoW mining is required to keep the chain moving, 12 hours min coin stake age, Unlimited max stake age, DGW re-targeting starts at block 20 frenchgeniuscoin_genius.png 2015-02 20 GENIUS FrenchGenius 2015-02-24 60 physical coin backed 100000 Coin name: French Genius (Angel), Coin ticker: GENIUS, Max mint: 100 000, Virtual coin true value: 1:1 with one physical Genius gold coin, Transaction fee: 0.0001 GENIUS (around 2 cents), PoS ~0.00001, Algo: SCRYPT metalcoin_metal.png 1 block 2014-11 METAL Metalcoin 2014-11-27 1000000 90 91315000 Total supply: ~91315000 Metal (will be less, PoS will kick in on block 22501), Algorithm: x11 (POW), POW Blocks: 50000 (will be less, PoS will kick in on block 22501), Difficulty retarget: every block, PoS: 5% a year, PoS start: block 22501, Premine: 1000000 METAL, Block time: 90 seconds, Block 1: premine monetacoin_monet.png 10 blocks 2015-10 120 MONET Moneta 2015-10-20 84000000 1 30 184000000 NAME: MONETA, Short Name: [MONET], algorythm: Scrypt, coin supply: 184 000 000, ColdBlackBox safe: 84 000 000, coin mining available: 100 000 000, block generation time: 30 seconds, block max size: 8 Mb, block reward: 1 MONET, start difficulty: 0.00024414, change difficulty: 10 blocks, coinbase maturity: 120 blocks, market available: 80 000 in 1 day, start price: 0.00050001 BTC 1 kw/h in the world, FIRST 5 000 blocks - reward 10 MONET, <10 000 blocks - reward 5 MONET, >10 000 blocks - reward 1 MONET bitzenycoin_zny.png DGW 2014-11 ZNY BitZeny 2014-11-08 250 90 250000000 Symbol ZNY, Algorithm Yescrypt(GlobalBoost-Yとは少し異なるため注意), Max Coins 250,000,000(2億5000万), Premine 0, Block time 90 秒, Difficulty DarkGravityWave3, Block reward 250, Halving every 500,000 blocks ('h', 500000, 'b') snakecoin_snake.png 2014-03 SNAKE Snakecoin 2014-03-08 5% 120 100000000000 bananacoin_banc.png NGW 2014-05 5 BANC Bananacoin 2014-05-19 50000 90 50000000000 ('h', 1, 'm') ripple_xrp.png 2013-05 XRP Ripple 2013-05-01 100000000000 Protocol, PoS winecoin_wnc.png 2014-07 WNC Winecoin 2014-07-26 10000 60 1000000 Block time: 1 minute Difficulty Adjustment: Each block Standard Dividends: 1% / year The minimum transaction fee: 0.0001WNC Transaction fees paid to miners Trade confirmations: 10, mature confirm: 500 Coin maturity: 8 hours p2p port: 9411,, pc port: 9412 Algorithm: scrypt Block Awards: 10000WNC, no halving POW phase 10000 blocks, after that switching to pure POS. Hybrid POW/POS WineCoin utilizes a hybrid POW / POS system. This allows for a fair distribution during the POW phase, and then switched to an energy efficient POS system. fullintegritycoin_fic.png 2014-10 FIC FullIntegritycoin 2014-10-28 1000 60 product range support 4204800000 Name: FULL INTEGRITY COIN, Abr. : FIC, Coins per Block: 1000, Block per day: 1440, Total coin in 1 day: 1440 * 1000 = 1,440,000 Coins, Total Block in 4 years= 2,102,400, Total Coins to generate: 4,204,800,000 echocoin_echo.png 1 block 2014-10 360 ECHO EchoCoin 2014-10-18 0.4% 40 60 relaunch 5000000 ('h', 43000, 'b') yovivirtualcoin_yovi.png 2015-07 YOVI Yovivirtualcoin 2015-07-13 100% 150 22830000 Ticker: YOVI, Algorithm: SHA256, Est 120-180 Seconds Per Block, Est Money Supply: 22.83m , POS: 5% / year wampumcoin_wam.png 100 blocks 2014-03 WAM Wampumcoin 2014-03-22 60 25000000000 scrypt ('r') starcoin_stc.png 2015-09 STC StarCoin 2015-09-12 110 60 2420000 Name: StarCoin, Short name: STC ★, Algorithm: X11, Total Coins: 2,420,000, Block Halving: 11000, Block Reward : 110 coins ('h', 11000, 'b') multigateway_mgw.png 2014-08 MGW NxtMultigateway 2014-08-20 asset:4551058913252105307 1000000 Multigateway (MGW) is a third party service developed on top of the NXT network that allows you to move cryptocurrencies in and out of the NXT Asset Exchange, the peer-to-peer exchange that offers decentralized trading with no trading fees. petrodollar_xpd.png 2014-03 XPD PetroDollar 1220000000 re-target using Kimoto's Gravity Well, block range coin rewards, and 1.2 billion total coins. 5 minute transaction time 288 blocks per day 120 blocks to mature Blocks 105,120 per year SHA-256D Kimoto's Gravity Well nanitecoin_xnan.png 2014-11 XNAN Nanite 2014-11-06 1% 60 1000000 Algorithm: X11, Block Time: 1 Minute, Block Reward;, 1 - 500 = 400 XNAN, 500 - 1500 = 350 XNAN, 1500 - 3000 = 150 XNAN, 3000 - 4001 = 225 XNAN, PoW Ends: Block 4001, PoS Starts: Block 3800, PoS Interest: 200%, Pre-mine: 1% (10,000 XNAN) craigscoin_craig.png 2014-09 CRAIG Craigscoin 2014-09-12 30 30000000 Name: CraigsCoin (CRAIG), 100% POS, Presale coins: 30000000 CRAIG, POS annual interest: 2%, Block time: 30 sec, The main idea of CraigsCoin is to provide the world with trustless, decentralized classified ads listing. Since everything is stored in the blockchain no entity will be able to delete or somehow edit an ad once it is posted., All 30000000 CraigsCoins are going to be sold via Bittrex ICO. Coin price will be determined at the end of the ICO (if there will be 10 BTC raised during the ICO it will mean that one CRAIG is worth 0.00000033 BTC) jesuscoin_god.png 2014-01 GOD Jesuscoin ferretcoin_fec.png 960 blocks / maximum difficulty retarget 123/55 (~+123%) and 55/123 (~-55%) respectively 2013-07 FEC Ferretcoin 2013-07-01 0 23 90 123000000 Scrypt-based cryptocoin. Yacoin clone. Ferret Coin - FerretCoin Abbreviation: FEC Algorithm: SCRYPT Date Founded: 7/1/2103 Total Coins: 123 Million Confirm Per Transaction: Re-Target Time: 960 Blocks Block Time: 1.5 Minutes Block Reward: 23 Coins Diff Adjustment: maximum difficulty retarget 123/55 (~+123%) and 55/123 (~-55%) respectively Premine:. optioncoin_option.png 2 blocks 2015-06 37 OPTION Optioncoin 2015-06-03 21000000 1 60 21001000 FREE DISTRIBUTION - 1,000,000 COINS (or 5%) will be given away to supporters of the coin for free!!!, Coin structure:, Block 1 - 20,000,000 ICO, Block 2 - 1,000,000 for free distribution, Block 2 - 1000 - 1 just to get the block chain working.OPTION is a PoS-based cryptocurrency., SLING is dependent upon libsecp256k1., Total POW: Blocks POW Reward: 137 SLING per Block POS Reward: 1.337 SLING Block Spacing: 60 Seconds Diff Retarget: 2 Blocks Maturity: 37 Blocks Stake Minimum Age: 8 Hours, Port: 30137 RPC Port: 30138 ircoin_irc.png 2014-04 IRC IRCoin celebration 55000000000 PoW Algorithm: Scrypt Difficulty Re-target : KGW Block Time: 60 seconds Initial Block Reward: 65000+ Coins per block Max Supply: 55.000.000.000 Coins bitcrystalcoin_btcry.png 90 days 2014-10 BTCRY BitCrystal 2014-10-31 500000000 15 9999999999999 Minimum Reward: 0.00000000000025 BTCRY, You can solo mining this coin and you can using this Coin in minecraft with my plugin!, You can use this plugin for all Cryptocoins and its integrated with shopsystem. , The configs should be self explained., Improved Design,Changed Retarget time and changed icons., 1 block is easy to get due to the weak level of difficulty., blocks halving after every 80640 blocks (14 days), The reward is halving every 14 days you get after 1 month the reward (500000000) divided through 4, after 2 month reward (500000000) divided through 8 and so on., When you reach the minimum of 25 coins then divided the minimum value of 25 coins / 100000000000000 so that the maximum minimum reward is 0.00000000000025 BTCRY pro Block. ('h', 14, 'd') bottlecaps_cap.png 2013-06 CAP Bottlecaps 2013-06-24 premined 47400000 re-target every 4 hours, POW / POS, 10 coins per block, and 47.4 million total coins. NVC clone, with a 0.25 diff start and quick 9 years inflation Abbreviation: CAP Algorithm: SCRYPT Date Founded: 6/1/2013 Total Coins: 48 Million Confirm Per Transaction: 5 Re-Target Time: 4 Hours Block Time: 1 Minute Block Reward: 10 Coins Diff Adjustment Starts at 0.25 with a 4hr Difficulty target time. Novacoin-based. 1% NVCS, SA 30/90, Coins 2.26 chikun_kun.png 2014-03 KUN Chikun 2014-03-10 10000000.0 goddesscoin_godd.png 2014-07 60 GODD Goddesscoin 2014-07-27 1.5% 153 60 3200000 Exchange, C-CEX Soon, Bittrex Soon, Poloniex Soon, Social networking sites, Website:Announced later, Twitter:, Email:GoddessCoins@gmail.com, Download, Wallet password:&#%@RFSD, Windows:, Mac:Reward 500 GODD, Linux:Reward 500 GODD, GitHub:, Specification, Total: 3.2 million, Coin Name: Goddess Coin [GODD], PoW Algorithm: X13, PoW + PoS hybrid, Confirmations for blocks to mature: 60, PoW Total Blocks: 10800 PoW blocks [7 days mining], 60 second block time, PoW block reward: 153 GODD, 60 confirmations for block to mature, POS starts at 10080 block, PoS Interest: 9%, Pre-mining:1.5% [high-quality Exchange and reward ], It is the first block, No IPO, RPC:19611, p2p:19612, Block Explorer, Awards 2000 GODD, Mining pool, We will select some to join the pool of high-quality,,, Faucet, Awards 2000 GODD, Paper Wallet, Later released, Reward, Translate: 100 GODD, propaganda: 100 GODD, Game: 1000 GODD, Business: 1000 GODD, For more information about GoddessCoin will be announced in the future walletcoin_wtc.png 5 minutes 2015-11 3 5 WTC WalletCoin 2015-11-24 0 10000 300 closed source 420000 Algorithm: Scrypt, Coin Name: WalletCoin, Coin Abbreviation: WTC, Address letter: W, RPC Port: 5354, P2P Port: 5353, Block reward: 10000 coins, Block halving: 210000 blocks, Total coin supply: 420000 coins, Coinbase maturity: 5 blocks, Number of confirmations: 3 blocks, Target timespan in minutes: 5 minutes, Target spacing in minutes: 5 minutes ('h', 210000, 'b') blitzcoin_bltz.png 2014-03 BLTZ BlitzCoin 1% 100000000 Retarget: Kimoto Gravity Well Block Time: 60 seconds Transaction: 3 confirmations Maturity: 30 confirmations Block Reward: 10,000 BLTZ Max Supply: 100,000,000 BLTZ Premine: 1% (1,000,000 BLTZ for IPO Hybrid) tattoocoin_ink.png 2014-05 INK Tattoocoin 50 21000000 50 coins per block and 21 million total coins. smileycoin_smly.png 5 days 2014-11 SMLY Smileycoin 2014-11-03 25000000000 10000 180 premined 50000000000 Initial block reward: 10000 SMLY., The coin is based on litecoin source. The source code is freely available., A total of 50*10^9 (50 billion) coins will eventually be generated, half during the pre-mine phase., Difficulty will be adjusted approximately every 5 days so as to obtain a new block every 3 minutes on average. This difficulty schedule aims to reduce the non-premined coins by 50% over 7 years. thirdgenerationcoin_tgc.png 2014-06 TGC ThirdGenerationCoin 2014-05-07 vardiff 180 21000000 We define the cryto-currency with 3 stage: The first stage is the POW(proof of work). Bitcoin The second stage is the POS (proof of stake). PPC The third stage is POB(proof of burn) reference (Slimcoin Whitepaper PDF),thanks for their great work.But we find some drawback with distribution model of slimcoin.So we try to improve this model.Make it more profit for the holders. 1. We reduce the amount to 21 Million 2. Make the mint confirm faster and less block confirm. 3. Improve the distribution model. 4. Make burn Coins more profit. Proof-of-work is used as a mean for generating the initial money supply.But if someone doesn’t have any rig, can not take part in the distribution. Specifications Uses Dcrypt algorithm, an algorithm made to be difficult to implement on an ASIC. Tri-Hybrid Blocks: Proof-of-Burn blocks, Proof-of-Stake blocks, Proof-of-Work blocks Block time is 3 minutes (180 seconds) Difficulty re-targets continuously Block Rewards: Proof-of-Burn blocks: max 200 coins Proof-of-Work blocks: max 20 coins Pos:7 days Block rewards decrease in value as the difficulty increases Total 21 million coins emoneycoin_ecash.png 2015-02 120 ECASH EMoney 2015-02-24 60 5000000 Algorithm - SHA256, Premine 1% Of Total PoW Blocks (40k Coins, approx 1% due to POS) Escrow Needed, POS Maturity - 24 hours, Block Target - 1 minute, 120 Blocks for Maturity nukecoin_nuke.png 2015-07 NUKE Nukecoin 2015-07-07 100% 30 2778196 Ticker: NUKE, Full-POS No-Mining, 50% yearly interest, minimum stake age: 1 hour, max supply 6,031,900, 100% ICO: Duration 3 days breakcoin_bre.png 2015-08 BRE Breakcoin 2015-08-24 100% 60 760000 BREAK COIN [BRE] | 760.000 coins | 50% POS | MASTERNODE | Only POS, Only 760, 000 COINS, 50% POS, 10,000 per masternode, Min. Staking age : 8 hours, Bonus blocks 5, 10 , 25, 100, 500, You need 10,000 to make any chance to get bonus blocks peppsycoin_psy.png 9.5 blocks 2015-10 PSY Peppsycoin 2015-10-17 9.5 30 block 0 at 2015-05-01 159595959 NAME: Peppsycoin, ALGO: Scrypt, SUFIX: PSY, Coin Target: 30s, Block reward: 95 PSY, Block Halfing: 788400 blocks, 273.75 days, Retarget: 9.5 blocks, Total: 159595959 ('h', 788400, 'b') neocortexcoin_neoc.png 2014-07 NEOC NeoCortexCcoin 2014-07-28 20000 NEOC 90 4893200 NeoCortex - Coins for the Mind, Introducing the latest mining algorithm - NeoScrypt - with thanks to GhostLander for the creation of the algorithm., Algo: NeoScrypt (profile 0x3), Ticker Code: NEOC, Coin Name: neocortex (neocortexd and neocortex.exe), Config Location: neocortex.conf, Block Rewards, 10K NEOC Premine (Originally For Bounties - see below) in Block 1 - But Unintentionally discarded due to blockchain fork., Blocks 2 - 99 - 2 NEOC, Blocks 100 - 499 - 20 NEOC, Blocks 500 - 20000 - 250 NEOC, Blocks 20000+ - 50 NEOC + Transaction Fees, The shorter plan for the block reward system, is to allow for an easy inclusion of PoS at some point after block 20,000 if the coin's value is sufficient to warrant it., Use the following nodes to help connect if you are having issues: (add to bottom of neocortex.conf and remove your peers.dat file), Code:, rpcport=25555, rpcuser=rpcuser, rpcpassword=rpcpass, addnode=107.170.90.55:36454, addnode=37.59.21.199:29030, addnode=107.170.165.93:58468, addnode=54.191.27.187:34347, Difficulty Algorithm, The same difficulty algorithm as in Monocle has been used for this coin., Mining, As this coin NeoScrypt - no current GPU miner will work for this coin. The CPU miner from GhostLander is also untested with this coin's exact workings. Bounties (see below) will be provided., Github Source: -, Windows Wallet: -, MAC Wallet: -, CPU Miner,, Code:, solo mine using -a altscrypt -o -u rpcuser -p rpcpass, GPU Miners - Required - See Bounty Section Below, Pool Python-Hash - Required - See Bounty Section Below, Pool - Required - See Bounty Section Below, NEOC to BTC on “FobCoin” -, Windows Wallet - 'ascii' - 1000 NEOC, MAC Wallet - Donations to 'w8' - 9i4QFZe4tvyFTcs1xna6Cky7YYk2SK1NcA, Working Tested CPU Miner, Working Tested GPU Miner AMD - 1000 NEOC Bounty, Working Tested GPU Miner nVidia - 1000 NEOC Bounty, Working Tested Rental Rigs Site - 500 NEOC Bounty, Python Hash For Pools - 500 NEOC Bounty, First 5 Pools - 500 NEOC Bounty nebulacoin_neb.png 1 block 2014-06 10 50 NEB Nebulacoin 2014-06-21 20000 60 1450000 Algorithm: X11 POW/POS Short: NEB, Total coin: ~1,450,000 with PoW, Block reward: Random Super Block, Reward Blocks, 50 NEB Up to 200 (Low Reward while mining pools gets setup and users discover Nebulacoin to create a fair launch), 100 NEB Up to 1000 , 300 NEB Up to 2500 , 150 NEB Up to 6000, PoS starts on block 6500, PoW last: 6500 ~approx 5 days, PoS generate after block 6400, Block time: 60 seconds, PoS Min age: 8 hours, PoS Max age: Unlimited, 20K NEB Premine for Bounties/Dev, Difficulty Readjusts every block, Confirmations on Transactions: 10, CoinBase Maturity: 50, Stake interest: 3% per year ratcoin_ratc.png 2014-05 RATC Ratcoin 大鼠币 jewelcoin_jewel.png 4 hours 2015-02 8 JEWEL JewelCoin 2015-02-19 1% 1 60 350000 Name JewelCoin, Coin Abbrevation JEWEL, POW/POS, POS 7%, Hash type x13 no ASIC, Premine 1%, Coins per block 1, Target spacing 1 min, Halving 50000 blocks, Target timespan 4 h, Coinbase maturity 8 blocks, Max coinbase 350.000 ('h', 50000, 'b') noocoin_noo.png 2014-12 NOO Noocoin 2014-12-12 100% 60 25000000 100% PoS, 3% annual interest, 4 hour stake. 100% presale credits_crd.png 60 minutes 2014-01 6 70 CRD Credits 2014-01-19 3.5% 33.7 30 337337337 re-target every 1 hour, 33.7 coins per block, and 337.3 million total coins. dvorakoin_dvk.png 2014-05 DVK Dvorakoin speccoin_spec.png 4 blocks 2015-06 44 SPEC Spec 2015-06-28 30% 1250 300 3000000000 SPEC is a lite version of Bitcoin using scrypt as a proof-of-work algorithm., Coin Name: SPEC, Algorithm: Scrypt, Coin Abbreviation: SPEC, Block Time: 5 minutes, Block Reward: 1250 SPEC, Retarget: 4 Blocks, Minted Coin Maturity: 44 blocks, Block Halving: 840000 blocks (~8 years), Total Coin Supply: 3,000,000,000 coins, ('h', 840000, 'b') liteshares_lts.png 2014-04 LTS Liteshares fuelcoin2_fc2.png 2014-04 FC2 FuelCoin 2014-04-29 50000000 60 premined, revamp of fuelcoin 100000000 NEW FuelCoin - FC2 Re-Distribution x11/pos 2% interest 2014-04-30, 03:15:32, Fully Mined for re-distribution, Ticker: FC2, Total Coins: 100 Million, Algo: x11, Interest Stake: 2% a3coin_a3c.png 2014-04 4 60 A3C A3 coin 1.5% 60 110000000 Pure POS. Block Rewards: 1st year nominal stake interest : 50% 2nd year nominal stake interest : 30% 3rd year nominal stake interest : 20% 4th year nominal stake interest : 15% 5-10 year nominal stake interest : 5% pa 10 years onward nominal stake interest: 3% pa guldencoin_nlg.png 2014-04 NLG Guldencoin 10% premined 1700000000 Algorithm: scrypt PoW KGW Transaction confirmations: 6 Total coins: 1700M Premine: 170M (+/- 10%) Starting diff: 0.00244 Block reward: 1000 NLG Block time: 150 seconds Retarget: 1440 blocks Reward halving interval: 840.000 blocks siacoin_sia.png 2015-05 SIA Siacoin 2015-05-14 600 storage-contract -1 Anybody with siacoins can rent storage from hosts on Sia. This is accomplish via ”smart” storage contracts stored on the Sia blockchain. The smart contract provides a payment to the host only after the host has kept the file for a predetermined amount of time. If the host loses the file, the host does not get paid. prismacoin_prisma.png 2014-07 PRISMA Prismacoin 2014-07-21 1.5% 60 6000000 COUNTDOWN TIMER, PrismaCoin was created by three people (web developer, designer and c++ programmer)., We are now just a small group but we have lot of projects and plans for the future so stay here and keep updated., With a short POW we give less time for the miners to dump their coins during the POW phase. The 10% interest on POS means more incentive for you to hold your coins. With this we can avoid a big dump., You will able to buy Web-designs, Logos, Banners, Pictures, directly from the owners with PrismaCoin., And you will able to hire web developers and designers with PrismaCoin too., We are now negotiating with some online stores in Europe about accepting PrismaCoin as payment method to buy electronic devices such as Personal Computer, Hardware, Television, Mobile Phone., PrismaCoin will be accepted by this store in at the end of August., 2014.07.28, PrismaCoin has good news! , From August 17th a newly created company based in Hungary will accept PrismaCoin., The Company's main profile is ASIC miners. We spent a lot of time with the owners and finally they will make business with us., They will open an Online Store on August 17th., You will able to pay with Bitcoin or with PrismaCoin!, If you decide to pay in Prisma we have a good news for you! Because the prices in PRISMA will be lower than in BTC. , They will announce a giveaway where somebody can win 2 Antminer S3s or 1 Dragon T1. We don't have more information about it. If we will have any information from the webshop we will announce it!, You will able to buy ASIC miners:, Antminer S3, Rockminer 32Gh/s, KNC Jupiter, TerraMiner II, Dragon T1, Butterfly Labs Jalapeno, And many others!, algorithm: x13, supply: 6.000.000 (6million) PRISMAcoin, reward: 1000 PRISMAcoin, blocktime: 60s, last POW block: 6011, min height: 3 hours, max height: 10 days, stake: 10%, Premine: 1.5% (90000) For development, giveaway, bounty, Blocks:, 1: premine, 1-100: 1 PRISMA, 101-6011: 1000 PRISMA internetcoin_ 2014-03 WWW Internetcoin re-target every 60 blocks, 50 coins per block, and 500 million total coins. naanayamcoin_nym.png 1 block linear 2013-11 100 NYM NaanaYaM 2013-11-07 0 1 10 -1 re-target every 10 seconds, POW / POS, 1 coins per block, and 100 trillion total coins. Just a PPC clone. Naanayam - Naanayam Abbreviation: NYM Algorithm: SHA-256 Date Founded: 11/7/2013 Total Coins: Unlimited Confirm Per Transaction: 100 For mint Re-Target Time: linear (per-block) Block Time: 10 seconds Block Reward: 1 Coin Diff Adjustment: linear (per-block) Premine:. Novacoin-based. vaultcoin_vlt.png 2014-04 VLT Vaultcoin 66000000.0 stepscoin_steps.png 2015-09 STEPS Steps 2015-09-07 1000000 60 25000000 Ticker: STEPS, ICO/FREE GIVEAWAY/POS, Initial Supply: 16 MILLION, 100% Proof of Stake, POS: See Reward System Below, Minimum Staking Age: 24 hours, Total Supply: ~25 MILLION, THE ICO WILL RUN IN 3 STAGES BY A TRUSTED ESCROW SERVICE FOLLOWED BY POS AND LISTINGS ON MAJOR EXCHANGES pkrcoin_pkr.png 2014-03 PKR PKRCoin 69000000.0 potatocoin_spud.png 2014-01 SPUD Potatocoin 2014-01-31 500 30 premined 625000000 Algorithm: Scrypt, Block Reward: 500 (decreasing by 0.0002 per block), Block Time: 30 seconds, Maximum Coins: 625 Million ('r', 0.0002, 'b') nofiatcoin_xnf.png 2014-01 XNF NoFiatcoin kimcoin_kmc.png 2014-04 KMC Kimcoin 2014-04-01 Multi-hash algo based cryptocoin with re-target every 500 blocks, CPU only mining, 100 coins per block, and 210 million total coins. fantomcoin_fcn.png 2014-05 FCN Fantomcoin 2014-05-06 60 -1 Fantomcoin is the first CryptoNote currency to support merged mining of different CryptoNote-based coins, allowing to receive not only the FCNs but also any other CryptoNote-based currency without extra mining effort. As a result, the cryptographic security of the coins is increased. This also allows fair resource distribution and stabilizes the cryptocurrency market through diversification. siriuscoin_ssc.png 2014-01 SSC Siriuscoin re-target every 4 blocks, 25 coins per block, and 10 billion total coins. teslax3coin_tesla.png 2014-03 TESLA Teslax3coin 2014-03-20 33 120 17500000 ('h', 'y') vaporcoin_vprc.png 30 min 2015-06 150 VPRC VaporCoin 2015-06-02 60 1000000 ALGO: SHA256, P2P port: 56631, RPC port: 57631, POW: 4000 blocks, POW rewards: , Block 1-1000 = 843 coins/block, Block 1001-2000 = 621 coins/block, Block 2001-3000 = 437 coins/block, Block 3001-4000 = 271 coins/block, POS: DPOS, Block 1-30000 = 14 coins/block, Block 30001-10000 = 7 coins/block, Block > 100001 = 3.5 coins/block elektroncoin_ekn.png 2015-04 5 30 EKN Elektron 2015-04-10 2000000 60 6000000 Currency name: ELEKTRON, Currency symbol: EKN, Total coins after PoW: 3 Millions circa , Coins available to mine: 1 Million in 15 Days after ICO ends, Coins available for pre-sale: 2 Millions , Confirmations for block maturity: 30, Block time: 60 seconds, Confirmations for transactions: 5 (really fast tx), Annual stake interest rate: 1,5%, RPC port: 23123, P2P port: 23121, Minimum stake age: 12 hours, No max stake age., Auto checkpoint Masternode system., Block reward scheme:, 1st Block: 2 Millions premine for pre-sale, 2nd to 149th Block: 0 block reward for testing and premine move to escrow, 150th to 1560th Block: 150 EKN per block, 1561th to 5881th Block: 30 EKN per block, 5882th to 21880th Block: 50 EKN per block, After block 21880 Elektron network will switch to full PoS. , No more PoW blocks will be accepted after block 21880., A dedicated EKN multipool will be released to coincide with the end of PoW phase. unattainiumv2coin_unat.png 1 block 2015-03 8 UNAT UnattainiumV2 2015-03-24 0.25 30 multi-algo, reconfig 1000000 Algo: SHA-256d, SCRYPT, SKEIN, QUBIT, GROESTL, Reward: 0.25 UNAT, Spacing: 30 seconds across all algorithms, Retarget: Every block, Diff algo: Digishield multi-algo fiftyshadesofcoin_fifty.png 5 hours 2015-02 5 FIFTY FiftyShadesofCoin 2015-02-27 2% 500 60 ignored 50000000 Coin properties Coin typeBitcoin (SHA256) Halving 50000 blocks Initial coins per block 500 coins Target spacing 1 min Target timespan 5 h Coinbase maturity 5 blocks Max coinbase 50.000.000 coins ('h', 50000, 'b') mycoin_myc.png 2014-03 MYC Mycoin 2014-03-29 100% 15 premined 880000000 darkcorgicoin_dcorg.png 20 minutes 2015-09 5 10 DCORG DarkCorgi 2015-09-12 0 10000 120 200000000 ƊαякƇσяgι, Ƈσιη Aввяєνιαтιση: ƊƇORƓ, Aℓgσяιтнм: Scяуρт, Ɓℓσcк Ƭιмє: 2 мιηυтєѕ, Ɗ郃ιcυℓту Rєтαяgєт: 20 мιηυтєѕ, Ɓℓσcк яєωαя∂: 10,000 cσιηѕ, Ɓℓσcк нαℓνιηg: 10,000 вℓσcкѕ, Ƭσтαℓ cσιη ѕυρρℓу: 200,000,000 cσιηѕ (200 Mιℓℓιση), Mιηιηg Mαтυяιту: 10 cσηƒιямαтισηѕ, Ƭχ Mαтυяιту: 5 Ƈσηƒιямαтισηѕ, RƤƇ Ƥσят: 4790, Ƥ2Ƥ Ƥσят: 4789 ('h', 10000, 'b') fbcoin_xfb.png 2014-06 XFB FBcoin 2014-06-06 15 100000000 Algorithm: Scrypt, Blockchain Security: POW/POS, Total Coins: 100Million, Block Reward: 100XFB, Block Time: 15 Seconds, Interest: 10% Annually, Min stake age: 3days, Max age: 100days quazarcoin_qcn.png 2014-05 60 QCN QuazarCoin 2014-05-08 120 18446744 Quazarcoin is the CryptoNote currency launched as a result of community discussions. It has a flatter emission curve and a clear launch for the wider community. QCN's developers focus on usability aspects of the currency. Its main contribution is the popularization of CryptoNote. longcoin_lng.png 2014-07 LNG LongCoin 2014-08-03 5% 60 2000000 Launch date: August 3rd, 2014 9:00 UTC, News, Aug 3rd, 2014 - Launch delay, There is a delay with the launch. We need to setup the pool and block explorer. The launch has been rescheduled to 23:00 UTC (12 hours delay), July 26th, 2014 - Presale started, You can now buy Longcoins before the launch date:, Wallets, Windows Wallet: , Source: , longcoin.conf, Code:, rpcuser=youruser, rpcpassword=yourpassword, listen=1, daemon=1, server=1, rpcallowip=127.0.0.1, addnode=66.45.238.253, Specifications, Symbol: LNG, Algo: X11 PoW, Block time: 60 sec, Difficulty retarget: every block, Total coins: 2,000,000, Premine: 100,000 - 5% of total supply (80,000 presale, 20,000 bounties, faucets, etc), Website,, Block rewards, Block 0 - 0 : 0 LNG, Block 1 - 1 : 100,000 LNG (presale, promotion), Blocks 2 - 360 : 0 LNG (6 hours), Blocks 361 - 43200 : 0.5 LNG (1 month), Blocks 43201 - 525600 : 0.25 LNG (1 year), Blocks 525601 - 2628000 : 0.125 LNG (5 years), Blocks 2628001 - 24763840: 0.0625 LNG (47 years), Bounties, First exchange - 5000 LNG, Dice Game - 1000 LNG, Faucet - 1000 LNG, New logo - 1000 LNG unlock.mk, Block explorer,, Pools, 1% fee, Exchanges, TBA mandarincoin_mac.png 2014-04 MAC Mandarincoin russiacoin_rc.png 2014-06 RC Russiacoin 2014-06-07 144000000 songcoin_sng.png 2014-09 120 SNG SongCoin 2014-09-30 2% 100 120 210240000 Songcoin - Altcoin Designed for Investing In Music, Algorithm : Scrypt - Litecoin Descendant, Number of Coin : 200 Million, Premine : 2%, Difficulty : 3.5 days, Coins on each block : 100 coins, Time between blocks : 2 min, blocks per day : 720 blocks, Total blocks in 4 years : 1.051.200 blocks, Total all coins : 210.240.000 coins, Maturity Coinbase : 100+20 fistbumpcoin_fist.png 2015-06 FIST FistBumpcoin 2015-06-01 60 67000000 FistBump : Algo X13 -PeerCoinFork, Proof of Work 24 Hours - 1440 Block, Proof of Stake; Min Age 60 Minutes, Proof of Stake Reward; 20%, Increasing Block Reward:, 0 - 480 each Block 20000[FIST] ~ 8 Hours, 481 - 960 each Block 40000[FIST] ~ 8 Hours, 961 - 1440 each Block 80000[FIST] ~ 8 Hours, After Block 1440 Full Proof of Stake, PoW Phase : 67 Million [FIST] guerillacoin_gue.png 2014-06 510 GUE Guerillacoin 2014-06-18 0 750 60 fast staking 15000000 Algo: X11 PoW/PoS, Total coins: 15 000 000, Block time: 1 minute, Block reward: 750, Block maturity: 510, POW Blocks: 10 000, FAST STAKE INTEREST after 8 hours but no more than 365 days., NO Premine cooperationandsolidaritycoin_cas.png 2014-12 CAS CooperationAndSolidarityCoin 2014-12-17 0 496 60 virused 4500000 Total coins : 4.5 million, PoW timespan : 7 days, Algo : x13, Anon : PoSA in beta now., Pre-mine : none, IPO : none, PoW block reward: 496, Mining will stop after 10080 blocks (7 days), Block time: 60 seconds, PoS 6%, Minimum age 1 hour, Max age 8 hours masterdogecoin_mdoge.png 2015-06 MDOGE Masterdoge 2015-06-28 1% 60 100000000 Algorithm: Scrypt, Ticker: MDOGE, Supply: 100 Million, PoW blocks: 20050, Reward: 5000 MDOGE, Block Target: 60 sec, PoS: 10%, Stake Min age: 6 hrs, Masternode Cost: 20000 MDOGE, Masternode reward: 1000 MDOGE + 5% of PoS, Premine: 1 Million (check Bounty list below) whistlecoin_wstl.png 2014-10 10 70 WSTL Whistlecoin 2014-10-07 0.5% 4728 60 20000000 Algo X11, POW/POS, POS interest 2%, Block 1 - premine 0.5%, Block Rewards 2- 7199 - 2764, Block Rewards 7200 - 4728, POS Starts at Block 6000, 60 Second Blocks, Confirms 70 for mined, Confirms 10 for Transactions, 20 Mil Coins in POW, Min Stake Age 2hrs, Max Stake Age 8hrs einsteinium_emc2.png 2014-03 EMC2 Einsteinium 300000000.0 re-target using Kimoto's Gravity Well, 1024 coins per block, and 299.7 million total coins. sphere_sph.png 2013-07 SPH Sphere 2013-07-23 auto re-target and 20 coins per block. whirlcoin_whrl.png 2014-07 WHRL Whirlcoin 2014-07-21 0 45 168000000 Whirlpool Algorithm, WhirlCoin uses the Whirlpool algorithm, and is the first to do so. This algorithm utilises the Merkle–Damgård construction based on AES, created by one of AES' creators .It has remained open cource and without patents since its release., The Wirlpool algorithm is fast and efficient. It has an average speed of 57 Mib/s, almost as fast as SHA-256 and much quicker than most algorithms used by cryptocurrencies. Whirlpool is also energy efficient, allowing both miners and clients to work to a higher capacity., WhirlCoin implements SHA-256 alongside Whirlpool for added security; Each hash has one round of Whirlpool and one of SHA-256., Specification:, Algorithm: WhirlPool, Money Supply: 168,000,000, Block Target: 45 Seconds, Rewards: WhirlCoin's rewards follow -sqrt(0.0026x)+100, a negative square root function starting at 100 and ending at 10, 4 years later., Why does WhirlCoin not have Pos?, PoS is a dying concept. It allows those the rich to easily amass more wealth, and it inspires people to dump the coin after the PoW phase., This view is becoming more and more popular, and a more in-depth look at the issue is avaliable here: Below is a plot of the first million or so block rewards for WhirlCoin. yinyangcoin_yinyang.png 2014-02 YINYANG YinYangCoin 36000000 36,000,000 Total Coin Supply Max 1,000,000 available for .0001 each week of competition scrypt - 0 block reward, 0 mining fees foxcoin_fox.png 60 seconds 2014-02 2 FOX FoxCoin 2014-01-17 0 250 60 premined 1000000000 Fox Coin - FoxCoin Abbreviation: FOX Algorithm: SCRYPT Date Founded: 1/17/2014 Total Coins: 1 Billion Confirm Per Transaction: 2 Confirms per TX Re-Target Time: 60 Seconds Block Time: 1 Minute Block Reward: 250 FOX decreases by 0.000125 FOX every block Diff Adjustment: Premine: 25k Coins. ('ra', 0.000125, 'b') nightmarecoin_nm.png 2014-08 NM Nightmarecoin 2014-08-15 2% 60 7140000 Nightmare 'NM', POW/POS, Algorithm - FRESH (Shavite, Simd, Echo, in 5 rounds of hashing), Total Coins - ~7.14 Million NM, 60 second block time, 11% POS Interest for the first 1 year then drops to 5.5%, 1% developer premine and 1% premine for bounties, NightSend addresses start with a lowercase 'n', Nightmare addresses start with a capital 'N', , BLOCK REWARDS, Block 1 is 1% NM developer premine & 1% premine for bounties, Block 2-160 are 0 NM for fair start, Block 161-14000 are 500 NM (~10 days POW), POS with 11% annual interest and then 5.5% annual interest after 1 year. faucetcoin_fec.png 2014-04 FEC Faucetcoin 100000000 Transaction fees are 0,010 Faucets: Available at launch Faucets limit: about 180 000 coins a day total 100 000 000 coins total Algo: scrypt Block reward: 1000 FEC Block time: 60 seconds Retarget: every 10 minutes Transaction: 3 confirmations RPC port 9939 P2P port 9949 globecoin_glb.png 9 blocks 2013-12 GLB Globe 2013-12-06 0 ((0, 2months, 10), 5) 60 a re-target every 9 blocks and 10 coins per block. Globe Coin - GlobeCoin Abbreviation: GLB Algorithm: SCRYPT Date Founded: 12/6/2013 Total Coins: Confirm Per Transaction: Re-Target Time: Block Time: 1 Minute Block Reward: 10 coins per block for first two months, 5 coins thereafter + 4% increase in block size annually -adjusted every day (every 1440 blocks) 20% tax on coins Diff Adjustment: 9 Blocks Premine:. ('ip', 4, 'y', 'd', 1, 'd', 1440, 'b', ('tax', 20)) coolingcoin_clc.png DGW 2014-07 10 CLC Coolingcoin 2014-07-10 0 1000 60 10000000 Coolingcoin coin has been created with the Algo NIST5 which uses, GRoSL, JH, BLAKE, KECCAK & SKEIN. These five Algos together make the best cooling rate for miners and is also energy efficiant.. Total coin: 10 Million Block time: 60 seconds Difficulty Retargets:DGW Decentralized MasterNode Network Confirmations on Transactions: 10 1.5% Premine -- For IPO & bounties, development. POW REWARD Total Block: 10 000 Block 1- 10 000: 1 000 After block 10 000 POS will start . 10% Annual PoS. ethereumdarkcoin_etd.png 2014-11 ETD EthereumDark 2014-11-05 60 1400000 Algorithm: X11, Ticker: ETD, Est Supply: 1.4 mill, Max POW Height: 14k, Block Time: 60s, POS Interest: 100%, Min stake Age: 1h, POS Starts at block 8200, POS % interest will half every 30 days, Future Features:, Stealth Addressing, Mixing with Nodes, In wallet Chat and more! qubitcoin_q2c.png 2014-01 Q2C QubitCoin 2014-01-12 248000000 2048 coins per block and 248 million total coins. whalecoin_whale.png 1 block 2014-08 10 WHALE Whalecoin 2014-08-01 1% 60 864000000 X11 - 5 Days POW / POS , 864,000,000 max coins , 120,000 coins per block , 60 sec block time , 7,200 blocks , 10 confirmations for blocks to mature , Re-target difficulty 1 block. , PoS 8.88% interest , PoS Min age: 8 hour , PoS Max age: unlimited , 1% premine for development sync_sync.png 2014-05 SYNC Sync 2014-05-17 0.5% 0.14 60 1000 katecoin_ktc.png 2015-08 KTC KateCoin 2015-08-05 200 4 60 closed source 42000000 Kate currency, KateCoin 2015/8/5 release, Total: 42 million, Symbol: KTC, RPC port 19765, P2P port 19766, sha256 algorithm, 60 seconds a block, the first 200 blocks each block reward a currency for testing the block chain. After each block award four coins, halving every 60 days, until the last 0.06 coin reward exhausted. Kate currency KTC official total population 481,484,329 official website ('h', 60, 'd') plebeiancoin_pleb.png 2014-08 PLEB Plebeiancoin 2014-08-09 2% 4200 67 23129316 Algo: X13, Symbol: PLEB, Block Time: 67 Seconds, Block Award: 4,200 PLEB, PoW Period: ~4.2 Days, Total Mineable PoW Money Supply*: 22,666,730 PLEB, Total Money Supply incl. Premine*: 23,129,316 PLEB, Last PoW Block: 5400, PoS Interest: 20% Annually, Coin PoS Age: Min. 1 day / Max. 7 days, Premine: 2% = 462,586 PLEB (1% for dev team fund, 1% for community bounties) hamburgercoin_hac.png 60 blocks 2014-02 3 20 HAC Hamburgercoin 2014-01-25 0 20 premined 500000000 Hamburger Coin - HamburgerCoin Abbreviation: HAC Algorithm: SCRYPT Date Founded: 1/25/2014 Total Coins: 500 Million Confirm Per Transaction: 3 for tx 30 for mint Re-Target Time: 60 Blocks Block Time: 20 seconds Block Reward: Random Diff Adjustment: Premine: 55%. judgecoin_judge.png 2014-08 110 JUDGE Judgecoin 2014-08-16 500 60 11450000 Total coins: About 11,450,000+ (After PoW ended), 500 Coins per PoW block (Ended), PoW Algorithm: X13 , PoW + PoS Hybrid , PoW Blocks: 38,640, Stake 6% Annually , 24hr Minimum coin age , 60 second block target , 110 confirmations for blocks to mature clockcoin_ckc.png 2014-01 CKC Clockcoin re-target each block, 60 coins per block, and 525 million total coins. minecoin_xmine.png 2015-08 XMINE Minecoin 2015-08-27 8400000 100 30 unmentioned premine 84000000 Symbol MINE, Pure scrypt Proof of Work coin, 84 Million coins available, Block reward 100 MINE halving every 210000 blocks, 30 seconds block time ('h', 210000, 'b') twerkcoin_twerk.png 2015-05 750 TWERK Twerk 2015-05-16 120 -1 TWERK = HIGH REWARD = SHA256 = 现在挖 TWERK, Hashing Algorithm: SHA256, Total POW: 4000 Blocks, POW Reward: , 1-10 = 100 TWERK (for anti-instamine), 10-499 = 4000000 TWERK, 500-999 = 2000000 TWERK, 1000-1499 = 1000000 TWERK, 1500-1999 = 500000 TWERK, 2000-2499 = 250000 TWERK, 2500-2999 = 125000 TWERK, 3000-3499 = 62500 TWERK, 3500-4000 = 31250 TWERK, POS Reward:, 1-19999 = 10000 TWERK, 20000-99999 = 25000 TWERK, 100000-199999 = 20000 TWERK, 200000-399999 = 15000 TWERK, 400000-infinity = 10000 TWERK, Block space: 2 Minutes, Maturity: 750 Blocks, Stake Min Age: 7 Hours, Stake Max Age: 24 Hours, Port: 32101, RPC Port: 32100, 20 MB maximum block size Can handle 100x more transactions per second than bitcoin ((20,000,000/1,000,000)*((10*60)/(2*60)), Magic Bytes: 0xa3 0xb2 0xf9 0xc7 lemoncoin_lmc.png 2014-03 LMC Lemoncoin 10000000000.0 helixcoin_hxc.png 2014-03 HXC Helixcoin 120000000.0 SHA-3 based cryptocoin with re-target using Kimoto's Gravity Well, 151 coins per block, and 120 million total coins. gongyicoin_gyc.png 5 blocks 2015-07 GYC Gongyicoin 2015-07-07 0 60 closed source 42000000 Currency name: public coin English gongyicoin abbreviated GYC public money was born on 7 July 2015, x13 algorithm based on and improved unique algorithms and professional mining machine graphics are not were mining, computer CPU can mining, fairer public coin issued a total of 42 million , without robbing and no pre dug. The overall interface of the wallet is updated (public money is committed to the development of social welfare)., R & D Author: GYc team, Core algorithm: Quark, Algorithm: x13 algorithm based on improved specific anti budget wallet algorithm, graphics and professional mining machine are not mining, computer CPU can mining, Total amount: 42000000 without pre excavation, Out of block speed: 60 seconds a block, an annual output of 5, Block Awards: the first annual output of second 10000000 third fourth 5000000 fifth 2000000, Block browser:, Official website: urodarkcoin_urod.png 2014-11 UROD UROdarkcoin 2014-11-09 60 2000000 algorithm = Scrypt block time = 60 seconds. Block 1 - 1,000,000 UROD, sold in ICO on Allcoin (20 BTC at ICO, 2000 satoshi per coin). Yearly POS reward will be 100% redkoin_rkn.png 2014-06 RKN Redkoin 2014-06-30 yes 120 0 RedKoin is planned to be a 100% proof of stake coin, except for the premine. The premine will be transparently distributed and monitored using an official faucet and block explorer. The public address of the faucet's wallet will be publicly available on launch. The faucet will follow a exponential decay model, paying out highly at launch and then tapering off quickly within a week. The exact amount of the premine is still up for debate. The initial planned amount was to be 50% but we decided that it was best left up to the community. bestcoin_best.png 2013-07 BEST Bestcoin 2013-07-31 premined re-target every 10 minutes or 2 blocks, 1 coins per block, and 1 million total coins. Abbreviation: BEST Algorithm: Date Founded: 7/1/2013 Total Coins: 1 Million Confirm Per Transaction: 6 Re-Target Time: 60 minutes Block Time: 5 Minutes Block Reward: 1 Coin Diff Adjustment 12 Blocks. sovereigncoin_svc.png 2014-03 SVC Sovereigncoin 600000000.0 1000 coins per block and 600 million total coins. cryptocoininccoin_cci.png 2014-09 CCI Cryptocoin Inc Shares 2014-09-25 180 clone of bottlecaps 100000 Cryptocoin Inc. is a registered Canadian company based in Barrie, Ontario, Canada. CCI is in the process of building out a medium-scale Bitcoin mining operation and offering its services as a virtual provider of cloud-based Bitcoin mining contracts in the near future. CCI Shares are a direct stake in ownership and profit sharing for Cryptocoin Inc. CCI Shares are Proof of Stake with null (0 coin) Proof of Work mining for transactional movements prior to full distribution. Total Shares: 100,000. Null/Zero-Coin blocks for Proof of Work to keep TX moving before staking period. 1% Per-Annum Stake with 15 Day Maturity and 30 Day Full Weight. Transactional Messaging, In-Client Extras zimstake_zs.png 2014-04 ZS Zimstake re-target every 1 block, 512 coins per block, and 70 million total coins. raincoin_rain.png 2015-09 20 RAIN Raincoin 2015-09-09 100% 120 2100000 Block time : 120 seconds, Mined: 2,100,000 ( for rain drop since day one ), 7,000 coin drop each hour for online wallet., Supply will be dry about 12.5 days., 10% POS yearly, Coin base Maturity : 20 fuguecoin_fc.png 2014-03 FC Fuguecoin 2014-03-17 50 experimental 84000000 ('h', 840000, 'b') cryptopowerscoin_cps.png 2014-05 CPS CryptoPowers X11 based cryptocoin with re-target every 5 hours, 10,000 random coins per block, and 43 million total coins. 2baccocoin_bacco.png 33 2015-06 2 BACCO 2bacco 2015-06-22 12% 512 120 81454545 Total coins = 81,454,545, 2bacco coin premine of 12% = 9,774,545.4 was needed to pay for current and ongoing costs for 2bacco coin, 2bacco coin specifications:, total coins = 81,454,545, Block Time = 2 minutes, Coins mature after 2 blocks, Target timespan = 33 hours, 512 coins mined per block to start, block size/amount halves each 70,000 blocks or approximately each 6 months, ('h', 70000, 'b') empyreancoin_epy.png 2015-04 170 EPY Empyrean 2015-04-29 50% 0.5 60 100000 Algo: Scrypt, Max coins: 100k, 100% POW, Difficulty halving: 20,000, Block reward: 0.5, TX fee: 0.00001, Coinbase Maturity: 170, 50% of the coin supply will be offered in an ICO format ('h', 20000) feathercoin_ftc.png 2013-04 FTC Feathercoin 羽毛币 2013-04-16 200 150 premined 336000000 re-target every 2016 blocks, 200 coins per block, and 336 million total coins. LTC clone with 4x more coins. starting diff 0 Feather Coin - Feathercoin Abbreviation: FTC Algorithm: SCRYPT Date Founded: 4/16/2013 Total Coins: 336 Million Confirm Per Transaction: Re-Target Time: Block Time: 2.5 Minutes Block Reward: 200 Coins Diff Adjustment. ulicoin_uli.png 2013-12 ULI Ulicoin 2013-12-22 4900 coins per block and 6 billion total coins. bluntcoin_blunt.png 1 block 2014-12 BLUNT Bluntcoin 2014-12-04 0.75% 60 7500000 ('h', 5000 'b') ufocoin_ufo.png 1 hour 2014-01 UFO UFO Coin 2014-02-11 5000 4000000000 ('h', 400000, 'b') gawpaycointm_xpy.png 1 block 2014-12 XPY Paycoin 2014-12-13 96% 49 60 premined 12500000 Name Paycoin™, Symbol/Tag XPY, Website, Github / Source Code, Forum, Hash Algorithm SHA-256, Proof-of-Work Scheme Proof-of-Work/Proof-of-Stake, Coins to be Issued 12,500,000, Block Time 1.00 minute(s), Block Reward 49.00 coins, Difficulty Retarget 1 blocks greedevolvedcoin_ge.png 30m 2015-07 5 GE GreedEvolved 2015-07-18 60 swap coin 50000000 hazmatcoin_hzt.png 2015-04 HZT HazMatCoin 2015-04-10 0 100 180 100000000. Ticker is HTZ Scrypt algorithm 100,000,000 total coins 100 Coin block reward 180 second target spacing 10% Yearly Interest No pre-mine No ICO or IPO 66coin_66.png 11 minutes 2014-02 66 66Coin 2014-01-25 0 0.000066 66 premined 66 re-target every 11 minutes, 0.00006600 coins per block, and 66 total coins. 66 Coin - 66Coin Abbreviation: 66 Algorithm: SCRYPT Date Founded: 1/25/2014 Total Coins: 66 Coins Confirm Per Transaction: Tx fees are 0.0000000066 Re-Target Time: 11 Minutes Block Time: 66 Seconds Block Reward: 0.00006600 Diff Adjustment: Premine: 4.54%. bawbee_baw.png 2014-04 BAW Bawbee 2100000 Litecoin/Scrypt Based Kimoto Gravity Well Pre-mining - 1 x Genesis Block, 2 x Checkpoint Blocks Limited supply, a tight fisted 2,100,000 coins only Symbol: BAW Reward: 2 BAWs / Block Block generation: 2 minutes Halves: Every 2 years vertcoin_vtc.png 3.5 days 2013-01 VTC 绿币 Vertcoin 2013-01-08 0 50 150 premined 84000000 Scrypt-Adaptive-Nfactor based cryptocoin with re-target every 3.5 days, 50 coins per block, and 84 million total coins. Vert Coin - VertCoin Abbreviation: VTC Algorithm: Scrypt-Adaptive-Nfactor Date Founded: 1/8/2013 Total Coins: 84 Million Confirm Per Transaction: Re-Target Time: 3.5 Days Block Time: 2.5 Minutes Block Reward: 50 Coins Halves every 4 years Diff Adjustment: Premine: 3 Blocks. ('h', 4, 'y') rebirthcoin_reb.png 1 block KGW 2014-09 50 REB Rebirthcoin 2014-09-05 0 60 3527369 Name: Rebirth [REB], Algo: SHA-256, Block time: 1MIN, Shares: (750,000) ICO + (401,339.2) Free Distribution 350,000 + Bounties 51,339 , ICO = Will be Host on C-Cex.com (Total 22.5 BTC (750,000 X 0.00003000 ) ), Free Distribution = (1 share = 100 REB ) (1 Per Person ), Bounties = ( 1 share = 100 REB ), Blocks, Blocks( 1):ICO+Distribution = 1,151,399.2 REB , Blocks(2-14401) = 0 REB (10 Days ) Total: 0 REB , Blocks(14402 to 172800) = 15 REB (110 Days ) Total: 2,375,970 REB, Blocks(172800-X) = 0.01 REB, Small inflation is to keep the network hashing, (120 days until reaching block 172800), Block Retarget: every Block, Total coin supply :3,527,369 REB + yearly inflation, Coins mature after 50 blocks, Bitcoin 0.9.2 Based , Fast Transactions vtorrentcoin_vtr.png 2014-12 50 VTR vTorrent 2014-12-11 3% 200 60 20000000 Total supply: 20 million Coinbase maturity: 50 blocks Target spacing: 60 seconds Premine: 3% Algorithm: Scrypt (Hybrid PoW/PoS) Stealth: All-time, impossible to trace transactions through the block chain. Proof of Work Last POW block: 97,000 Block reward: 200 VTR Proof of Stake PoS interest: 5% per year Min stake age: 6 hours Max stake age: 6 days darkblockcoin_dblo.png 2014-08 DBLO Darkblock 2014-08-23 0 5000 45 5000000 Darkblock Coin Built up by the community. Specifications:, X11, 5,000,000 PoW coins, 5,000 coin reward per block, 1,000 total PoW blocks, 45 second block time, No Premine, No IPO, No hidden premine (we will ask for code review from anyone) bitchcoin_btch.png 2014-09 720 BTCH Bitchcoin 2014-09-21 0 10 60 1000000 20 BTCH per block for 1st month. Then drops to 10 BTCH, Every 10K blocks, there is a bitch block with no reward, no premine, X11 because it's the one people bitch the least about and I already had a coin with X11 so it was easy, First “fair ninja”, No addnode shit needed, 720 block maturation time. (3 days), to ensure people bitch about not being able to dump granite_grn.png 2014-05 GRN Granite heelcoin_heel.png 2015-07 HEEL Heelcoin 2015-07-12 5000000 60 100000000 signaturecoin_sign.png 2014-08 3 50 SIGN Signaturecoin 2014-08-01 0 60 18000000 SignatureCoin is unique, in that it will have an anonymous wallet ready by launch time. There will not be “plans” to do so in the future, as with many other coins. The anonymous wallet features are freshly implemented by the development team from scratch according to the principles of Coinjoin, SignatureCoin comes with an innovative distribution model. Of the total PoW coins, 75% will be distributed *freely* to community members that meet certain criteria, on a daily basis and over a period of 90 days. The number of coins distributed depends on qualified people; for example, if 300 members qualify, they will each receive a daily distribution about 300 coins. (Actual values may change), The rest of the coins will be (1) mineable through PoW Mining (2) be used in mixers for anonymous transfers. The coins used in mixers are considered as “service” coins and will not be in circulation. These coins are required by mixers as “buffers” to encrypt the transfers with dynamic addresses. We will provide detailed percentages at launch time., The distribution will be sent to qualified members who actively keep their signature updated according to that given in OP and SignatureCoin website., The details of distribution will be posted at launch time. Distribution will be done through the script in an automatic fashion., SignatureCoin uses many other advanced features, such as separation of PoW and PoS. This leads to a super fast transaction time, usually less than 1 min in a PoS-established network (usually after 2-3 days of the launch). It also uses true randomness for superblocks, offering pleasant surprises to miners., Though with great features, we don't do IPOs to get money. Nor do devs reserve any pre-mines for them. Devs will, like many other people, receive the coins according to the standard distribution method., This is a community coin - the community is the true owner of the coin. All community members are expected to have creative ways to support, promote, help, and enhance the coin. We hope that this set-up will serve all in an exciting yet harmonious way., SignatureCoin (SIGN), X11, Anonymous wallet using CoinJoin technology (fresh implementation from scratch) - ** available at launch time ** !, Will have 3-5 dedicated mixers., PoW/PoS separate, 3 transaction confirmations (< 1min on average), 50 minted block confirmations, Total PoW coins will be about 13 millions and total coins including PoS will be about 18 millions., No IPO, No premine for devs. gamecryptocoin_gec.png 2014-06 240 GEC GameCryptocoin 2014-06-27 12.5% 250 60 80000000 GameCrypto peer-to-peer currency launched for use as payments processing system in games where some economic components can be added., It can be resource extraction, item crafting, hunting or in-game trading. Both online games can be a good candidate to integrate cryptocurrency donate/withdrawal., Algo: X11 PoW/PoS | Last PoW block: 40320 | Block time: 60 sec | Block reward: 250 Cryptos, no halves, Maturity: 240 blocks | Retarget: 12 minutes | PoW stage: 4 weeks, PoS starts 7 days after launch, Total emission: 80.000.000 Cryptos | Transaction confirm after: 10 blocks, Stake interest: 10% | PoS start block: 10240 | Min/max stake: 24 hour / 30 days, PREMINE 12.5% for IPO & BOUNTIES nemcoin_nem.png 2014-01 NEM NEM 新经币 2014-01-19 100% infrastructure 4000000000 2nd gen developed from bitcoin and NXT protocols NEM Coin. payzorcoin_pzr.png 2014-07 PZR Payzorcoin 2014-07-24 3% 80 1500000 The coin with real use., Ticker: PZR, Algorithm : scrypt, Premine : 3% (to cover costs of development and bounties), Block time : 80 seconds, PoS: 8% per year, Block reward: 50, PoW Time: 10 days, Max coins: 1,5 million, Twitter:, Website:, IRC freenode: #payzor, Spanish version:, Chinese version:, Italian version:, Polish version:, Russian version:, WHATS NEW THAT PayzorCoin BRINGS ?, Lamassu has put up their code as open source, we decided to fork it so the ATM machines will be able to trade BTC/currency + PZR/currency + PZR/BTC., This way our coin will have use as a real cash exchangable currency, not just other altocoin that changes few things in the code., We`ve already contacted Lamassu to speak with them about implementation of code without giving it away as Open Source - we do not want other altocoins to just copy what we prepared., Currently we have place in one of biggest malls to put our ATM at in Berlin/Germany and after launching it, we already have 2 other places that already own Lamassu confirmed they will implement code if it works - 1 in USA and 1 in Asia., After sucessfuly launching code for Lamassu we plan to add our currency to other Bitcoin ATMs. This way we can became real force in decentralised world of cryptocurrency., Proof of ATM:, Proof of Lamassu accepting forking the code:, WALLETS, source and windows qt., - win wallet in /resources/, sources in proper folders - bundle of sources & qts., - MAC OSX wallet, Code:, [2014/07/22 20:38:51]server=1, listen=1, rpcuser=user, rpcpassword=pass, rpcport=14621, addnode=162.218.92.33, addnode=167.160.94.200, addnode=173.78.49.31:14622, addnode=88.213.221.77, POOLS,,,,,,, multipool,, - multipool and own port, EXCHANGES,,, block explorer, ARTICLES/MERCHANTS/GAMES, article, article, GAME, FAUCET, PAYZOR COIN BOUNTIES, Logo, Reddit Tipbot 200 PZR, Twitter Tipbot 200 PZR, Explorer claimed, Pools, P2p-pool - 100 PZR, Games - 20 PZR each, Faucet - 50 PZR, Video - claimed, Articles on Blogs - 10 PZR each, Translations - Russian, Chinesse, Spanish, Polish, French, Andoir Wallet 500 PZR, OSX Qt, Lamassu GUI Translations - Chinesse, French, Polish, Spanish, Russian - all claimed nectarcoin_sweet.png 2014-10 SWEET Nectarcoin 2014-10-05 100% 60 12500000 12.5 million ico coins, POS framework that will include new POP* (Proof of Promotion) adoption incentive system that rewards users for sending coins to a new zero-balance address distant from their own source address. Tx fees: market-based. Block generation: 1 minute. Stake minimum age: to be determined. Variable Network-Stake-Dependent Interest: up to 30% a year. Proof of Promotion: Award of a set amount of coins for each send to a new zero-balance address to which the coin holder is not historically, closely associated aurumcoin_au.png 5 hours 2014-09 2 AU AurumCoin 2014-09-23 0.9% 1 60 300000 Coin type Bitcoin (SHA256), Halving 150000 blocks, Initial coins per block 1 coins, Target spacing 1 min, Target timespan 5 h, Coinbase maturity 2 blocks, Premine 0.9 % For Exchange,pool,advertisement, Max coinbase 300000 coins ('h', 150000, 'b') bittorcoin_bt.png 2014-07 BT BitToCcoin 2014-07-28 0.3% 120 1000000 We present BitTor (BT), this is a project we've been working on for quite a while now. The objective is to have the ultimate Tor based privacy coin. This coin will feature Tor technology at launch and there a lot more privacy enhancement features being cooked behind the scenes. Visit this thread or our Twitter regularly for more info., 2 minute blocks, 1M total BitTors, 1 coin per generated block for the first 600k blocks, 0,5 between blocks 600k and 1.2M, 0.25 between blocks 1.2M and 1.6M., Difficulty retargets after every block, Algorithm: X13, 720 blocks per day, 0.3% Premine (With a value equal to the first 3000 blocks), Launching 28/07/2014, 7pm GMT linearcoin_lnc.png 2014-06 LNC Linearcoin 2014-06-21 <0.001% 300 25025025 No IPO, No POS, < 0.001% Premine, No worries!, POW Algorithm: X11, Total Coins: 25,025,025 Coins, Block Time: 5 Minutes, Confirmations: 10, Block Maturity: 80, Difficulty Re-target Time: Every Block (KimotoGravityWell), Halving Rate: Decreases 0.00005 Coins every block, Mining Window: ~9.51 years reducinglin furrycoin_fur.png 2014-05 FUR FurryCoin re-target every 60 blocks, 5000 coins per block, and 50 million total coins. halcyoncoin_hal.png 2014-08 HAL Halcyoncoin 2014-08-16 0 60 2294000 9% POS Interest per Year, Min Stake Age 12 Hours, Max Stake Age 30 Days, Total coins: 2294000 - Will Be Closer To 2m With POS, POS Begin At Block 2000 carboncreditcoin_unit.png 2015-03 UNIT CarbonCredit 2015-03-19 100% 60 1000000000 Name: CarbonCredit , Symbol: UNIT, Algorithm: Scrypt (POS), Interest: 2% / year, Total supply: ~16,800,000,000 UNIT, First Block: 1,000,000,000 UNIT, 100,000,000 UNIT for “Hidden” Company (don't sell before have debit card), 10,000,000 UNIT for ICO @ 100 satoshi, 290,000,000 UNIT for ICO @ 200 satoshi (Bittrex), 600,000,000 UNIT for Bounty/Activity gollumcoin_glm.png KGW 2014-03 GLM Gollumcoin 50 17000000 re-target using Kimoto's Gravity Well, 50 coins per block, and 17 million total coins. vendettacoin_vac.png 2014-02 VAC Vendettacoin Scrypt-jane based cryptocoin with re-target every week, block range coin rewards, and 21 million total coins. autocoin_aut.png 2014-03 AUT Autocoin 50 coins per block and 21 million total coins. eagscurrencycoin_eags.png 2014-12 EAGS EAGScurrency 2014-12-03 6000000 60 20445500 PREMINE: 0% (No Premine), NAME: EAGS Currency, TICKER: EAGS, ALGO: SHA256, TOTAL COINS: 20,445,500, POW COINS: 10,445,500, Miners: 40% of the 10 millions + the 445,500, XDE ICO INVETORS AND OTHERS: 60% of the 10 millions, MINING PHASE: 5 to 10 days (May change), FAIR LAUNCH: 0 Block Reward after 1st Block - Up to 400 blocks, BLOCK REWARD: none yet (Will be updated), POS COINS: 10 millions, POS INTEREST: 5%, MIN AGE: 24HRs, MAX AGE: 90 days (Can be changed) bitpeso_btp.png 5 hours 2014-04 BTP BitPeso 2014-03-15 1000 300 200000000 1000 coins per block and 200 million total coins. knightcoin_kgc.png 2014-05 KGC Knightcoin 2014-05-12 10% 400 60 250000000 The difficulty of adjustment: 1440 adjust again. First pieces of 25000000 coins. Second the beginning of each block of 400 dollars until the end of 225000000 coins. ('h', 1440, 'b') harmonycoin_hmy.png 2014-01 HMY Harmonycoin 2014-01-01 400 60 7032000 ipo, anon Algorithm: X13, 7,032,000 Total Coins , POW/POS & Anon, 6% Annual Interest once coin becomes Pure POS, Block Height: 60 Seconds, Confirmations: 6 , POW blocks: 10080, Block Reward: 400 coins per block, .5% of every POW block will go to a charity fund sauroncoin_sau.png 2013-08 SAU SauronRings trickydickyfunbillcoin_tdfb.png 2015-04 200 TDFB TrickyDickyFunBill 2015-04-28 0 60 1500000 Today is a great day, we proudly announce to you all the birth of TDFB coin - a decentralized web infrastructure. TDFB Foundation is made up of a group of people. TDFB is not the same promise-full cryptocoin, we might bring many of the coolest features and functionalities, such as POW-POS phase (stake multiplier! WoW!) and possibly a web wallet. The TDFB coin is just another example of the versatility of altcoins. Ticker: TDFB No Premine, No IPO/ICO. Block time: 60 seconds Coinbase maturity: 200 blocks min stake age: 8hrs max stake age: 36hrs Algo: SHA256d recyclingcoin_rex.png 2014-05 REX Recyclingcoin 2014-05-19 cypherfunk_funk.png 2014-03 FUNK Cypherfunks 49300000000.0 re-target using Kimoto's Gravity Well, time based coin rewards, and 50 billion total coins. mcoin_m.png 1 block 2015-01 M Mcoin 2015-01-01 80 150 closed source 2000000000 MCOIN SPECIFICATIONS, X15 Algorithm, Block discovery: every 2.5 minutes, 80 coins per block reward, Total amount 2 Billion coins, Block retarget time on every block, Proof of Work, Proof of Stake reward 8%, (and decreasing every year by 1% until reaching 1% yearly) tradecoin_tdc.png 2013-06 TDC TradeCoin 2013-06-25 100% Scrypt-based fully mined cryptocoin with 147 million total coins. 100% premine LTC clone experiment. Dying. woolfcoin_woo.png 2015-05 WOO Woolfcoin 2015-05-28 0 50 120 2100000 Name: WOOLFCOIN, Symbol: WOO, Algorithm: SCRYPT,POW, Block Reward: 50, Block Reward Halving Rate: 21000, Block time: 120 sec, Premine: 0% !!!!, Total supply: 2 100 000 ('h', 21000, 'b') ultracoin_utc.png 10 blocks 2014-02 5 50 UTC UltraCoin 2014-02-01 2% 30 60 100000000 Scrypt-jane based cryptocoin with re-target max 200% per 30 minutes, 50 coins per block, and 100 million total coins. ('h', 4000000, 'b') h2ocoin_h2o.png 2014-03 H2O H2Ocoin 2000000000 The specs of the coin are the following: Scrypt -N Adaptive KGW ( Kimoto's Gravity Well ) Total coin supply: 2 billions First month blocks will give per 2000 H2O Drops (Coins) so we attract miners. While, after 1 month, it will go to 1600 coins. novacoin_nvc.png 2013-02 NVC Novacoin 2013-02-09 2000000000 Scrypt hashing[like LTC], proof of stake [like PPC]. Novacoin-based. established 100% NVCS (1.3%), SA 30/90, Coins 95 hispacoin_hpc.png 2014-03 HPC Hispacoin 50 50000000 50 coins per block and 50 million total coins. templecoin_tpc.png 2014-03 TPC Templecoin 99.81% premined 2718281828.45904523 shekelcoin_she.png 2013-11 SHE Shekelcoin 2013-11-27 re-target every 2016 blocks, POW, 50 coins per block, and 21 billion total coins. onyxcoin_onyx.png 2014-07 ONYX OnyxCoin 2014-07-30 1% 60 15000248 A new precious gem of X13 cryptocurrency, ONYX., RPC Port: 51996, P2P Port: 50995, Algorithm: X13 POW/POS starts on block 1,500,000 (2.8 years approx.), Ticker: ONYX, Max Proof-of-Work Coins: Approximately ~15,000,248 ONYX, 5% Proof-of-Stake Yearly Interest, 1% Premine, Block 1 is 1% Premine; Block 2-250 are 1 ONYX #low reward to prevent an instamine; Block 251-1500000 are 10 ONYX; PoW Ends on Block 1,500,000 (approx. ~2.8 years), 60 Second Blocks, Difficulty adjusts per block, Block Explorer:, Block Explorer with Richlist Coming Soon, DOWNLOAD ONYX, GitHub Source Code:, Windows onyxcoin-qt:, Mirror #1:, Mac onyxcoin-qt:, Linux onyxcoin-qt:, Mining Pools, We need pool operators to add us!,,,,,,,,,,, Exchanges, We will be working to get Onyxcoin on exchanges!, Block Explorer available in wallet, online version needed!, Twitter:, IRC Chat Channel is #onyxcoin, ONYX Game:, ONYX Faucet:, Roadmap, Where is Onyxcoin going?, We are a group of anonymous developers that want to better cryptocurrency, with Onyxcoin we bring to you fixed block rewards, X13 proof-of-work mining, and proof-of-stake interest after 2.8 years. Onyxcoin features many of the latest features found in many other popular coins, we will strive as developers to create a very stable, fully functional, feature packed cryptocurrency. We do have plans to add anonymous features in the future for users to have the option to create anonymous transactions in the blockchain. A date and whitepaper will be available in time. Want to join us? PM onyxdev on bitcointalk. This thread and graphics will be updated soon., 7/30/2014 - ninja launch, Next date/timeline to be announced., The premine address is: oMpC9Y6ZQpMJ18aoTPFq1G3yNaSqrrQjKZ, For transparency and monitoring. It will not be dumped, we will be using the premine for future bounties and contests. riecoin_ric.png 12 hours 2014-02 RIC Riecoin 2014-02-03 0 50 84000000.0 Bitcoin fork using prime constellations as PoW with re-target every 288 blocks, 50 coins per block, and 84 million total coins. Riecoin Foundation is online: ('h', 840000, 'b') hongketocoin_hkc.png 5 hours 2014-05 50 HKC Hongketocoin 2014-05-17 60 600000000 The Coin For Fighters! Remix Version ('h', 50000, 'b') blackmarketcoin_blm.png 2015-05 200 BLM BlackMarket 2015-05-16 60 1000000 Algorithm: X15, Block time: 60 seconds, Block reward: 1 - ICO, 2-99 = 0, 100-1100 = 10, 1101-1500 = 20, 1501-2000 = 25, 2001-2500 = 20, 2501-3000 = 15, PoW Supply: ~100K, Last PoW block: 3000, Maturity: 200 Blocks, PoS Reward: 1, PoS Min Stake Age: 6 hours, PoS Starts: after POW sentarocoin_sen.png 2015-06 SEN Sentaro 2015-06-13 100 60 990000 Algorithm: Scrypt, Ticker: SEN, PoW blocks: 10000, Reward: 100 SEN ( starting from block 100 ), Block Target: 60 sec, PoS: 1%, Stake Min age: 6 hrs, Masternode: 5000 SEN , Masternode reward: 1 SEN + 10% of PoS, Anti-instamine: 0 reward first 100 blocks stalwartbucks_sbx.png 2014-01 SBX Stalwartbucks parkbytecoin_pkb.png 2015-05 PKB ParkByte 2015-05-07 2000000 90 60 25000000 Coin Name: ParkByte, Algo: SHA256, Ticket: grifcoin_gfc.png 2 days 2014-06 GFC Grifcoin 2014-06-02 120 42000000 Initial Coin per Block: 40, Halving Rate: 525600 blocks (about every 4 years), Block Generation: 2 minutes, Readjust Difficulty: 2 days, Maximum Coins: 42,000,000, Algorithm: Scrypt, If you are a fan of Red vs Blue on Roosterteeth, then this cryptocurrency is for you! Based on a character known as Grif, Grifcoin is a new altcoin for the true RvB fans. If you haven't seen RvB, check it out on Roosterteeth (). Remember to start on Season 1! This is still very early in development. I am just releasing to those who want to get started early. In order to establish some sort of value, I am going to back Grifcoin: 0.01 BTC for every 5,000 GFC. Being that the average subpar laptop can (as of now) mine Grifcoin about 1,500 per night, I think this is pretty generous. Basically, if you donate some of your computing power to the network and manage to mine 5,000 Grifcoins, I will send you 0.01 Bitcoins. socialkryptcoin_sokry.png 1 block 2014-10 SOKRY SocialKrypt 2014-10-13 120 3040000 Algo: x13, Block Time: 2 Minutes, Diff retarget: every block, Block reward: , First block mined for ICO funds: 2'500'000 SOKRY (Unsold coins will be destroyed), 2 - 3600 blocks = 0 SOKRY for ICO duration and fair start., 3601 - 7200 blocks = 100 SOKRY per block for a total of 360'000 SOKRY (ca.), 7201 - 10080 blocks = 50 SOKRY per block for a total of 143950 SOKRY, Total coins if ICO will be sold entirely: 3'040'000 SOKRY, Pos Interest: 5% crypticoin_xcr.png 2014-06 XCR Crypti 和氏币 2014-06-16 60 infrastructure 1000000 Crypti is a 2nd generation crypto-currency designed from the ground up. It's built to solve the biggest problem with current crypto-currencies, a lack of purchase motivation. Crypti is being built from scratch, complete with a new Elliptic curve algorithm, relying on No previous crypto-currencies code. It uses a combination of 3 radically new proof-of-stake algorithms making it a first of it's kind. It's being developed in lightweight Node.js, and can be ran on any device out there, including mobile devices. vastcoin_vast.png 1 block 2014-07 7 70 VAST Vastcoin 2014-07-21 0 4000 30 3937500 Name: VastCoin, Symbol: VAST, Algo: X11, POW/POS hybrid, POW: 25 hours of mining, Block time: 30sec, Block reward: 4000 VAST, halving every 500 blocks, Last POW block: 3000, POS starts at block 1500, Block retarget: every block, POS interest: 5% per year, Coins stake after 15 hours, Confirmations per transaction 7, Coins mature after 70 blocks, Premine: 0 (0%) ('h', 500, 'b') clustercoin_clstr.png 1 block 2014-08 5 100 CLSTR Clustercoin 2014-08-23 7000000 60 36610000 Coin name: ClusterCoin Symbol: CLSTR Algorithm: SHA-256 ICO coins: 7,000,000 Block rewards: 1st block: 7,000,000 CLSTR (ICO coins) 2 - 13,000 blocks: 0 CLSTR (no mining reward during ICO) 13,001 - 10,000,000 blocks: 30 3 CLSTR Confirmations for blocks to mature: 100 Re-target difficulty each block Total CLSTR coins generated during pow: 36,610,000 CLSTR Block time: 60 seconds Transaction confirmations: 5 legendarycoin_lgd.png 2014-03 6 50 LGD Legendary Coin 2014-03-25 3% 7 120 10000000 ('h', 64800, 'b') gascoin_gas.png 2013-07 GAS Gascoin 2013-07-27 dynamic retarget, POW, and dynamic block reward. keycoin_key.png 2014-07 KEY Keycoin 2014-07-15 0 60 1000000 Distribution: POW/POS, Algorithm: x13, Total Coins: Approx 1,000,000 in POW., No premine., Fair blocks for Community Bonus, Blocks 1601-2000 Larger Rewards Than First., Block Reward: Block 1 - 400: 500 KEY, Block 401 - 700: 450 KEY, Block 701 - 1000: 350 KEY, Block 1001 - 1300: 300 KEY, Block 1301 - 1600: 250 KEY, Block 1601 - 2000: 550 KEY, Block 2001 - 2100: 150 KEY, Block 2101 - 3000: 100 KEY, Block 3001 - 4000: 50 KEY, Block 4001 - 4500: 1 KEY, POS Only After Block 4500, Block Time: 60 Seconds, Yearly POS Interest: 20%, 50 Minted Block Confirmations, p2p port: 37941, rpc port: 37942, Website: Coming Soon, Downloads: Source:, Windows Wallet: Mega, Nodes: addnode=188.226.184.232, addnode=162.243.122.97, addnode=107.170.153.13, addnode=95.85.62.12, addnode=178.62.54.145, Exchanges: Bittrex, Cryptsy, Pools: SuchPool, MineP.it, P2PoolCoin.com - EU Server | Canada Server, POOL.MN, dedicatedpool, suprnova.cc, Chainz: Now on CoinMarketCap also. zhaocaibicoin_zcc.png 2012-08 ZCC ZcCoin 招财币 2012-08-05 0 1000000000 Scrypt-jane based cryptocoin with POW / POS, 50 coins per block, and 1 billion total coins. Yacoin clone. ZC Coin - ZCCoin Abbreviation: ZCC Algorithm: SCRYPT Date Founded: 8/5/3012 Total Coins: 1 Billion Confirm Per Transaction: Re-Target Time: Block Time: Block Reward: 50 Per block/Varies Diff Adjustment: Premine:. Novacoin-based. (50, 'v') gramcoin_gram.png 2015-04 GRAM Gramcoin 2015-04-22 3% 220 120 65360000 Name: GRAMCOIN, Symbol: GRAM, Algorithm: SHA256,POW, Block Reward: 220, Block Reward Halving Rate: 144000, Block time: 120 sec, Difficulty retarget: Dark Gravity Wave, Premine: 3% (only for coin development), Total supply: 65360000 ('h', 144000, 'b') topcoin_top.png 2014-02 TOP TopCoin 11520000000 tripcoin_trip.png 1 block 2015-08 10 240 TRIP TripCoin 2015-08-24 60 closed source 100000000 Ticker: TRIP, FullName: TRIPCoin, Total coins: 100 000 000, Algo: 100% POS - scrypt, Annual Interest: 5%, Block time: 60 seconds, Min. transaction fee: 0.0001 TRIP, Confirmations: 10, maturity: 240, Min stake age: 4 hours, no max age * Difficulty retarget: every block, Blocks: 1 - 40mln TRIP, 2-2880 - 0.1mln TRAVEL. The value of a coin is fixed in relation to the US dollar. 1 TRIP = 1$ vocalcoin_vocal.png 2014-11 VOCAL Vocal 2014-11-06 100% 60 28000000 There will be initially 28 million vocal coins to be used on the entire platform., 100% Proof of Stake, Initial Coin Supply: 28 Million coins, Stake Interest: 4%, Minimum Stake: 6 hours energycoin_enrg.png 2014-05 ENRG Energycoin 2014-05-15 30 110000000 Free IPO lifeextensioncoin_ext.png 2015-06 EXT Life Extension 2015-06-27 100% 60 5000000 LAUNCH : June 27, TICKER: EXT, Algo: QUBIT, Max Supply: 5 Millions, FULL PoS: 1% yearly, Minimun Stake Age: 1 hour, IDO: 100% (5 Millions) xagoncoin_xagon.png 1 day 2015-11 XAGON Xagon 2015-11-12 40000000 60 60 200000000 XAGON [~SLATE~] Distributed Identity, T2 Alpha, Abstract Commerce ('h', 1333334 'b') solidcoin2_sc2.png 2011-10 SC2 SolidCoin2 2011-10-10 vaginacoin_vag.png 2013-05 VAG Vaginacoin 2013-05-31 retarget every 690 blocks, 420 coins per block, and 69 million total coins. mastermintcoin_mm.png 2015-11 50 MM MasterMint 2015-11-17 150000000 60 1500000000 MasterMint, Features, 1500000000 Total supply, 50 % PoS annual, Masternodes 50% of PoS reward, 60 seconds block time, 1 hour minimum stake, 50 confirms to mature greekcoin_grk.png 2014-06 GRK Greekcoin 2014-06-06 30 100000000 X11 - POW/POS coin, Number of coins: 100,000,000, POW blocks : 10,000, POS : 5% yearly, Block Time : 30 sec, Confirmation : 6, Pre-mine:1% taxitokenscoin_hack.png 168 hours 2015-01 10 HACK TaxiTokens 2015-01-01 1% 50 600 21000000 Taxi Tokens ($HACK) is a new decentralized cryptocurrency that will be used “exclusively” by NYC for-hire vehicles. EX: Medallion Drivers, FHV Drivers and Others. The New York City Taxi and Limousine Commission currently is nation’s “largest” and most active taxi and for-hire vehicle regulator with a current Driver License count of 179,873 qualified drivers servicing an average of 241 million passengers a year generating over $11 billion annually. ($HACK) was designed by a thriving New York City taxi cab owner/operator who is dedicated “initially” on attracting NYC merchant/user adoption through many industry contacts and peers developed over a 20 year period of employment in the transportation industry. In 2013, the Taxi and Limousine Commission (TLC) realized a number of achievements in keeping with its mission of ensuring that New Yorkers and visitors to the city have access to taxicabs and other for-hire ground transportation that are safe, efficient, sufficiently plentiful, and provide a good passenger experience. However, NYC for-hire drivers and transportation owner/operators are always looking for new ways to generate income with the rising operating costs incurred to remain afloat in this competitive industry. ($HACK) was created to encourage the adoption of cryptocurrency into this sector while creating a new and innovative payment platform and revenue stream that affords both driver(s) and passenger(s) the opportunity of utilizing our secure network as a preferred method of payment. ($HACK) eliminates typical merchant provider transactions fees allowing users to save money while increasing their overall bottom line profits., Algorithm: Scrypt Nr. Coins: 21 Million Coins., Coin Type Litecoin (Scrypt), Halving 210000 Blocks, Initial Coins Per Block 50 Coins, Target Spacing 10 min, Target Timespan 168 h, Coinbase Maturity 10 Blocks, Premine 210,000 Coins, Max Coinbase 21000000 + 0 = 21000000 Coins ('h', 210000, 'b') skycoin_sky.png 700 blocks 2013-12 SKY Skycoin 2013-12-22 0 15 120 4000000 Sky Coin - Skycoin Abbreviation: SKY Algorithm: SCRYPT Date Founded: 12/22/2013 Total Coins: 4 Million Confirm Per Transaction: Re-Target Time: Block Time: 2 Minutes Block Reward: 15 Coins Diff Adjustment: 700 Blocks Premine:. austrocoin_atc.png 2014-06 ATC AustroCoin 2014-06-03 60 8500000 dubeucoin_dbu.png 10 blocks 2014-12 DBU Dubeucoin 2014-12-10 500 30 2000000000 Dubeucoin is a lite version of Bitcoin using scrypt as a proof-of-work algorithm., 30 second block targets, subsidy halves in 840k blocks (~4 years), ~2 billion total coins, The rest is the same as Bitcoin., 500 coins per block, 10 blocks to retarget difficulty ('h', 840000, 'b') binarycoin_bic.png 2013-12 BIC Binarycoin 9 blocks premined re-target every .2 days, 20 coins per block, and 84 million total coins. BINARYCOIN BIC Website SPECIFICATIONS Algorithm: Scrypt Max Coins: 84,096,000 Block Time: 15 Seconds Difficulty Retarget Time: 0.2 days Premine: 9 blocks as test Starting difficulty: 0.00024414 20 coins per block DOWNLOADS Windows Linux MacOS Source Code Sample binarycoin.conf server=1 rpcuser=%USER% rpcpassword=%RPCPASS% addnode=198.98.102.131 PORTS Unknown. POOLS Cryptopools EXCHANGES None Available. SOCIAL None Available. SERVICES / OTHER None Available. Launch thread: Abbreviation: BIC Algorithm: SCRYPT Date Founded: 12/23/2013 Total Coins: 84.09 Million Confirm Per Transaction: Re-Target Time: Block Time: 15 Seconds Block Reward: 20 Coins Diff Adjustment. coffeecoin2_cfc2.png 1 block 2014-05 4 40 CFC2 CoffeeCoin 2014-05-05 20 relaunch 100000000 Algorithm: 100% Proof-of-Stake, Block Time: 20 Seconds, Difficulty Re-target: Every block, Total Coins: Starting at 100,000,000, Mined Block Confirmation: 40, Transaction Confirmation: 4, Minimum Stake Age: 24 Hours, Maximum Stake Age: 90 Days bumbacoin_clot.png 5 hours 2014-05 CLOT Bumbacoin 2014-05-02 10000 60 meme 201600000 ('h', 10080, 'b') 2112coin_yyz.png 360 blocks 2015-02 21 YYZ 2112coin 2015-02-14 0 120 120 21120000 No Premine, Based on Bitcoin 0.9.3 SHA256 source, Block target: 2 minutes, Difficulty retargets every 360 blocks, Block reward: 120 YYZ, halving every 88,000 blocks, Total coin supply: 21,120,000 YYZ, Coinbase maturity: 21 blocks ('h', 88000, 'b') pencoin_pen.png 2015-01 PEN PENCoin 2015-01-23 60 543732 total volume of coins released: 1,000,000 coins, volume of ICO: 250,000 coins, block time: 60 seconds, rewards:, first 2 days = 25 coins, another 2 days = 50, another 2 days = 75, after that, mining ends and coin is fully POS, yearly POS reward: 15%, algorythm: Scrypt sha1coin_sha.png 3.5 days 2014-03 SHA Sha1coin 2014-03-28 50 150 83550000 SHA-1 based cryptocoin with re-target every 2016 blocks, 50 coins per block, and 83.55 million total coins. ('h', 840000, 'b') imperiumcoin_impc.png 168 hours 2014-10 6 10 IMPC ImperiumCoin 2014-10-27 0 122 600 51240000 Premine: 0% - No Premine!, Coin type: Scrypt, Halving: 210000 blocks, Initial coins per block: 122 coins, Target spacing: 10 min, Target timespan: 168 h, Coinbase maturity: 10 blocks, Max coinbase= 51240000 coins, Confirmation : 6 ('h', 210000, 'b') bitcredits_credits.png 60 minutes 2014-01 5 CREDITS Bitcredits 2014-01-12 3.5% 10000 60 repertoire 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 From the international enterprise team who bring you GoldBar, Silverbar and PlatinumBar is the new BitCredits (Credits). maples_map.png 2013-09 MAP Maples 0 premined Clone of NVC. Maples - Maples Abbreviation: MPL Algorithm: SCRYPT Date Founded: 9/6/2013 Total Coins: 50 Million Confirm Per Transaction: 40 Re-Target Time: Block Time: 1 Minute Block Reward: 0-3000 50 Maples 3000-5000 30 Maples 5000-7500 20 Maples 7500+ 8 Maples per block halving every 3 years Diff Adjustment: Premine: 50K. wpcoin_wpc.png 2014-03 WPC WPcoin 2014-03-19 0 60 320000000 darkfoxcoin_drx.png 2014-08 6 100 DRX DarkFoxcoin 2014-08-09 0 95 60 15000000 Algorithm: Sha256, Total coins : 15,000,000, Coin maturity: 100 blocks, Transaction confirmation: 6, Last pow block: 7200 , Block reward: 95, Block time: 60 sec, Pow total coins 759,000 , Pos interest 9 %, Premine 0.5 %, Stake min age 12h, No max age. flashcoin_flc.png 2013-06 FLC Flashcoin 2013-06-02 re-target every 60 blocks, 1 coins per block, and 10 million total coins. bmwcoin_bmc.png 2014-01 BMC BMWcoin 2014-01-29 50 120 12000000 goldrushcoin_grc.png 2014-12 3 60 GRC GoldRushcoin 2014-12-12 60 25000000 The Gold Rush Coin is part of The Gold Rush Project. The overall project is taking the global gold rush research of Sam Sewell III and creating a full series of Books, Merchandising, and 2D and 3D animated versions of multiple games. Each step along the way and each game provides the player the ability to create and store wealth via gold backed crypto coin, Hybrid POW/POS, BlockTime 60 seconds, Block 1 - 500 = 1 GRCX, Block 501 - 720 = 5 GRCX, Block > 720 = 20 GRCX, 3 confirms per transaction., Total 25,000,000 GRCX, Scrypt POW, POS staking 24 hours - 42 days for max stake livecoin_lvc.png 2014-01 LVC Livecoin greenbackscoin_gb.png 2014-08 6 GB GreenBacks 2014-08-15 30 infrastructure 20000000 20 Million Initial Coins:, *Max of 10% additional coins created each year, dependent upon number of coins in staking, 100% Proof of Stake: , *10% Staking Interest – 8 Hours for coin maturity to begin staking, 30 Second Block Time & 6 block confirmations required:, *Transactions will be 25x faster than Bitcoin maidsafecoin_maid.png 2014-04 MAID Maidsafecoin 2014-04-22 0 1 intermediary token BTC-SAFE 430000000 MaidSafecoin is an intermediary token residing on the Bitcoin blockchain that will be exchanged on a 1:1 basis with safecoin the native currency of SAFE Network a P2P Internet platform. Safecoin is required to pay for services on the SAFE decentralized internet. litestarcoin_lts.png 1 block 2015-02 4 50 LTS Litestarcoin 2015-02-18 1.7% 1000 90 120000000 The perfect micropayment solution. Money over IP X11, PoW/Pos separate technologies with no effect on PoW mining, Advanced asic proof PoW/PoS coin, Random SuperBlock, True random, so no cheating by big hashpowers, 4 transaction confirmations (Very fast), 50 minted block confirmations, Short poS block time (less than 1 minute), 90 sec PoW block time, diff retarget each block for PoW, Daily random superblock payout 10X, Weekly random superblock payout 100X, block payout reduced 20% every 20 days, Initial payout: 1k, 15 sec PoS block time, PoS diff retarget each block , Block minted: 14 Total: 2005000 (1.7%), Min: 1 day / Max 100 days, Variable PoS payout: 60% first year / 20 % second year, Max suppli: 120 millions approx ('rp', 20, 20, 'd') findyoucoin_find.png 2014-11 20 FIND Findcoin 2014-11-19 100% 120 100000000 100% of FIND will be distributed via in-wallet faucet. All you have to do is just keep your wallet running to receive the funds. By doing so you will also be supporting the FIND network, FindYou Coin Specs, Name: FindYou Coin, Thicker: FIND, Initial coin supply: 100,000,000 FIND (will be distributed via faucet), Distribution period: 21 days (200,000 coins per hour), POS annual interest: 1%, No POW rewards. Algo is X13 (if that makes any difference - FIND will be 100% distributed via in-wallet faucet), Same distribution model as miraclecoin?, Faucet distributes 200 000 coins/hour. microcoin_mrc.png 2014-01 MRC microCoin 0 100000000000 Algorithm: Scrypt-Jane Proof-Of-Work with modified nFactor Block Time: 32 seconds No Premine bitswiftcoin_swift.png 2014-10 50 SWIFT BitSwift 飞速币 2014-10-02 30 blocknet 4000000 X15, 4 Million SWIFT Total Supply, 3% PoS Annually, Minimum 4 Hour Minimum Stake Age, No SWIFT Transfer Fees!, 30 Second Blocks xanaxcoin_xnx.png DGW 2015-04 XNX Xanaxcoin 2015-04-25 0 210 120 42000000 Name: XANAXCOIN, Symbol: XNX, Algorithm: SCRYPT(POW), Block Reward: 210, Block Reward Halving Rate: 100000, Block time: 120 sec, Difficulty retarget: D.G.W., Premine: NO!, Total supply: 42 000 000 ('h', 100000, 'b') lycancoin_lyc.png 2014-03 LYC Lycancoin 4950000000.0 iridiumcoin_iri.png 5 hours 2014-08 IRI IridiumCoin 2014-08-12 0.7% 770 300 77000000 ('h', 50000, 'b') eoncoin_eon.png 2014-04 EON EonCoin 1000000000 Algorithm : Scrypt Pow + PoS Symbol : EON Block target : 30 seconds Block reward : 5000 EON Retarget Every block Max PoW coins : 1 Billions EON P2P port : 7201 RPC port : 7200 teddycoin_tdy.png 2014-07
http://bel-epa.com/download/xml
CC-MAIN-2017-17
en
refinedweb
mvcConf 2 - Steve Sanderson: MvcScaffolding The .NET framework provides a simple API for sending email. I assume you are already acquainted with the handy namespace System.Net.Mail. However, dynamically generating the content of an email is still a bit tricky. Code that concatenates strings and variables is no fun to write or read! What we need is a way to write text (or html) templates which will be rendered with some data. ASP.NET MVC already has exactly this in the form of Views. So let's reuse the view engine infrastructure to create our emails. I created a simple library called Postal that does just that. This session will introduce Postal and walk through using the library. Actual format may change based on video formats available and browser capability.. Comments have been closed since this content was published more than 30 days ago, but if you'd like to send us feedback you can Contact Us.
https://channel9.msdn.com/Series/mvcConf/mvcConf-2-Andrew-Davey-Going-Postal-Generating-email-with-View-Engines?format=smooth
CC-MAIN-2017-17
en
refinedweb
Import Packages such as vue-validator Hi, I am having a issue importing the vue-validator. Where would I import them? Currently I put it in the router.js as import VueValidator from 'vue-validator’ Vue.use(VueValidator) I imported them directly on main.js to keep an overview off all the imports. here is my working example of vue-resource. import Vue from 'vue' import Quasar from 'quasar' import router from './router' import VueResource from 'vue-resource' Vue.use(Quasar) // Install Quasar Framework Vue.use(VueResource) - rstoenescu Admin @Nickk2 Like Gianni correctly indicated, it’s best to import packages in main.js. One comment though. I’d use Vuelidate instead of vue-validator. Much more clean and it will go very smooth with the pending improved form changes that will soon be released for Quasar.
http://forum.quasar-framework.org/topic/225/import-packages-such-as-vue-validator
CC-MAIN-2017-17
en
refinedweb
%”. Just]]> Recently?]]>). I! A;]]> Some time ago I was given a link to a nice youtube video. Around 00:04 I recognized “Stairway to heaven” intro. Friends told me I was wrong, they just sound alike. Challenge accepted! Let’s analyze it. First of all, let’s download the video from youtube. Visit e.g. and enter video URL:. Download the result (I chose MP4 480x360). Extract the sound with ffmpeg -t 10 -i video.flv suspected.wav First 10 secs is enough. Then load the file into Audacity and begin the sound processing. Because we are interested in a very specific sounds, let’s filter out frequencies which are above the scale of a guitar. We’ll use “Low-pass filter” with rolloff of 48dB/octave, filter quality of 0.7 and cutoff frequency of 700Hz. This will allow us to better hear where the guitar plays. Now, the hard work. What I want to do is to find beginning of the notes, and for every suspected fragment analyze its’ spectrum. Looking at the spectrum peeks, I’ll assume the biggest ones (in guitar tune frequencies) are the tones we are looking for. The good practical method to find the beginning of a tone is to listen to a fragment in slow motion. Also you may try to capture the correct moment by selecting a fragment and lowering its end until you hear no tone change at the end of this fragment. E.g. select a fragment from 3.4s to 3.9s and play it. You will hear two tones. Then, lower the end of the fragment until you hear only the first note. For me it was around 3.7s (a hint: text input for selection is very recommended). This is the beginning of the second tone. Proceed with all the following sounds. Take into account, that previously played tones can co-sound, so analyze changes in spectrum for adjacent fragments. My choices for times of the beginnings of tones, found peek frequencies and corresponding notes are: 3.44 293 D4 3.70 348 F4 3.95 441 A4 4.24 587 D5 4.48 662+277 E5 + C#4 4.74 350 F4 5.02 443 A4 5.36 659 E5 5.65 701 F5 6.05 440 A4 6.35 396 G4 6.60 357 F4 Then the guitar player goes to another theme. But this certainly IS “Stairway to heaven”! You can play it like this: E|----------------10---|-12-------------12---|-13------------------| B|-----------10--------|-----------10--------|------10-------------| G|------10-------------|------10-------------|-----------10--------| D|-12------------------|-11------------------|----------------10---| A|---------------------|---------------------|---------------------| E|---------------------|---------------------|---------------------| while the common version you will find on the internet goes like: E|----------5--7--------7-8--------8--2--------2-0---------0- B|-------5--------5----------5-----------3---------1-----1--- G|----5--------------5----------5-----------2---------2------ D|-7-----------6----------5-----------4---------------------- A|----------------------------------------------------------- E|----------------------------------------------------------- Beside the transposition and a small variance in 2nd tact (2 notes reversed), they are the same. Now, it would be very nice, if the computer could do the whole analysis. E.g. analyze the changes in spectrum in time and draw conclusions, when new sounds appear and what tones they are. Do you know such a tool, or should I start writing my own?]]> Recently I started to work more intensively under Windows. Being a Linux convert, I installed MSYS to have bash and other UN*X tools. Although MSYS works nice, I had problems with proper console behavior. Both the Windows default one ( cmd.exe) and Console2 lack some terminal capabilities, so I stick with MSYS’ rxvt. One thing I didn’t like about rxvt was the colors. I have Tango colors set in Gnome Terminal, so I tried to copy the palette. Rxvt allows you to set an ANSI color with -colorX options, but accepts only X11 color names, while Gnome Terminal gives you RGB values. So I wrote a simple script which reads an RGB triple from the input and finds the closest matching colors in the palette. Here it is: use warnings; use strict; use 5.010; use Color::Similarity::HCL qw( distance ); my ($fname) = @ARGV; die "Usage: $0 /etc/X11/rgb.txt" unless defined $fname; my @palette; open my $xcol_fh, '<', $fname or die "Can't open $fname: $!"; while (<$xcol_fh>) { chomp; next if /^\s*!/; my ($r, $g, $b, $name) = $_ =~ /^\s*(\d+)\s+(\d+)\s+(\d+)\s+(.*)/; push @palette, { name => $name, r => $r, g => $g, b => $b }; } close $xcol_fh; while (my $l = <STDIN>) { chomp $l; my @triple = $l =~ /^\s*(\d+)\s+(\d+)\s+(\d+)/; $_->{dist} = distance([ @triple ], [ @$_{qw( r g b )} ]) for (@palette); say "Best matches for (", join(qq(, ), @triple), ") are: "; my @srt = sort { $a->{dist} <=> $b->{dist} } @palette; for my $col (@srt[0..9]) { say "$col->{name} (", join(qq(, ), @$col{qw( r g b )}), "): $col->{dist}"; } } or even simpler, using Convert::Color::X11, as suggested by LeoNerd: use warnings; use strict; use 5.010; use Convert::Color::X11; use Color::Similarity::HCL qw( distance ); my @palette = map { { name => $_, rgb => [ Convert::Color::X11->new($_)->rgb8 ] } } Convert::Color::X11->colors; while (my $l = <STDIN>) { chomp $l; my @triple = $l =~ /^\s*(\d+)\s+(\d+)\s+(\d+)/; $_->{dist} = distance([ @triple ], $_->{rgb}) for (@palette); say "Best matches for (", join(qq(, ), @triple), ") are: "; my @srt = sort { $a->{dist} <=> $b->{dist} } @palette; for my $col (@srt[0..9]) { say "$col->{name} (", join(qq(, ), @{$col->{rgb}}), "): $col->{dist}"; } } For me, rxvt best emulates Tango palette with the following options: -color0 black -color1 red3 -color2 chartreuse4 -color3 gold3 -color4 DodgerBlue4 -color5 plum4 -color6 turquoise4 -color7 honeydew3 -color8 gray34 -color9 firebrick2 -color10 chartreuse2 -color11 khaki -color12 SkyBlue3 -color13 plum3 -color14 cyan3 -color15 gray93]]>. I read Perlmonks irregularly, sometimes I even post something. I give hints, thoughts or bigger chunks of code. But for the first time I got as much as 40% of downvotes on an entry that I think is rather neutral. I understand that some people don’t like to give others ready solutions, they prefer “to teach others to fish than to give them a fish”. But is it really something that bothers them so much to give thumbs down to the ones who don’t mind? No feelings hurt, I’m just curious…]]> Note: Since 5.10.1 is not upstream, there might be changes in packages versions, so downloading patches as described below might not work. I’ll try to update this entry to reflect the correct links, but in case of problems just go to, select the current version of Perl 5.10.1 and download mod_paths.diff manually. After reading encouragingly simple instructions on compiling Perl 5.10.1, I decided to compile Perl — for the first time — on my own. Everything went fine, but after installation it occurred that the order of directories on @INC is “core, vendor, site”, where the expected — after using Debian’s Perl for years — was “site, vendor, core”. Apparently a bug (for most), with no chance to be corrected due to backward compatibility. So, I applied a patch from Debian, which corrects the order. Note, that besides correcting the order, the patch adds /etc/perl at the beginning of @INC and /usr/local/lib/site_perl at the end. This is fine with me, but you can remove it from the code if you wish. mod_paths.diff: Subject: Tweak @INC so that the ordering. --- perl.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 62 insertions(+), 0 deletions(-) diff --git a/perl.c b/perl.c index 94f2b13..5a6744a 100644 --- a/perl.c +++ b/perl.c @@ -4879,9 +4879,14 @@ S_init_perllib(pTHX) incpush(APPLLIB_EXP, TRUE, TRUE, TRUE, TRUE); #endif +#ifdef DEBIAN + /* for configuration where /usr is mounted ro (CPAN::Config, Net::Config) */ + incpush("/etc/perl", FALSE, FALSE, FALSE, FALSE); +#else #ifdef ARCHLIB_EXP incpush(ARCHLIB_EXP, FALSE, FALSE, TRUE, TRUE); #endif +#endif #ifdef MACOS_TRADITIONAL { Stat_t tmpstatbuf; @@ -4906,11 +4911,13 @@ S_init_perllib(pTHX) #ifndef PRIVLIB_EXP # define PRIVLIB_EXP "/usr/local/lib/perl5:/usr/local/lib/perl" #endif +#ifndef DEBIAN #if defined(WIN32) incpush(PRIVLIB_EXP, TRUE, FALSE, TRUE, TRUE); #else incpush(PRIVLIB_EXP, FALSE, FALSE, TRUE, TRUE); #endif +#endif #ifdef SITEARCH_EXP /* sitearch is always relative to sitelib on Windows for @@ -4954,6 +4961,61 @@ S_init_perllib(pTHX) incpush(PERL_VENDORLIB_STEM, FALSE, TRUE, TRUE, TRUE); #endif +#ifdef DEBIAN + incpush(ARCHLIB_EXP, FALSE, FALSE, TRUE, FALSE); + incpush(PRIVLIB_EXP, FALSE, FALSE, TRUE, FALSE); + + /* Non-versioned site directory for local modules and for + compatability with the previous packages' site dirs */ + incpush("/usr/local/lib/site_perl", TRUE, FALSE, FALSE, FALSE); + +#ifdef PERL_INC_VERSION_LIST + { + struct stat s; + + /* add small buffer in case old versions are longer than the + current version */ + char sitearch[sizeof(SITEARCH_EXP)+16] = SITEARCH_EXP; + char sitelib[sizeof(SITELIB_EXP)+16] = SITELIB_EXP; + char const *vers[] = { PERL_INC_VERSION_LIST }; + char const **p; + + char *arch_vers = strrchr(sitearch, '/'); + char *lib_vers = strrchr(sitelib, '/'); + + if (arch_vers && isdigit(*++arch_vers)) + *arch_vers = 0; + else + arch_vers = 0; + + if (lib_vers && isdigit(*++lib_vers)) + *lib_vers = 0; + else + lib_vers = 0; + + /* there is some duplication here as incpush does something + similar internally, but required as sitearch is not a + subdirectory of sitelib */ + for (p = vers; *p; p++) + { + if (arch_vers) + { + strcpy(arch_vers, *p); + if (PerlLIO_stat(sitearch, &s) >= 0 && S_ISDIR(s.st_mode)) + incpush(sitearch, FALSE, FALSE, FALSE, FALSE); + } + + if (lib_vers) + { + strcpy(lib_vers, *p); + if (PerlLIO_stat(sitelib, &s) >= 0 && S_ISDIR(s.st_mode)) + incpush(sitelib, FALSE, FALSE, FALSE, FALSE); + } + } + } +#endif +#endif + #ifdef PERL_OTHERLIBDIRS incpush(PERL_OTHERLIBDIRS, TRUE, TRUE, TRUE, TRUE); #endif -- tg: (daf8b46..) debian/mod_paths (depends on: upstream) So, my modified compilation instructions with Ubuntu’s “taste” look like: wget wget tar xjf perl-5.10.1.tar.bz2 cd perl-5.10.1 patch -b -p1 <../mod_paths.diff perl Configure -de -Dprefix=${HOME}/local -Dusethreads -Duselargefiles -Dccflags=-DDEBIAN -Dcccdlflags=-fPIC -Darchname=i486-linux-gnu -Dpager=/usr/bin/sensible-pager -Uafs -Ud_csh -Ud_ualarm -Uusesfio -Doptimize=-O2 make make test make install]]> Here’s a little test of Perl formatting with SyntaxHighlighter. #!/usr/bin/perl use strict; print "Hello there!"; I’ll write a bit more on my Movable Type config soon, as there are some interesting quirks.]]>. distccis called and performs compilation on the SERVER. ccache’s cache and “returned back”. libglade has the possibility to automatically connect all signal handlers specified in a Glade XML file. However, it is not directly possible in libglademm (C++). Although you find some results when you search the Net for “libglademm autoconnect”, either no solution is given, or the posters mention that “wrapping functions with extern works for them”. No code. So here is a simple example working for me. minimal.cc: #include <gtkmm.h> #include <libglademm.h> #include <iostream> #include <glade/glade-xml.h> extern "C" { void on_button1_clicked() { std::cout << "Clicked" << std::endl; } } int main(int argc, char *argv[]) { Gtk::Main kit(argc, argv); Glib::RefPtr<Gnome::Glade::Xml> refXml = Gnome::Glade::Xml::create("minimal.glade"); glade_xml_signal_autoconnect(refXml->gobj()); Gtk::Window *mainWnd = 0; refXml->get_widget("window1", mainWnd); Gtk::Main::run(*mainWnd); return 0; } minimal.glade: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE glade-interface SYSTEM "glade-2.0.dtd"> <!--Generated with glade3 3.4.5 on Thu Aug 20 15:37:26 2009 --> <glade-interface> <widget class="GtkWindow" id="window1"> <child> <widget class="GtkButton" id="button1"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="label" translatable="yes">click me</property> <property name="response_id">0</property> <signal name="clicked" handler="on_button1_clicked"/> </widget> </child> </widget> </glade-interface> Makefile: CXXFLAGS = $(shell pkg-config gtkmm-2.4 libglademm-2.4 --cflags) LDLIBS = $(shell pkg-config gtkmm-2.4 libglademm-2.4 --libs) LDFLAGS = -export-dynamic minimal: minimal.o clean: rm minimal minimal.o And then make minimal produces something like g++ -D_REENTRANT -I/usr/include/gtkmm-2.4 -I/usr/lib/gtkmm-2.4/include -I/usr/include/glibmm-2.4 -I/usr/lib/glibmm-2.4/include -I/usr/include/giomm-2.4 -I/usr/lib/giomm/include/sigc++-2.0 -I/usr/lib/sigc++-2.0/include -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/lib/gtk-2.0/include -I/usr/include/cairomm-1.0 -I/usr/include/pango-1.0 -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/directfb -I/usr/include/libpng12 -I/usr/include/atk-1.0 -I/usr/include/libglademm-2.4 -I/usr/lib/libglademm-2.4/include -I/usr/include/libglade-2.0 -I/usr/include/libxml2 -c -o minimal.o minimal.cc cc -export-dynamic minimal.o -lglademm-2.4 -lgtkmm-2.4 -lglade-2.0 -lgiomm-2.4 -lgdkmm-2.4 -latkmm-1.6 -lpangomm-1.4 -lcairomm-1.0 -lglibmm-2.4 -lsigc-2.0 -lgtk-x11-2.0 -lxml2 -o minimal Running the minimal program and clicking the “click me” button should produce a message on stdout.
http://tu.wesolek.net/atom.xml
CC-MAIN-2017-17
en
refinedweb
I am trying to make a simple program that allow you to enter the value of an item, then adds on a sales tax of 5.6 percent and outputs in the following format:"Item price of $10.00 with sales tax is 10 dollars and 56 cents". I have made a program so far that calculates the sales tax, adds it to the item price and then gives a resulting floating point number. However, I do not know how to display the result in the format above. Here is what I have written so far. Any help would be great! Code: #include <stdio.h> #define TAXRATE .056 int main(void) { float item, tax, total_cost; printf("Enter the value of your item\n"); scanf(" %f", &item); tax = item * TAXRATE; total_cost = item + tax; printf("Your item of $%.2f with sales tax is %.2f", item, total_cost); getchar(); return 0; }
https://cboard.cprogramming.com/c-programming/139577-price-plus-sales-tax-calculator-printable-thread.html
CC-MAIN-2017-17
en
refinedweb
So I'm attempting to make a hangman game for practice. I've wrote the function to get the random word and a function to pair those characters up with there index. Wondering if the user guess a correct character is there a way to reference the dictionary, and output the character to an empty list at the correct index? Code I Have so far: import random import sys def hangman(): print("Welcome to Hangman\n") answer = input("Would you like to play? yes(y) or no(n)") if answer is "y": print("Generating Random Word...Complete") wordlist = [] with open('sowpods.txt', 'r') as f: line = f.read() line = list(line.splitlines()) word = list(random.choice(line)) Key = matchUp(word) else: sys.exit() def matchUp(word): list = [] for x in word: list.append(word.index(x)) newDict = {} newDict = dict(zip(word, list)) return newDict hangman() So like this? You can skip the whole dictionary thing... a = "_" * len(word) def letter_check(letter): if letter in word: a[word.index(letter)] = letter # possibly print 'a' here else: # function for decrement tries here EDIT: Ooops... I forgot about potential repeated letters... um... how about this: a = "_" * len(word) def letter_check(letter): if letter in word: for x, y in enumerate(word): if letter == y: a[x] = letter else: # function for decrement tries here
https://codedump.io/share/8WSs2BRUIKIm/1/dictionary-keyvalue-removal-and-re-instertion-into-a-list-or-something
CC-MAIN-2017-17
en
refinedweb
Hello, I was wondering if I could get a little guidance regarding an error I keep running into on this assignment. I'm not necessarily looking for any answers as I'd like to complete as much of the assignment myself as possible. However, I've been beating my head against a brick wall on this one error, so if someone can point out why I'm getting this error and what I can do to avoid it, it would be greatly appreciated! The assignment focuses on argumentExceptions and formatExceptions. In my code, I've been able to provide an output for the correct exceptions, but when attempting to output my results, I get a NullReferenceException in line 39. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Exceptional { class WorkerTest { static void Main(string[] args) { Worker[] workerArray = new Worker[5]; int workerID; double hrSal; for (int x = 0; x < workerArray.Length; x++) { try { Console.Write("Input a work identification number: "); workerID = Convert.ToInt32(Console.ReadLine()); Console.Write("Input an hourly Salary for the worker: "); hrSal = Convert.ToDouble(Console.ReadLine()); workerArray[x] = new Worker(workerID, hrSal); } catch (Exception e) { Console.WriteLine(e.Message); } } Console.WriteLine(); for (int i = 0; i < workerArray.Length; i++) { Console.WriteLine("Worker # {0} \nHourly salary {1}\n--------\n ", workerArray[i].workerID, workerArray[i].hrSal.ToString("C")); } } } class Worker { public int workerID; public double hrSal; public Worker(int WorkerID, double HrSal) { workerID = WorkerID; hrSal = HrSal; if (hrSal < 5.0 || hrSal > 55.0) { workerID = 123; hrSal = 7.75; throw new ArgumentException("Unexpected Range."); } } } } I have attached to this post the requirements for the assignment. Edited by HackRabbyt: n/a
https://www.daniweb.com/programming/software-development/threads/387659/c-help-with-exceptions
CC-MAIN-2017-17
en
refinedweb
public class StartInstancesRequest extends AmazonWebServiceRequest implements java.io.Serializable, DryRunSupportedRequest<StartInstancesRequest> StartInstances operation. . AmazonEC2.startInstances(StartInstancesRequest), Serialized Form StartInstancesRequest() public StartInstancesRequest(java.util.List<java.lang.String> instanceIds) instanceIds- One or more instance IDs. public java.util.List<java.lang.String> getInstanceIds() public void setInstanceIds(java.util.Collection<java.lang.String> instanceIds) instanceIds- One or more instance IDs. public StartInstancesRequest withInstanceIds(java.lang.String... instanceIds) Returns a reference to this object so that method calls can be chained together. instanceIds- One or more instance IDs. public StartInstancesRequest withInstanceIds(java.util.Collection<java.lang.String> instanceIds) Returns a reference to this object so that method calls can be chained together. instanceIds- One or more instance IDs. public java.lang.String getAdditionalInfo() public void setAdditionalInfo(java.lang.String additionalInfo) additionalInfo- Reserved. public StartInstancesRequest withAdditionalInfo(java.lang.String additionalInfo) Returns a reference to this object so that method calls can be chained together. additionalInfo- Reserved. public Request<StartInstancesRequest> getDryRunRequest() getDryRunRequestin interface DryRunSupportedRequest<StartInstancesRequest> public java.lang.String toString() toStringin class java.lang.Object Object.toString() public int hashCode() hashCodein class java.lang.Object public boolean equals(java.lang.Object obj) equalsin class java.lang.Object
http://docs.aws.amazon.com/AWSAndroidSDK/latest/javadoc/com/amazonaws/services/ec2/model/StartInstancesRequest.html
CC-MAIN-2017-17
en
refinedweb
Hi,. -V -------------- next part -------------- Index: libavutil/mem.c =================================================================== --- libavutil/mem.c (revision 5808) +++ libavutil/mem.c (working copy) @@ -101,8 +101,9 @@ */ void *av_realloc(void *ptr, unsigned int size) { + void *old_ptr; #ifdef MEMALIGN_HACK - int diff; + long diff; #endif /* let's disallow possible ambiguous cases */ @@ -110,11 +111,54 @@ return NULL; #ifdef MEMALIGN_HACK - //FIXME this isn't aligned correctly, though it probably isn't needed + /* this should now be aligned correctly to 16 */ if(!ptr) return av_malloc(size); diff= ((char*)ptr)[-1]; - return realloc(ptr - diff, size + diff) + diff; -#else +); // realloc, so that ptr points to allocated + if(!ptr) // memory of the desired size, if realloc + { // is possible + ptr = old_ptr; + return NULL; + } + if( !(((long)ptr)&15) ) // if lucky, it's already aligned + return (ptr); + + old_ptr = ptr; // if it isn't aligned + ptr = memalign(16,size); // allocate a new aligned block of memory + if(!ptr) // if possible + { + ptr = old_ptr; + return NULL; + } + memcpy(ptr, old_ptr, size); // and move the memory there + + av_free(old_ptr); + return ptr; +#else /* the memalign hack isn't needed, nor is memalign, therefore malloc is + aligned on 16 (or does not need to be?) and, presumably, + the same applies to realloc */ return realloc(ptr, size); #endif }
http://ffmpeg.org/pipermail/ffmpeg-devel/2006-July/012513.html
CC-MAIN-2017-17
en
refinedweb
Unity Features You can replace the default blue avatar with a personalized avatar using the Oculus Platform package. The base Avatar SDK OvrAvatar.cs class is already set up to load the avatar specifications of users, but we need to call Oculus Platform functions to request valid user IDs. After getting a user ID, we can set the oculusUserID of the avatar accordingly. The timing is important, because this has to happen before the Start() function in OvrAvatar.cs gets called. The example below shows one way of doing this. It defines a new class called PlatformManager. It extends our existing Getting Started sample. When run, it replaces the default blue avatar with the personalized avatar of the user logged on to Oculus Home. - Import the Oculus Platform SDK Unity package into your Unity project. - Specify valid App IDs for both the Oculus Avatars and Oculus Platform plugins: - Click Oculus Avatars > Edit Configuration and paste your Oculus Rift App Id or Gear VR App Id into the field. - Click Oculus Platform > Edit Settings and paste your Oculus Rift App Id or Gear VR app Id into the field. - Create an empty game object named PlatformManager: - Click GameObject > Create Empty. - Rename the game object PlatformManager. - Click Add Component, enter New Script in the search field, and then select New Script. - Name the new script PlatformManager and set Language to C Sharp. - Copy and save the following text as Assets\PlatformManager.cs. using UnityEngine; using Oculus.Avatar; using Oculus.Platform; using Oculus.Platform.Models; using System.Collections; public class PlatformManager : MonoBehaviour { public OvrAvatar myAvatar;) { myAvatar.oculusUserID = message.Data.ID; } } } - In the Unity Editor, select PlatformManager from the Hierarchy. The My Avatar field appears in the Inspector. - Drag LocalAvatar from the Hierarchy to the My Avatar field. Handling Multiple Personalized Avatars If you have a multi-user scene where each avatar has different personalizations, you probably already have the user IDs of all the users in your scene because you had to retrieve that data to invite them in the first place. Set the oculusUserID for each user 's avatar accordingly. If your scene contains multiple avatars of the same person, you can iterate through all the avatar objects in the scene to change all their oculusUserID values. For example, the LocalAvatar and RemoteLoopback sample scenes both contain two avatars of the same player. Here is an example of how to modify the callback of our PlatformManager class to personalize the avatars in the sample scenes: using UnityEngine; using Oculus.Avatar; using Oculus.Platform; using Oculus.Platform.Models; using System.Collections; public class PlatformManager : MonoBehaviour {) { OvrAvatar[] avatars = FindObjectsOfType(typeof(OvrAvatar)) as OvrAvatar[]; foreach (OvrAvatar avatar in avatars) { avatar.oculusUserID = message.Data.ID; } } } } Avatar Prefabs The Avatar Unity package contains two prefabs for Avatars: LocalAvatar and RemoteAvatar. They are located in OvrAvatar >Content > PreFabs. The difference between LocalAvatar and RemoteAvatar is in the driver, the control mechanism behind avatar movements. The LocalAvatar driver is the OvrAvatarDriver script which derives avatar movement from the logged in user's Touch and HMD or. The RemoteAvatar driver is the OvrAvatarRemoteDriver script which gets its avatar movement from the packet recording and playback system. Sample Scenes There are four sample scenes in the Avatar Unity package: Controllers Demonstrates how first-person avatars can be used to enhance the sense of presence for Touch users. GripPoses A helper scene for creating custom grip poses. See Custom Touch Grip Poses. LocalAvatar Demonstrates the capabilities of both first-person and third-person avatars. Does not yet include microphone voice visualization or loading an Avatar Specification using Oculus Platform. RemoteLoopback Demonstrates the avatar packet recording and playback system. See Recording and Playing Back Avatar Pose Updates. Reducing Draw Calls with the Combine Meshes Option Each avatar in your scene requires 11 draw calls per eye per frame (22 total). The Combine Meshes option reduces this to 3 draw calls per eye (6 total) by combining all the mesh parts into a single mesh. This is an important performance gain for Gear VR as most apps typically need to stay within a draw call budget of 50 to 100 draw calls per frame. Without this option, just having 4 avatars in your scene would use most or all of that budget. You should almost always select this option when using avatars. The only drawback to using this option is that you are no longer able to access mesh parts individually, but that is a rare use case. Custom Touch Grip Poses The GripPoses sample lets you change the hand poses by rotating the finger joints until you get the pose you want. You can then save these finger joint positions as a Unity prefab that you can load at a later time. In this example, we will pose the left hand to make it look like a scissors or bunny rabbit gesture. Creating the left hand pose: - Open the Samples > GripPoses > GripPoses scene. - Click Play. - Press E to select the Rotate transform tool. In the Hierarchy window, expand LocalAvatar > hand_left > LeftHandPoseEditHelp > hands_l_hand_world > hands:b_l_hand. Locate all the joints of the fingers you want to adjust. Joint 0 is closest to the palm, subsequent joints are towards the finger tip. To adjust the pinky finger joints for example, expand hands:b_l_pinky0 > hands:b_l_pinky1 > hands:b_l_pinky2 > hands:b_l_pinky3. In the Hierarchy window, select the joint you want to rotate. In the Scene window, click a rotation orbit and drag the joint to the desired angle. - Repeat these two steps until you achieve the desired pose. Saving the left hand pose: - In the Hierarchy window, drag hand_l_hand_world to the Project window. - In the Project window, rename this transform to something descriptive, for example: poseBunnyRabbitLeft. Using the left hand pose: - In the Hierarchy window, select LocalAvatar. - Drag poseBunnyRabbitLeft from the Project window to the Left Hand Custom Pose field in the Inspector Window. Click Play again. You will see that the left hand is now frozen in our custom bunny grip pose. Settings for Rift Stand-alone Builds To make Rift avatars appear in stand-alone executable builds, we need to change two settings: - Add the Avatar shaders to the Always Included Shaders list in your project settings: - Click Edit > Project Settings > Graphics. - Under Always Included Shaders, add +3 to the Size and then press Enter. - Add the following shader elements: AvatarSurfaceShader, AvatarSurfaceShaderPBS, AvatarSurfaceShaderSelfOccluding. - Build as a 64-bit application: - Click File > Build Settings. - Set Architecture to x86_x64. Making Rift Hands Interact with the Environment To allow avatars to interact with objects in their environment, use the OVRGrabber and OVRGrabble components. For a working example, see the AvatarWithGrab sample scene included in the Oculus Unity Sample Framework.
https://developer3.oculus.com/documentation/avatarsdk/latest/concepts/avatars-sdk-unity/
CC-MAIN-2017-17
en
refinedweb
Yes, this can be done using the API. Assuming you have rbtools 5.x installed, the following script should work. Modify for your desired output. It should be noted that this will make a large number of HTTP requests, and is not optimized. Advertising from rbtools.api.client import RBClient client = RBClient("http://<your-rb-server-here>") root = client.get_root() groups = root.get_groups() while True: for group in groups: print group.name users = group.get_users() while True: for user in users: print "\t%s" % user.username try: users = users.get_next() except StopIteration: break try: groups = groups.get_next() except StopIteration: break On Wed, Sep 11, 2013 at 12:46 PM, Steve <seide.al...@gmail.com> wrote: > Better yet, is there a way to do this with the new RBTools 5.x API? > > --Steve > > > On Wednesday, September 11, 2013 8:54:07 AM UTC-7, Steve wrote: >> >> I can list all RB users this way: >> >> >>> from django.contrib.auth.models import User >> >>> for user in User.objects.all(): >> >>> print user >> >> That correctly prints out all of my users. But the same doesn't work for >> Group >> >> >>> from django.contrib.auth.models import Group >> >>> for group in Group.objects.all(): >> >>> print group >> default reviewer >> >> I have about 30 groups defined in RB, but I only get 'default reviewer'. >> What I'd like to do is list out all the groups, or more specifically, for >> each group, list all the user in the group. There's probably a set of >> django functions that will do that, but I'm having trouble finding them. >> Can someone point me in the right direction? >> >> Thanks >> >> --Ste.
https://www.mail-archive.com/reviewboard@googlegroups.com/msg12082.html
CC-MAIN-2017-17
en
refinedweb
Elm Friday: Functions (Part V) 11/20/15 Elm is a functional language, so naturally, functions and function calls are pretty important. We have already seen some functions in the previous episodes. This episode goes into more detail regarding function definition and function application. About This Series This is the fifth. Functions This is a function definition in Elm: multiply a b = a * b The function definition starts with the name of the function, followed by the parameter list. As opposed to C-style languages like Java, JavaScript and the likes, there are no parentheses or commas involved, the parameters are only separated by spaces. The equals character = separates the function name with the parameter list from the function body. The function body can be any Elm expression that produces a value. We can reference the parameters in the function body. Function definitions can use other functions in their body. This is how this looks like: square a = multiply a a Here, we define a square function by using the multiply function we defined earlier. Function calls also don’t need any parentheses, just list the function parameters you want to pass into the function separated by whitespace. You do need parentheses when you have nested expresssions: productOfSquares a b = multiply (square a) (square b) You can also declare anonymous functions (also known as lambda expressions) on the fly: incrementAll list = List.map (\ n -> n + 1) list This thing wrapped in (\ and ) is an anonymous function that takes one parameter and returns the parameter, incremented by one. This anonymous function is then applied to all elements in a list by using List.map. Actually, we could have written this shorter. The following is equivalent: incrementAll2 = List.map (\ c -> c + 1) Why is that the same? Because Elm supports something called currying. incrementAll defines a function that takes a list and produces another list. incrementAll2 also defines a function, but it is a function that takes no arguments and returns another function. So when we write incrementAll2 [1, 2, 3] Elm first evaluates incrementAll2, gets a function and then procedes to put the remaining arguments ( [1, 2, 3] in this case) into this function. The result is the same. If you find it hard to wrap your head around currying, don’t worry about it too much for now. You can always resort to write a more verbose version of your function without currying and come back to this concept later. As a rule of thumb, if the last element in the parameter list in the function declaration is simply repeated at the end of the function body (like list in this case), you can probably omit both. Let’s wrap this up. Here is a complete, working Elm program that uses the functions we defined above: import Html multiply a b = a * b square a = multiply a a productOfSquares a b = multiply (square a) (square b) incrementAll list = List.map (\ c -> c + 1) list incrementAll2 = List.map (\ c -> c + 1)]))) ] What happens in these lengthy expressions in the main function? Well, the functions we defined return mostly numbers (or lists, in the case of incrementAll). So we need to convert their results into strings via the toString function (which comes from the Basics package and is imported by default). We then use ++ to append the resulting string to a string literal ( "3 × 5 = ", for example) and use Html.text to convert the string into an HTML text node. Fancy Function Application Whoa, did you see what we did there to bring the result of one of our functions to the screen? Let’s take a look at Html.text ("3 × 5 = " ++ (toString (multiply 3 5))) for a moment. That’s a lot of parentheses right there. Elm has two operators, |> and <|, to write expressions like that in a more elegant fashion. |>: Take the expression to the left of the operator and put it into the function on the right hand side. <|: Take the expression to the right of the operator and put it into the function on the left hand side. Here is the main function of the previous program, rewritten with the new operators:]) ] If you like to go a bit crazy with this, you can even rewrite Html.text <| "3 × 5 = " ++ (toString <| multiply 3 5) as Html.text <| (++) "3 × 5 = " <| toString <| multiply 3 5 or square 4 |> toString |> (++) "4² = " |> Html.text Here we used the infix operator ++ as a non-infix function to be able to apply it with <| and |>. We also used a bit of currying again: (++) actually takes two arguments (the two strings that are to be concatenated). The expression (++) "3 × 5 = " is a partial function application, that is, we provide the first of the two arguments to yield a new function that takes only one argument, prepending "3 × 5 = " to everything that is passed to it. To read code like this with ease, just imagine this line as the ASCII art represenation of a data pipeline. In |> style pipelines, data flows from left to right, in <| style pipelines, data flows from right to left. So, for example, to decipher a line like Html.text <| (++) "3 × 5 = " <| toString <| multiply 3 5, you start at the end ( multiply 3 5), push this into the toString function to convert the number into a string, the resulting string the goes into the append function ( (++)) together with the string literal and finally the concatenated string goes into the Html.text function. This concludes the fifth episode of this blog post series on Elm. Make sure to check out the next episode, where we will take a look at type annotations. Comment
https://blog.codecentric.de/en/2015/11/elm-friday-part-05-functions/
CC-MAIN-2017-17
en
refinedweb
I'm trying to write simpler code for adding unique elements into a python list. I have a dataset that contains a list of dictionaries, and I'm trying to iterate through a list inside the dictionary Why doesn't this work? It's adding all the items, including the duplicates, instead of adding unique items. unique_items = [] unique_items = [item for d in data for item in d['items'] if item not in unique_items] unique_items = [] for d in data: for item in d['items']: if (item not in unique_items): unique_items.append(item) [{"items":["apple", "banana"]}, {"items":["banana", "strawberry"]}, {"items":["blueberry", "kiwi", "apple"]}] The easiest way is to use OrderedDict: from collections import OrderedDict from itertools import chain l = [{"items":["apple", "banana"]}, {"items":["banana", "strawberry"]}, {"items":["blueberry", "kiwi", "apple"]}] OrderedDict.fromkeys(chain.from_iterable(d['items'] for d in l)).keys() # ['apple', 'banana', 'strawberry', 'blueberry', 'kiwi'] If you want alternatives check OrderedSet recipe and package based on it.
https://codedump.io/share/ahd6rIDUNSRE/1/python-list-comprehension-adding-unique-elements-into-list
CC-MAIN-2017-17
en
refinedweb
docker runEstimated reading time: 29 minutes Edge only: This is the CLI reference for Docker CE Edge versions. Some of these options may not be available to Docker CE stable or Docker EE. You can view the stable version of this CLI reference or learn about Docker CE Edge. Description Run a command in a new container Usage docker run [OPTIONS] IMAGE [COMMAND] [ARG...] Options Parent command Extended description The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command. That is, docker run is equivalent to the API /containers/create then /containers/(id)/start. A stopped container can be restarted with all its previous changes intact using docker start. See docker ps -a to view a list of all containers. The docker run command can be used in combination with docker commit to change the command that a container runs. There is additional detailed information about docker run in the Docker run reference. For information on connecting a container to a network, see the “Docker network overview”. Examples Assign name and allocate pseudo-TTY (–name, -it) $ docker run --name test -it debian root@d6c0fe130dba:/# exit 13 $ echo $? 13 $ docker ps -a | grep test d6c0fe130dba debian:7 "/bin/bash" 26 seconds ago Exited (13) 17 seconds ago test This example runs a container named test using the debian:latest image. The -it instructs Docker to allocate a pseudo-TTY connected to the container’s stdin; creating an interactive bash shell in the container. In the example, the bash shell is quit by entering exit 13. This exit code is passed on to the caller of docker run, and is recorded in the test container’s metadata. Capture container ID (–cidfile) $ docker run --cidfile /tmp/docker_test.cid ubuntu echo "test" This will create a container and print test to the console. The cidfile flag makes Docker attempt to create a new file and write the container ID to it. If the file exists already, Docker will return an error. Docker will close this file when docker run exits. Full container capabilities (–privileged) $ docker run -t -i --rm ubuntu bash root@bc338942ef20:/# mount -t tmpfs none /mnt mount: permission denied This will not work, because by default, most potentially dangerous kernel capabilities are dropped; including cap_sys_admin (which is required to mount filesystems). However, the --privileged flag will allow it to run: $ docker run -t -i --privileged ubuntu bash root@50e3f57e16e6:/# mount -t tmpfs none /mnt root@50e3f57e16e6:/# df -h Filesystem Size Used Avail Use% Mounted on none 1.9G 0 1.9G 0% /mnt The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker. Set working directory (-w) $ docker run -w /path/to/dir/ -i -t ubuntu pwd The -w lets the command being executed inside directory given, here /path/to/dir/. If the path does not exist it is created inside the container. Set storage driver options per container $ docker run -it --storage-opt size=120G fedora /bin/bash This (size) will allow to set the container rootfs size to 120G at creation time. This option is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers.. Mount tmpfs (–tmpfs) $ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image The --tmpfs flag mounts an empty tmpfs into the container with the rw, noexec, nosuid, size=65536k options. Mount volume (-v, –read-only) $ docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd The -v flag mounts the current working directory into the container. The -w lets the command being executed inside the current working directory, by changing into the directory to the value returned by pwd. So this combination executes the command using the container, but inside the current working directory. $ docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash When the host directory of a bind-mounted volume doesn’t exist, Docker will automatically create this directory on the host for you. In the example above, Docker will create the /doesnt/exist folder before starting your container. $ docker run --read-only -v /icanwrite busybox touch /icanwrite/here Volumes can be used in combination with --read-only to control where a container writes files. The --read-only flag mounts the container’s root filesystem as read only prohibiting writes to locations other than the specified volumes for the container. $ docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v /path/to/static-docker-binary:/usr/bin/docker busybox sh By bind-mounting the docker unix socket and statically linked docker binary (refer to get the linux binary), you give the container the full access to create and manipulate the host’s Docker daemon. On Windows, the paths must be specified using Windows-style semantics. PS C:\> docker run -v c:\foo:c:\dest microsoft/nanoserver cmd /s /c type c:\dest\somefile.txt Contents of file PS C:\> docker run -v c:\foo:d: microsoft/nanoserver cmd /s /c type d:\somefile.txt Contents of file The following examples will fail when using Windows-based containers, as the destination of a volume or bind-mount inside the container must be one of: a non-existing or empty directory; or a drive other than C:. Further, the source of a bind mount must be a local directory, not a file. net use z: \\remotemachine\share docker run -v z:\foo:c:\dest ... docker run -v \\uncpath\to\directory:c:\dest ... docker run -v c:\foo\somefile.txt:c:\dest ... docker run -v c:\foo:c: ... docker run -v c:\foo:c:\existing-directory-with-contents ... For in-depth information about volumes, refer to manage data in containers Publish or expose port (-p, –expose) $ without publishing the port to the host system’s interfaces. Set environment variables (-e, –env, –env-file) $ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash This sets simple (non-array) environmental variables in the container. For illustration all three flags are shown here. Where -e, --env take an environment variable and value, or if no = is provided, then that variable’s current value, set via export, _TEST_BAR=FOO TEST_APP_42=magic helloWorld=true 123qwe=bar org.spring.config=something # pass through this variable from the caller TEST_PASSTHROUGH $ TEST_PASSTHROUGH=howdy=howdy HOME=/root 123qwe=bar org.spring.config=something $= HOME=/root 123qwe=bar org.spring.config=something Set metadata on container (-l, –label, –label-file) A label is a key=value pair that applies metadata to a container. To label a container with two labels: $ docker run -l my-label --label com.example.foo=bar ubuntu bash The my-label key doesn’t specify a value so the label defaults to an empty string( ""). To add multiple labels, repeat the label flag ( -l or --label). The key=value must be unique to avoid overwriting the label value. If you specify labels with identical keys but different values, each subsequent value overwrites the previous. Docker uses the last key=value you supply. Use the --label-file flag to load multiple labels from a file. Delimit each label in the file with an EOL mark. The example below loads labels from a labels file in the current directory: $ docker run --label-file ./labels ubuntu bash The label-file format is similar to the format for loading environment variables. (Unlike environment variables, labels are not visible to processes running inside a container.) The following example illustrates a label-file format: com.example.label1="a label" # this is a comment com.example.label2=another\ label com.example.label3 You can load multiple label-files by supplying multiple --label-file flags. For additional information on working with labels, see Labels - custom metadata in Docker in the Docker User Guide. Connect a container to a network (–network) When you start a container use the --network flag to connect it to a network. This adds the busybox container to the my-net network. $ docker run -itd --network=my-net busybox You can also choose the IP addresses for the container with --ip and --ip6 flags when you start the container on a user-defined network. $ docker run -itd --network=my-net --ip=10.10.9.75 busybox If you want to add a running container to a network use the docker network connect subcommand. You can connect multiple containers to the same network. Once connected, the containers can communicate easily need only another container’s IP address or name. For overlay networks or custom plugins that support multi-host connectivity, containers connected to the same multi-host network but launched from different Engines can also communicate in this way. Note: Service discovery is unavailable on the default bridge network. Containers can communicate via their IP addresses by default. To communicate by name, they must be linked. You can disconnect a container from a network using the docker network disconnect command. Mount volumes from container (–volumes-from) $ docker run --volumes-from 777f7dc92da7 --volumes-from ba8c0c54f0f2:ro -i -t ubuntu pwd The --volumes-from flag mounts all the defined volumes from the referenced containers. Containers can be specified by repetitions of the --volumes-from argument. The container ID may be optionally suffixed with :ro or :rw to mount the volumes in read-only or read-write mode, respectively. By default, the volumes are mounted in the same mode (read write or read only) as the reference container. Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS. To change the label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume. Attach to STDIN/STDOUT/STDERR (-a) The -a flag tells docker run to bind to the container’s STDIN, STDOUT or STDERR. This makes it possible to manipulate the output and input as needed. $ echo "test" | docker run -i -a stdin ubuntu cat - This pipes data into a container and prints the container’s ID by attaching only to the container’s STDIN. $ docker run -a stderr ubuntu echo test This isn’t going to print anything unless there’s an error because we’ve only attached to the STDERR of the container. The container’s logs still store what’s been written to STDERR and STDOUT. $ cat somefile | docker run -i -a stdin mybuilder dobuild This is how piping a file into a container could be done for a build. The container’s ID will be printed after the build is done and the build logs could be retrieved using docker logs. This is useful if you need to pipe a file or something else into a container and retrieve the container’s ID once the container has finished running. Add host device to container (–device) $ docker run --device=/dev/sdc:/dev/xvdc \ --device=/dev/sdd --device=/dev/zero:/dev/nulo \ -i -t \ ubuntu ls -l /dev/{xvdc,sdd,nulo} brw-rw---- 1 root disk 8, 2 Feb 9 16:05 /dev/xvdc brw-rw---- 1 root disk 8, 3 Feb 9 16:05 /dev/sdd crw-rw-rw- 1 root root 1, 5 Feb 9 16:05 /dev/nulo It is often necessary to directly expose devices to a container. The --device option enables that. For example, a specific block storage device or loop device or audio device can be added to an otherwise unprivileged container (without the --privileged flag) and have the application directly access it. By default, the container will be able to read, write and mknod these devices. This can be overridden using a third :rwm set of options to each --device flag: $ docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc Command (m for help): q $ docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc You will not be able to write the partition table. Command (m for help): q $ docker run --device=/dev/sda:/dev/xvdc:rw -. Restart policies (–restart) Use Docker’s --restart to specify a container’s restart policy. A restart policy controls whether the Docker daemon restarts a container after exit. Docker supports the following restart policies: $ docker run --restart=always redis This will run the redis container with a restart policy of always so that if the container exits, Docker will restart it. More detailed information on restart policies can be found in the Restart Policies (–restart) section of the Docker run reference page. Add entries to container hosts file (–add-host) You can add other hosts into a container’s /etc/hosts file by using one or more --add-host flags. This example adds a static address for a host named docker: $ docker run --add-host=docker:10.180.0.1 --rm -it debian root@f38c87f2a42d:/# ping docker PING docker (10.180.0.1): 48 data bytes 56 bytes from 10.180.0.1: icmp_seq=0 ttl=254 time=7.600 ms 56 bytes from 10.180.0.1: icmp_seq=1 ttl=254 time=30.705 ms ^C--- docker ping statistics --- 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max/stddev = 7.600/19.152/30.705/11.553 ms Sometimes you need to connect to the Docker host from within your container. To enable this, pass the Docker host’s IP address to the container using the --add-host flag. To find the host’s address, use the ip addr show command. The flags you pass to ip addr show depend on whether you are using IPv4 or IPv6 networking in your containers. Use the following flags for IPv4 address retrieval for a network device named eth0: $ HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut -d / -f 1` $ docker run --add-host=docker:${HOSTIP} --rm -it debian For IPv6 use the -6 flag instead of the -4 flag. For other network devices, replace eth0 with the correct device name (for example docker0 for the bridge device). Set ulimits in container (–ulimit) Since setting ulimit settings in a container requires extra privileges not available in the default container, you can set these using the --ulimit flag. --ulimit is specified with a soft and hard limit as such: <type>=<soft limit>[:<hard limit>], for example: $ docker run --ulimit nofile=1024:1024 --rm debian sh -c ` The values are sent to the appropriate syscall as they are set. Docker doesn’t perform any byte conversion. Take this into account when setting the values. For nproc usage Be careful setting nproc with the ulimit flag as nproc is designed by Linux to set the maximum number of processes available to a user, not to a container. For example, start four containers with daemon user: $ docker run -d -u daemon --ulimit nproc=3 busybox top $ docker run -d -u daemon --ulimit nproc=3 busybox top $ docker run -d -u daemon --ulimit nproc=3 busybox top $ docker run -d -u daemon --ulimit nproc=3 busybox top The 4th container fails and reports “[8] System error: resource temporarily unavailable” error. This fails because the caller set nproc=3 resulting in the first three containers using up the three processes quota set for the daemon user. Stop container with signal (–stop-signal) The --stop-signal flag. Optional security options (–security-opt) On Windows, this flag can be used to specify the credentialspec option. The credentialspec must be in the format or registry://keyname. Stop container with timeout (–stop-timeout) The --stop-timeout flag sets the timeout (in seconds) that a pre-defined (see --stop-signal) system call signal that will be sent to the container to exit. After timeout elapses the container will be killed with SIGKILL. Specify isolation technology for container (–isolation) This option is useful in situations where you are running Docker containers on Windows. The --isolation <value> option sets a container’s isolation technology. On Linux, the only supported is the default option which uses Linux namespaces. These two commands are equivalent on Linux: $ docker run -d busybox top $ docker run -d --isolation default busybox top On Windows, --isolation can take one of these values: The default isolation on Windows server operating systems is process. The default (and only supported) isolation on Windows client operating systems is hyperv. An attempt to start a container on a client operating system with --isolation process will fail. On Windows server, assuming the default configuration, these commands are equivalent and result in process isolation: PS C:\> docker run -d microsoft/nanoserver powershell echo process PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo process PS C:\> docker run -d --isolation process microsoft/nanoserver powershell echo process If you have set the --exec-opt isolation=hyperv option on the Docker daemon, or are running against a Windows client-based daemon, these commands are equivalent and result in hyperv isolation: PS C:\> docker run -d microsoft/nanoserver powershell echo hyperv PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo hyperv PS C:\> docker run -d --isolation hyperv microsoft/nanoserver powershell echo hyperv Configure namespaced kernel parameters (sysctls) at runtime The --sysctl sets namespaced kernel parameters (sysctls) in the container. For example, to turn on IP forwarding in the containers network namespace, run this command: $ docker run --sysctl net.ipv4.ip_forward=1 someimage Note: Not all sysctls are namespaced. Docker does not support changing sysctls inside of a container that also modify the host system. As the kernel evolves we expect to see more sysctls become namespaced. Currently supported sysctls IPC Namespace: kernel.msgmax, kernel.msgmnb, kernel.msgmni, kernel.sem, kernel.shmall, kernel.shmmax, kernel.shmmni, kernel.shm_rmid_forced Sysctls beginning with fs.mqueue.* If you use the --ipc=hostoption these sysctls will not be allowed. Network Namespace: Sysctls beginning with net.* If you use the --network=hostoption using these sysctls will not be allowed.
https://docs.docker.com/edge/engine/reference/commandline/run/
CC-MAIN-2017-17
en
refinedweb
I've constructed a bridge in a client. I've constructed a bridge in a constructor. So tonight, I construct a bridge in a factory. I must investigate the factory and abstract factory in Dart. Mostly because I am unsure how much value the standard structure of the pattern is needed in a language with factory constructors baked right in. Since they are baked in, that's what I opt for tonight. The abstraction in my bridge pattern example continues to be a Messenger: abstract class Messenger { Communication comm; Messenger(this.comm); void updateStatus(); }The purpose of classes implementing this interface is to simply facilitate status updates. There are a number of communication methods bridged from this abstraction: HTTP, websockets, etc. In last night's pass at the bridge pattern, the specific implementation was chosen in the abstraction's constructor (i.e. Messenger) and subsequently changed when certain thresholds were reached. Tonight, I move the responsibility of choosing the implementation out of the abstraction and into a factory constructor. Because it make sense and is so darn easy, I declare a factory constructor in the Communicationimplementation: import 'dart:math' show Random; // Implementor abstract class Communication { factory Communication() => new Random().nextBool() ? new WebSocketCommunication() : new HttpCommunication(); void send(String message); }I continue to find it strange that an abstract class can declare a constructor—even if it is a factory constructor. Each time, I must remind myself that it works for just this kind of case: choosing a subclass implementation. In this case, I randomly choose between the WebSocketCommunicationand HttpCommunicationconcrete implementations. No changes are required to either of the concrete implementations. They both continue to establish their connections to the server and define the appropriate code to send messages. The Messengerabstraction and the subclass refined for use in web clients, WebMessengerdo need to change slightly. The subclass no longer needs to concern itself with the Communicationimplementation. The Messengercan do that. Instead, the subclass can worry only about web page related things, like the text input element from which it will obtain messages to send back to the server: class WebMessenger extends Messenger { InputElement _messageElement; WebMessenger(this._messageElement); // ... }The Messengerabstraction needs to do a little more work now, but not much. In addition to declaring the Communicationinstance variable, it now assigns it in the constructor: abstract class Messenger { Communication comm; Messenger() : comm = new Communication(); void updateStatus(); }On a sidenote, I remain skeptical of initializer lists in constructors like that. I am trying to use them to see if they grow on me, but I think it just makes things noisy (especially if a constructor body is added to the mix). That pretty much does it. Through several page reloads, I find that the client does indeed randomly switch between implementations: $ ./bin/server.dart [WebSocket] message=asdf [WebSocket] message=asdf [WebSocket] message=asdf [HTTP] message=asdf [WebSocket] message=asdf [HTTP] message=asdfThat was fairly straight-forward and low on the surprise scale. I note before concluding that none of this precludes me from setting the Communciationimplementation in the client code. For example: main() { var message = new WebMessenger(query('#message')) ..comm = new HttpCommunication(); // ... }I also note that making more purposeful choices than the current random implementation selection is straight-forward. I can simply choose based on VM info, browser info, or any other bit of information on which one might opt between HTTP and websocket communication. Any or all of those choices would be right at home in the factory constructor. Or perhaps someday in an abstract factory constructor. Code for the bridge client and the backend server are on the Design Patterns in Dart public repository. Day #82
https://japhr.blogspot.com/2016/02/factory-bridges-in-dart.html
CC-MAIN-2017-17
en
refinedweb
... [more] ... . IBM Cloud Provisioning and Management for z/OS: CICS Scenario z Systems Software Redpaper, published 17 Nov 2016 In this IBM® Redbooks® publication, we show you a cloud services scenario on IBM z/OS®. In the scenario, we show you how to perform the following tasks: - Subscribe to a service to provision an IBM CICS® system. - Assign resources to the CICS system. - Use a template to tailor the CICS service - Publish the service in our Marketplace - Use that service to create a CICS region and start it. - Deprovision the CICS region once it is no longer required. We explain the basic terms that are associated with the cloud services and which roles play a part in ... ] IBM Cloud Provisioning and Management for z/OS: An Introduction IBM Z Redpaper,] IBM Spectrum Scale in an OpenStack Environment Software Defined Storage Redpaper, published 9 Jun 2016 OpenStack is open source software that is widely used as a base with which to build cloud and infrastructure as a service solutions. OpenStack often is deployed on commodity hardware and used to virtualize various parts of the infrastructure (compute, storage, and network) to ease the sharing of the infrastructure across applications, use cases, or workloads. IBM® Spectrum Scale is software that is used to manage storage and provide massive scale, a global namespace, and high-performance data access with many enterprise features. IBM Spectrum™ Scale is used in clustered environments and ... ] Cloud Security Guidelines for IBM Power Systems Power Systems Redbooks, published 1 Mar 2016, last updated 9 Mar 2016 This IBM® Redbooks® publication is a comprehensive guide that covers cloud security considerations for IBM Power Systems™. The first objectives of this book are to examine how Power Systems can fit into the current and developing cloud computing landscape and to outline the proven Cloud Computing Reference Architecture (CCRA) that IBM employs in building private and hybrid cloud environments. The book then looks more closely at the underlying technology and hones in on the security aspects for the following subsystems: - IBM Hardware Management Console - IBM PowerVM - ... [more] Hybrid Cloud]] A Practical Approach to Cloud IaaS with IBM SoftLayer: Presentations Guide Cloud Redbooks, published 17 Feb 2016 ... [more] Creating Hybrid Clouds with IBM Bluemix Integration Services Application Integration Solution Guide, published 17 Nov 2015, last updated 20 Jan 2016. Implementing IBM Spectrum Scale Software Defined Storage Redpaper, published 11 Nov 2015, last updated 2 Dec 2015 ... [more]
http://www.redbooks.ibm.com/Redbooks.nsf/domains/cloud?Open&start=21&count=20
CC-MAIN-2018-47
en
refinedweb
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of CD1 status. Section: 15.5.1.2 [headers] Status: CD1 Submitter: Bill Plauger Opened: 2004-01-30 Last modified: 2017-02-03 Priority: Not Prioritized View all other issues in [headers]. View all issues with CD1 status. Discussion: The C++ Standard effectively requires that the traditional C headers (of the form <xxx.h>) be defined in terms of the newer C++ headers (of the form <cxxx>). Clauses 17.4.1.2/4 and D.5 combine to require that: The rules were left in this form despited repeated and heated objections from several compiler vendors. The C headers are often beyond the direct control of C++ implementors. In some organizations, it's all they can do to get a few #ifdef __cplusplus tests added. Third-party library vendors can perhaps wrap the C headers. But neither of these approaches supports the drastic restructuring required by the C++ Standard. As a result, it is still widespread practice to ignore this conformance requirement, nearly seven years after the committee last debated this topic. Instead, what is often implemented is: The practical benefit for implementors with the second approach is that they can use existing C library headers, as they are pretty much obliged to do. The practical cost for programmers facing a mix of implementations is that they have to assume weaker rules: There also exists the possibility of subtle differences due to Koenig lookup, but there are so few non-builtin types defined in the C headers that I've yet to see an example of any real problems in this area. It is worth observing that the rate at which programmers fall afoul of these differences has remained small, at least as measured by newsgroup postings and our own bug reports. (By an overwhelming margin, the commonest problem is still that programmers include <string> and can't understand why the typename string isn't defined -- this a decade after the committee invented namespace std, nominally for the benefit of all programmers.) We should accept the fact that we made a serious mistake and rectify it, however belatedly, by explicitly allowing either of the two schemes for declaring C names in headers. [Sydney: This issue has been debated many times, and will certainly have to be discussed in full committee before any action can be taken. However, the preliminary sentiment of the LWG was in favor of the change. (6 yes, 0 no, 2 abstain) Robert Klarer suggests that we might also want to undeprecate the C-style .h headers.] Proposed resolution: Add to 15.5.1.2 [headers], para. 4: Except as noted in clauses 18 through 27 and Annex D,. Change D.6 [depr.c.headers], para. 2-3: -2- Every C header, each of which has a name of the form name.h, behaves as if each name placed in the Standard library namespace by the corresponding cname header is alsoplaced within the namespace scope of the namespace std and is followed by an explicit using-declaration (9.8 [namespace.udecl]). -3- [Example: The header <cstdlib> provides its declarations and definitions within the namespace std. The header <stdlib.h> makes these available also inthe global namespace, much as in the C Standard. -- end example]
https://cplusplus.github.io/LWG/issue456
CC-MAIN-2018-47
en
refinedweb
While answering a forum post on a function that processed a list I got thinking about how it would run in a real-life situation. Rather than a list being passed it would probably be a file. This almost worked except the line returns were passed in and I needed those stripped out. I was hoping to find an elegant solution and I did, a generator. If you have not used generators before this wiki post is a good starting point. If you have used list comprehension then it is exactly the same just with different brackets. I’ll use collections.Counter() in place of the function to demonstrate; for those using a Python version earlier than 2.7 you will to create your own function. First an example with a list which acts as the starting point: def basicCounter ( mylist ): # Python 2.7+ users could use collections.Counter instead retdic = dict() for item in mylist: retdic[item] = retdic.get(item,0) + 1 return retdic mylist = ['1','2','2','3','3','3'] counted = basicCounter(mylist) print counted Now let create a generator to process the lines in a file to remove the whitespace and line returns. The strip() function does this for a string, we just need to do this for every line in the file. This gives us our generator; (line.strip() for line in file). Add a bit of code for opening the file and we have our version of the above which uses the contents of a file for the input instead. # basicCounter as before # Python 2.5 users need the following line # from __future__ import with_statement with open(r'C:\path\to\file.txt') as myfile: counted = basicCounter(line.strip() for line in myfile) print counted There is nothing to stop you making the processing much more complex; simply create your function and replace line.strip() with yourfunction(line). You can also make the processing conditional by adding an if clause at the end.
https://quackajack.wordpress.com/2012/12/
CC-MAIN-2018-47
en
refinedweb
I’m runnig Rails 1.1.2 and I’m trying to use a namespace with my model class such as Namespace::Test so I run: script/generate model namespace/test create app/models/namespace create test/unit/namespace create test/fixtures/namespace create app/models/namespace/test.rb create test/unit/namespace/test_test.rb create test/fixtures/namespace/tests.yml create db/migrate create db/migrate/001_create_namespace_tests.rb However, the unit test has the line “fixtures :tests” which causes the test to fail because it looks for the fixture file in test/fixtures/tests.yml instead of test/fixtures/namespace.yml. How do I tell the unit test to look for fixtures in it’s own namespace? Should I file this as a bug or am I doing something wrong? Thanks, Todd
https://www.ruby-forum.com/t/problems-generating-a-model-with-a-namespace/61002
CC-MAIN-2018-47
en
refinedweb
MemSQL Start[c]UP 3.0 Round 2 Editorial Pardon me if I am wrong, but can 2C be solved using some sort of a ternary search? Given x slices of type 1, and y slices of type 2, is there an optimal greedy way of distributing them? If yes, I believe ternary search can be applied (I may be wrong, of course). Any help will be appreciated, thank you in advance! It is not needed, you know the number of pizzas of type A is (sum of pizza slices eaten where A greater than B / slices per pizza) plus or minus one, or the (sum of pizza slices eaten where A greater than or equal to B / slices per pizza) plus or minus one. Yes, Ternary Search can be applied. The optimal strategy of distributing slices among contestants is same as mentioned in the editorial.You can see my submission using Ternary Search here.Although it is not needed as pointed out by farmersrice So is 865C solved in ? Can someone elaborate Div2 E? Um, this is actually the solution kfoozminus explained to me: Let's maintain two multisets, "unused" and "already sold". After taking input "x", I tried to sell this pairing with a previous element(which I'm gonna buy). Took the minimum valued element among "unused" + "already sold" stuffs, let's call it "y". If(y < x) { if(y came from unused set) buy y and sell x. else if(y came from "already sold" set) put y in unused set and sell x. } else put "x" in unused set The solution actually comes from the idea — If I see that there's an "already sold" stuff smaller than x, it'd be optimal to sell "x" and not use y, rather than use y and buy x. Could you explain some details about DIV1-C? which takes exactly K seconds to complete which takes exactly K seconds to complete To complete to whole game, or game suffix? If the optimal strategy is to immediately switch to the deterministic game, then the answer is greater than K. Otherwise it's less than K If the optimal strategy is to immediately switch to the deterministic game, then the answer is greater than K. Otherwise it's less than K Could you clarify it? Binary search should be made by K or by answer? And is there a code, which corresponds the idea from editorial? You assume that the answer is K and compute f(K) (using dynamic programming). The answer is such x that x = f(x). Since f(x) is monotone, binary search can find an answer. Another view is as follows: assume I tell you that the answer is K. How do you verify that? Take a look at my submission. Keep in mind that the upper bound is very coarse and the number of iterations in the binary search is unnecessarily large. if (i+S[j] <= R) { play += (S[j] + EXP[j+1][i+S[j]]) * (1-P[j]); } else { play += (S[j] + ans) * (1-P[j]); } Can you please explain the "else" part? I was not able to figure out why it would be S[j]+ans in this case If i + S[j] > R, it means that when you're unlucky and the current level taking requires the longer timeslot causes you to be over the limit, you have to restart the game after finishing this level. So in this case isn't play += ans * (1-P[j]) sufficient? No. You need to spend the S[j] seconds playing, and only after that you can restart. Quoting the statement: After completing a level, you may decide to either continue the game and play the next level, or reset the game and start again from the first level. After completing a level, you may decide to either continue the game and play the next level, or reset the game and start again from the first level. Thanks for clarifying :D What is the semantics of K and f(K) ? Did you mean, that the answer is such x that R = f(x)? And could you clarify about "upper bound coarse"? I thought, that the iterations count difference between upper_bound and binary_search is very low, at most 1-2 iterations. you calculate the answer using dp, but anytime you start from the beginning you use K instead of dp[starting state] which is not yet calculated. f(K) is the answer for dp in starting state Now first thing to understand that if K is answer, then you will get K as answer to your dp (that's easy) Now understand that if K is more then answer you'll get more then answer (because all number can just increase) but less then K (thats harder) So now function f(k) — k has one zero and you search for it with binary search How can we show that if K is more than answer, f(K) will never be equal to K? Got the idea, thanks! I am confused by the upper bound in this problem. How would the upper bound of the binary search be calculated? The worst case should be that R is the sum of Fi's, Fi's are 99, Si's are 100 and Pi's are 80. So one has to pass every stage quickly to make it in R seconds. If I am right, how does this lead to the upper bound? Oh, is that: Yup, it's around that. But in the contest I just went for somewhat more generous bound. You don't want to fail a task just because of this, and time limit wasn't an issue. I don't understand why the upper bound of expection is not R but . Can you explain? In the best case, you have to pass all 50 stages fast — any slow move will result in a replay. You pass the whole game in one shot with probability 0.8^50. So, the number of game required for such one shot exists is 1 over 0.8^50. For each such pass you may have to play at worst 50 stages each with no more than 100 units of time. (This is a rough upper bound, so it is not that tight, but also enough for a binary search) Finally, i've got the idea of the solution, thanks! Also, i've understood your notice about too big upper bound, it wasn't about std::upper_bound function :) I still have the only question about optimal progress reset strategy. If I've got you solution right, the optimal strategy is to reset a progress only if all levels completion with slow completion of the current level becomes greater than R. Why it is right? It doesn't take in account the relation between Fi/Si/Pi at the beginning and at the ending. It seems, that in one situation it could be more profitable to reset a progress after slow completion of the first level, and in another situation it could be profitable to reset progress nearby before final level, but don't see it in DP calculation. Let's call a strategy a decision in which cells of the dp we will hit restart. Then for a fixed strategy P the function f_P produced by the dp is linear. Now f(x) = min f_P(x). So f in convex upward. Also f(0) > 0. Thus f(x) = x has at most one root. It is also possible to use Newton's method in problem C to make less iterations. My solution converges in 4 iterations. I believe the Pi ≥ 80 restriction is also unnecessary for this solution. 30890918 Can anybody elaborate Div2 C? EDIT: Missed part of statement. Can Div1 C be solved without bin search? Let's calculate p[i][t] — probability to finish game in at least R seconds if we are at the i-th level and t seconds have passed. Now, I think, it is optimal to go to the start every time when p[i][t] < p[0][0] (Is this correct?). Now let dp[i][t] be probability that we are at the i-th level with t seconds passed. Let's say answer is S. Then formula for S will be something like this S = dp[n][t1] * t1 + dp[n][t2] * t2 + ... + dp[i1][T1] * (S + T1) + dp[i2][T2] * (S + T2) + ... first we add runs which were successful (completed in less then R seconds) and then we add runs where we either chose to go back or got bad score and had to go back. We do simple dp and accumulate constant values and coefficients of S. In the end S = CONST / (1 - COEF) My code sometimes gives wrong answer, for example third pretest. In my code good[i][t] means probability to finish game with at least R seconds if we are at i-th level and t seconds have passed. #include <vector> #include <algorithm> #include <iostream> #include <cassert> #include <cstdlib> #include <cmath> #include <cstring> #include <cstdio> #include <ctime> #include <map> #include <set> #include <string> #include <cassert> #define INFLL 2000000000000000000 #define INF 2000000000 #define MOD 1000000007 #define PI acos(-1.0) using namespace std; typedef pair <int, int> pii; typedef long long ll; typedef vector <ll> vll; struct Level { int a; int b; double p; }; int n, r; Level arr[50]; double p[51][5001]; double good[51][5001]; double sums[51][5001]; double back, con; double dp[51][5001]; int main() { //freopen("input.txt", "r", stdin); //freopen("output.txt", "w", stdout); cin >> n >> r; for (int i = 0; i < n; i++) { cin >> arr[i].a >> arr[i].b >> arr[i].p; arr[i].p /= 100.0; } p[n][0] = 1; for (int i = n - 1; i >= 0; i--) { for (int j = 0; j <= 5000; j++) { if (j + arr[i].a <= 5000) p[i][j + arr[i].a] += p[i + 1][j] * arr[i].p; if (j + arr[i].b <= 5000) p[i][j + arr[i].b] += p[i + 1][j] * (1 - arr[i].p); } } for (int i = 0; i <= n; i++) { sums[i][0] = p[i][0]; for (int j = 1; j <= 5000; j++) sums[i][j] = sums[i][j - 1] + p[i][j]; } for (int i = 0; i <= n; i++) { for (int j = 0; j <= r; j++) { good[i][j] = sums[i][r - j]; } } dp[0][0] = 1; for (int i = 0; i < n; i++) { for (int j = 0; j <= 5000; j++) { if (j > r) { back += dp[i][j]; con += dp[i][j] * j; continue; } if (good[i][j] < good[0][0]) { back += dp[i][j]; con += dp[i][j] * j; continue; } if (j + arr[i].a <= 5000) dp[i + 1][j + arr[i].a] += dp[i][j] * arr[i].p; if (j + arr[i].b <= 5000) dp[i + 1][j + arr[i].b] += dp[i][j] * (1 - arr[i].p); } } for (int i = 0; i <= r; i++) con += dp[n][i] * i; for (int i = r + 1; i <= 5000; i++) { con += dp[n][i] * i; back += dp[n][i]; } printf("%.15f\n", con / (1 - back)); return 0; } According to me, your statement: it is optimal to go to the start every time when p[i][t] < p[0][0] is incorrect. We have to minimize the expected amount of time we play. it is optimal to go to the start every time when p[i][t] < p[0][0] Say dpi, j is the expected amount of time needed to play to finish in less than R seconds such that j seconds have already passed and we are at the ith level. It is optimal to reset if dpi, j + j > dp0, 0. But since we don't know what's dp0, 0, we say dp0, 0 = K for some K and verify if it's possible or not. If it's possible for K, it's possible for all values > K. So, we can apply binary search on K. Please elaborate the solution of 865B — Ordering Pizza. Can't able to understand from this point ->Then the first contestant should take the first s1 slices, then the second contestant should take the next s2.. Why the following approach is not valid? Let sumA be the sum of all slices of pizzas from the contestants where ai > bi. I sort the participants by (bi-ai), and then i distribute the pizzas starting from the index 0. When i take K slices of a participant that has ai > bi, i make sumA -= K. When i know that sumA < S, (i.e i can't eat more entire pizzas with sumA), i know that i have to share the remainder sumA slices. Submission: can anyone tell how to compute p(x^(-1))mod Q(x) in question G Hi, I have a question regarding 865B — Ordering Pizza. I've debugged for so many hours that I decided to ask for some insight. I always get runtime error for Test case 5. And I simply have no idea which part of my code, pasted below, could cause run-time error. Any insight is appreciated. Thanks!!!! #include <iostream> #include <algorithm> #include <cmath> #include <utility> // std::pair, std::make_pair #include <vector> using namespace std; typedef long long ll; typedef pair<ll, ll> pll; bool comp(pll a, pll b) { return a.first >= b.first; } // x is number of pieces to consume ll findBest(vector<pll> &ps, ll x) { ll res = 0; for (ll i = 0; i < ps.size(); i++) { ll num = ps[i].second; ll diff = ps[i].first; if (x - num >= 0) { res += (num * diff); x -= num; } else { res += (x * diff); break; } } return res; } int main() { ll n, s; cin >> n >> s; vector<pll> ps; ll total = 0; ll positives = 0; ll pieces = 0; // overall pieces for (ll i = 0; i < n; i++) { ll num, a, b; cin >> num >> a >> b; ps.push_back(make_pair(a-b, num)); total += num * b; pieces += num; if (a-b > 0) { positives += num; } } sort(ps.begin(), ps.end(), comp); ll pizzas = (pieces % s == 0) ? (pieces / s) : (pieces / s + 1); ll extra = pizzas * s - pieces; ll best = total; if (positives > 0) { best = max(best, findBest(ps, positives / s * s) + total); if (positives % s != 0) { ll x = max(positives, (positives/s + 1) * s - extra); best = max(best, findBest(ps, x) + total); } } cout << best << endl; return 0; } Your compare function is incorrect, as comp(a, a) should return false. OMG! You're amazing!!!!! That is indeed the error!! I've never encountered that as run-time error before (can only say that I'm not experienced enough I guess...). According to, the requirements are For all a, comp(a,a)==false If comp(a,b)==true then comp(b,a)==false if comp(a,b)==true and comp(b,c)==true then comp(a,c)==true Thank you again Michal!!!!! Would this logic work in Problem E of Div 2? Take maximum of 1 to N from segtree and let its index be x . We will sell a stock at this index and take minimum of 1 to x-1 from segtree and let its index be y. We will buy stock at index y. Now remove this values from segment tree. And keep on doing this till all values have been exhausted. I tried this solution but i am getting wrong answer submission:
http://codeforces.com/blog/entry/54888
CC-MAIN-2018-47
en
refinedweb
Random Generator Picks the Indy 500 Winner Random Generator Picks the Indy 500 Winner Let's employs a previously covered tool—Random Generator—to pick the winner of the upcoming Indy 500 race, displaying probability and randomness in Java in action. Join the DZone community and get the full member experience.Join For Free Download Microservices for Java Developers: A hands-on introduction to frameworks and containers. Brought to you in partnership with Red Hat. With the qualifications completed and the field set for the 101st running of the "Greatest Spectacle in Racing," I thought it would be cool to see which driver would be picked by Random Generator to get the win for the 2017 Indy 500. Photo courtesy of Bryon J. Realey For those who are not aware of Random Generator, you can read my series of articles on DZone: Building a Random Generator Inside the Random Generator Advanced Options for Random Generator Introducing Random Generator to Maven Central Or, you can review the source code on GitLab. Setting Up the Data Since Random Generator can process a java.util.List object, I created a simple Participant object to house each car that qualified for the Indy 500. import lombok.Data; @Data public class Participant { private Integer startPosition; private Integer carNumber; private String driverName; private String odds; private Integer rating; } If you are not aware of Lombok, it is a great utility to keep from having to manually create getters, setters, and a few other boilerplate elements in POJOs. Giving Odds to The Drivers Like any race with established participants, odds are generated for those placing wagers on the competition. While I am not a gambling person, I did decide to use these odds to weigh the participants for the Indy 500 race. After all, the drivers in the race certainly have different skill sets and experience — which has proven to be helpful for the first 100 races held in Speedway, Indiana. According to SportsBook.ag, the drivers with the best odds are Helio Castroneves, Juan Pablo Montoya, Scott Dixon, and Tony Kanaan with 8:1 odds. The remainder of the field has odds that vary from 9:1 to unlisted. For those unlisted, I gave them odds of 99:1 to win the race. In order to translate the odds into a "rating" field that Random Generator can understand, I used the following logic: 100 - the antecedent (or left hand portion of the ratio) So, those with 8:1 odds were given a rating of 92 (100 - 8). When enabling the rating functionality in Random Generator, the rating field will be used to favor drivers with better odds of winning the race. For this experiment, I opted to use a rating level of 2 (RATING_LEVEL_MEDIUM). Random Generator in Action With the List<Participant> set, I was able to run Random Generator using the following line of code: List results = randomGenerator.randomize(participants, 33, 2); From there, I can use a simple System.out command to paint the order Random Generator believes will occur: System.out.println("\nRandom Generator Results:"); for (int i = 0; i < results.size(); i++) { System.out.println((i + 1) + ". " + results.get(i).getDriverName() + " (" + results.get(i).getCarNumber() + ") {Odds = " + results.get(i).getOdds() + "}"); } My results provided the following finishing order for the 101st running of the Indy 500 on May 28th, 2017: Random Generator Results: 1. Tony Kanaan (10) {Odds = 8/1} 2. Josef Newgarden (2) {Odds = 9/1} 3. Will Power (12) {Odds = 10/1} 4. Graham Rahal (15) {Odds = 25/1} 5. James Hinchcliffe (5) {Odds = 15/1} 6. Jack Harvey (50) {Odds = 50/1} 7. Juan Montoya (22) {Odds = 8/1} 8. Helio Castroneves (3) {Odds = 8/1} 9. Sebastien Bourdais (18) {Odds = 60/1} 10. Alexander Rossi (98) {Odds = 20/1} 11. Ryan Hunter-Reay (28) {Odds = 10/1} 12. Marco Andretti (27) {Odds = 10/1} 13. Carlos Munoz (14) {Odds = 18/1} 14. Simon Pagenaud (1) {Odds = 10/1} 15. Ed Carpenter (20) {Odds = 25/1} 16. Takuma Sato (26) {Odds = 25/1} 17. Scott Dixon (9) {Odds = 8/1} 18. Oriol Servia (16) {Odds = 80/1} 19. Charlie Kimball (83) {Odds = 25/1} 20. Fernando Alonso (29) {Odds = 15/1} 21. JR Hildebrand (9) {Odds = 20/1} 22. Jay Howard (77) {Odds = 99/1} 23. Mikhail Aleshin (7) {Odds = 80/1} 24. Max Chilton (8) {Odds = 60/1} 25. Pippa Mann (63) {Odds = 99/1} 26. Spencer Pigot (11) {Odds = 99/1} 27. Buddy Lazier (44) {Odds = 99/1} 28. Zach Veach (40) {Odds = 99/1} 29. Connor Daly (4) {Odds = 99/1} 30. Gabby Chaves (88) {Odds = 99/1} 31. Sebastian Saavedra (17) {Odds = 99/1} 32. Sage Karam (24) {Odds = 99/1} 33. Ed Jones (19) {Odds = 99/1} Have a really great day! Download Building Reactive Microservices in Java: Asynchronous and Event-Based Application Design. Brought to you in partnership with Red Hat. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/random-generator-picks-the-indy-500-winner
CC-MAIN-2018-47
en
refinedweb
Migrating Legacy Strings to Fluent. Migration, either because the syntax used in these recipes changes, or because the referenced legacy strings and files are removed from repositories.. See also the How Migrations Are Run on l10n Repositories section.: # coding=utf8 # recipe includes a migrate function, which can contain multiple add_transforms calls..). The context.add_transforms function takes 3 arguments: - Path to the target l10n file. - Path to the source (en-US) file. - An array of Transforms. Transforms are AST nodes which describe how legacy translations should be migrated. requires a deeper understanding of: siteUsage = %1$S %2$S (Persistent) Which needs to be migrated to: site-usage = { $value } { $unit } (Persistent) %1$S (and %S) don’t match the format used in Fluent for variables, so they need to be replaced within the migration process. This is how the Transform is defined: FTL.Message( id=FTL.Identifier("site-usage-pattern"), value=REPLACE( "browser/chrome/browser/preferences/preferences.properties", "siteUsage", { "%1$S": VARIABLE_REFERENCE( "value" ), "%2$S": VARIABLE_REFERENCE( "unit" ) } ) ) This creates an FTL.Message, taking the value from the legacy string siteUsage, but replacing specified placeholders with Fluent variables. It’s also possible to replace content with a specific text: in that case, it needs to be defined as a TextElement. For example, to replace %S with some HTML markup: value=REPLACE( "browser/chrome/browser/preferences/preferences.properties", "searchResults.sorryMessageWin", { "%S": FTL.TextElement('<span data-</span>') } ) }, e.g. { another-string }. TERM_REFERENCE is used to create a reference to a term, e.g. { -brand-short-name }. Both Transforms need to be imported at the beginning of the recipe, e.g. from fluent.migrate.helpers import VARIABLE_REFERENCE Concatenating Strings¶ It’s quite common to concatenate multiple strings coming from DTD and properties, for example to create sentences with HTML markup. It’s possible to concatenate strings and text elements in a migration recipe using the CONCAT Transform. This allows to generate a single Fluent message from these fragments, avoiding run-time transformations as prescribed by Fluent’s social contract.Results.needHelpSupportLink =", { "%S": TERM_REFERENCE("-brand-short-name"), } ),. Complex Cases¶.NumberExpression("1"), default=False, value=COPY( "browser/chrome/browser/preferences/main.dtd", "useCurrentPage.label", ) ), FTL.Variant( key=FTL.VariantName("other"), default=True, value=COPY( "browser/chrome/browser/preferences/main.dtd", "useMultiple.label", ) ) ] ) ) ] ) ), FTL.Attribute( id=FTL.Identifier("accesskey"), value=COPY( "browser/chrome/browser/preferences/main.dtd", "useCurrentPage.accesskey", ) ), ], ), This Transform uses several concepts already described in this document. Notable new elements are: - The fact that the label attribute is defined as a Pattern. This is because, in this example, we’re creating a new value from scratch and migrating existing translations as its variants. Patterns are one of Fluent’s value types and, under the hood, all Transforms like COPYor REPLACEevaluate to Fluent Patterns. - A SelectExpressionis defined, with an array of Variantobjects. How to Test Migration Recipes¶ Unfortunately, testing migration recipes requires several manual steps. We plan to introduce automated testing for patches including migration recipes, in the meantime this is how it’s possible to test migration recipes. 1. Install Fluent Migration¶ The first step is to install the Fluent Migration Python library. It’s currently not available as a package, so the repository must be cloned locally and installed manually, e.g. with pip install -e .. Installing this package will make a migrate-l10n command available. 2. Clone gecko-strings¶ Migration recipes work against localization repositories, which means it’s not possible to test them directly against mozilla-central, unless the source path (the second argument) in ctx.add_transforms is temporarily tweaked to match mozilla-central paths. To test the actual recipe that will land in the patch, it’s necessary to clone the gecko-strings repository on the system twice, in two separate folders. One will simulate the reference en-US repository after the patch has landed, and the other will simulate a target localization. For example, let’s call the two folders en-US and test. hg clone en-US cp -r en-US test 3. Add new FTL strings to the local en-US repository¶ The changed (or brand new) FTL files from the patch need to be copied into the en-US repository. Remember that paths are slightly different, with localization repositories missing the locales/en-US portion. There’s no need to commit these changes locally. 4. Run the migration recipe¶ The system is all set to run the recipe with the following commands: cd PATH/TO/recipes migrate-l10n \ --lang test --reference-dir PATH/TO/en-US \ --localization-dir PATH/TO/test \ --dry-run \ name_of_the_recipe The name of the recipe needs to be specified without the .py extension, since it’s imported as a module. Alternatively, before running migrate-l10n, it’s possible to update the value of PYTHONPATH to include the folder storing migration recipes. export PYTHONPATH="${PYTHONPATH}:PATH/TO/recipes/" The --dry-run option allows to run the recipe without making changes, and it’s useful to spot syntax errors in the recipe. If there are no errors, it’s possible to run the migration without --dry-run and actually commit the changes locally. This is the output of a migration: Running migration bug_1411707_findbar for test WARNING:migrate:Plural rule for "'test'" is not defined in compare-locales INFO:migrate:Localization file toolkit/toolkit/main-window/findbar.ftl does not exist and it will be created Writing to test/toolkit/toolkit/main-window/findbar.ftl Committing changeset: Bug 1411707 - Migrate the findbar XBL binding to a Custom Element, part 1. Writing to test/toolkit/toolkit/main-window/findbar.ftl Committing changeset: Bug 1411707 - Migrate the findbar XBL binding to a Custom Element, part 2. Hint The warning about plural rules is expected, since test is not a valid locale code. At this point, the result of migration is committed to the local test folder. 5. Compare the resulting files¶ Once the migration has run, the test repository includes the migrated files, and it’s possible to compare them with the files in en-US. Since the migration code strips empty line between strings, it’s recommended to use diff -B between the two files, or use a visual diff to compare their content.. How)
https://firefox-source-docs.mozilla.org/intl/l10n/l10n/fluent_migrations.html
CC-MAIN-2018-47
en
refinedweb
Since its initial conception, one of the core philosophies in ASP.NET Web API has been to provide extension hooks for anything and everything – error handling is no exception (sorry…). However, there’s still room for improvement. In this article, I highlight a couple of the things I don’t like with how this works today, give an example workaround based on some code I wrote in a real system recently, and suggest how I would like this to work in future versions of ASP.NET Web API. If you want to skip all the detail, there’s a sample implementation of my proposed best-practice over at github. Update 2016-01-13: When putting the code sample together, I noticed a couple of things I could do even better. I’ve updated the code samples below to match where I think it mattered (most importantly in the passthrough handler). The current situation In ASP.NET Web API versions before 2.1, the default error handling mechanism was Exception filters. This had several drawbacks, the main one being that there are a number of cases that exception filters can’t handle, for example exceptions thrown from controller constructors or message handlers, during routing or response content serialization. The solution, introduced in Web API 2.1, was to introduce two interfaces: IExceptionLogger and IExceptionHandler. These provide a mechanism to hook into the error handling at a deeper level, and handle the problematic cases mentioned above too. Of course, the framework provides default implementations of these interfaces; the implementation of IExceptionHandler, for example, creates a response with the appropriate content type (json/xml/etc…) with some error details, which is certainly good enough in simple projects or internal API:s where the exact format of the error messages doesn’t matter much. Effectively, this means that to hook into the error handling mechanisms provided by the framework, you implement one of the interfaces, and hook it up in the application configuration. For an app hosted on the OWIN-pipeline (whether self-hosted or in IIS), this means adding something like this to your startup class: Now, all your exceptions occurring somewhere in your Web API will be caught and sent to the handler. What can’t this handle? This is all pretty nice as long as all your (interesting) code is implemented within the bounds of the API. However, when hosting with the OWIN pipeline there are a number of use cases where it’s very useful not to organize your code that way, but put logic for cross-cutting concerns in middleware. A few examples of such cross-cutting concerns are validating or manipulating HTTP headers on incoming requests and responses, custom logic for authentication and/or authorization, request logging or metering. In the OWIN stack, ASP.NET Web API sits on top of the stack, and the real action doesn’t kick in until all the other middlewares have already had the chance to do a lot of work – and throw a lot of exceptions. Thus, the IExceptionHandler we added above can’t help us here. Consider this example app: A request to any controller here will, regardless of what error handling I’ve hooked up with WebAPI, result in an exception that leaks out of my code, and up to the hosting platform. If hosted on IIS, this means a Yellow Screen Of Death; if self-hosted in a naïve console application, it might even result in a crash from which the service is unable to recover. What can we do about it? The first thing to do, is to write a middleware that will handle all exceptions, which we’ll register at the bottom of the stack, letting it handle all errors that occur anywhere: Now, the error in our buggy middleware will be caught and handled. However, this doesn’t go all the way; we now have one mechanism for handling errors in middlewares, and another (in principle entirely independent) for handling errors within the Web API. Reducing duplication One approach to harmonizing error handling across the entire stack would be to offload as much as possible of the actual work (logging etc) into service classes which can be called both from the IExceptionHandler implementation and from the middleware. However, this becomes more difficult than it could be, because we have access to the HTTP request and response through different, incompatible, API:s in the two places. The best workaround I’ve been able to come up with so far is to have the IExceptionHandler implementation forward the exception down the stack, and let the middleware do all the actual work. This resulted in the following implementation: In a previous version, I threw a new exception here with the context.Exception wrapped. Thanks to an anonymous user on StackOverflow I learned about the ExceptionDispatchInfo class which clearly was built with the sole purpose of improving this blog post. Note that we don’t just throw context.Exception as it is, as this would re-write the stack trace and discard important information about what went wrong. This way, we keep the original stack trace so that the logging mechanisms in the middleware can give us as much information as possible. Making it perfect The workaround above still has some drawbacks. For example, I’ve had troubles with the debugger, which won’t break on exceptions where they first happen, but instead break in the IExceptionHandler. To be able to inspect the state of the program when it crashes, I have to set a breakpoint in the handler, inspect the stack trace of context.Exception there, set a new breakpoint and reproduce the error a second time. Clearly a debugging experience sub-par to what I’ve come to expect in Visual Studio. A different approach, and one that I would have preferred, would be to simply turn off the exception handling mechanisms of the Web API stack, and let uncaught exceptions just propagate down the stack of middlewares until we handle it like any other exception. Unfortunately, despite numerous attempts, I haven’t been able to do this – it seems that it’s only possible to replace the default IExceptionHandler with a custom one, but not to remove it entirely (I asked about this on StackOverflow, but despite a couple of hundred views and a handsome bounty, no-one was able to suggest a way to do this). Do you have a better solution to this problem? Sound off in the comments section! I see your reminder (“[add link]”) ;-) I’d like to take a look at your code – thanks! Ah, sorry, David – fixed now! I learned a couple of things in the process, so now the workaround is a little less bad, too :) Great! Now I don’t even need to create a custom exception as a wrapper. Couple of minor corrections to make it compile: 1) you should override HandleAsync and make it async since you’re not returning a Task :). 2) ExceptionDispatchInfo doesn’t have a public constructor. Gotta use the Capture() method. So my proposition: public override async Task HandleAsync(ExceptionHandlerContext context, CancellationToken cancellationToken) { ExceptionDispatchInfo info = ExceptionDispatchInfo.Capture(context.Exception); info.Throw(); } Thanks, good catch! I’ve updated the code in the post to match the code sample on Github. Hi, Nice post on how to improve the error handling in Asp.net web 2.1. In this article, a couple of the things write with how this works today, give an example work around based on some code, and suggest how it would work in future versions of ASP.NET Web API and so much useful information available to know. I had a little bit idea about Asp.net when I hosted my business through Myasp.net. Thanks a lot for sharing with us. Thank you for this thorough writeup. I have left a comment on your original Stack Overflow post, but posting here as well since I’d love to see if anyone else is encountering a similar issue with this configuration whereby CORS headers aren’t set in the response? Works fine via non-CORS app (e.g. Paws/Fiddler), but when hitting an API, the underlying exception is thrown and properly handled, but then the response is overwritten by a subsequent dreaded CORS error: “No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘localhost:3000’; is therefore not allowed access. The response had HTTP status code 500.” And yes, CORS is configured properly and works fine everywhere except for when globally handled exceptions are involved (including working without a hitch when returning error responses from within a controller or action filters). Excellent question! CORS is another example of things that the framework does quite well, but only for itself. Just like for error handling, managing CORS headers outside of the Web API pipeline has to be added in a middleware component – again, sort-of defeating the purpose of doing it inside the WebAPI pipeline at all. Fortunately, the requirements on such a component are quite simple, so it should be pretty straightforward to implement yourself. And when you’ve done that, you can just as well disable all CORS configuration in Web API, because now you’re handing responsibility over to your own pipeline. Thanks for the quick response! It turns out the CORS implementation doesn’t necessarily set the context.CatchBlock.IsTopLevel, and therefore, exceptions thrown in a controller/filter (well, anywhere still in WebApi) don’t get handled! Sigh. The solution is as simple as overriding ExceptionHandler.ShouldHandle to always return true. For a wonderfully thorough discussion with solution/s, see. public class GlobalExceptionHandler : ExceptionHandler { public override void Handle(ExceptionHandlerContext context) { // handle here… } public override bool ShouldHandle(ExceptionHandlerContext context) { return true; // if inheriting from System.Web.Http.ExceptionHandling.ExceptionHandler, override ShouldHandle method and return true always // because CatchBlock.IsTopLevel might never have true value when enabling CORS. //return context.CatchBlock.IsTopLevel; } } Hi. Thanks for the article. The one thing I’m missing is how to change the response. Where you have “// log error, rewrite http response etc” the context.Response and context.Response.content are read only. I’m not sure how to change the response content to my json object. Thanks! The response property itself is read-only, so you can’t replace it, but it’s not immutable; in the sample implementation on GitHub you can see an example of how I change the response status code to 500 and write an error message to the response stream. Note that for this to work, you must not have written anything to the response stream before (if you have, it may have already been transmitted to the client). This is usually fine, though, since most applications read everything they need from the request, then do stuff with it – and that’s where things usually fail – and finally write to the response stream. The one thing I’ve found difficult to do something about is serialization errors; if something blows up while the framework is serializing the response (e.g. if the response object graph has circular references), changing the status code here will fail. But that’s really a limitation in the .NET networking stack, not a problem specifically with this middleware approach… I had this doubt.. and asked same in stackoverflow.. thanks for clearing it out. I thought that this might work for capturing routing errors but unfortunately I didn’t get lucky there. It seems that route errors must be thrown further down in the stack. Do you have any ideas about how you can handle routing errors such as trying to access a non existent controller in WebAPI 2.1? I have already used the Web.Config solution but it isn’t suitable for our purposes, it looks like this: In our case we are a pure WebAPI and so we do not want to return any html pages, by specifiying the customErrors section as above, we can get it to redirect the request to an error controller that returns the correct json response we expect with a 404 response code. This is fine for most tools and browsers, but when you observe the behaviour using fiddler you can see a 302 redirect and a 401 caused by the redirect not containing the auth header. Is there any way we can handle errors like this without doing a redirect? When hosted in an OWIN pipeline, ASP.NET Web API 2 actually acts just like any other middleware component. As such, if you want the error handling middleware from this post to handle routing errors, all you need to accomplish is to make ASP.NET Web API 2 _not_ handle the errors at all, and just throw them (and rely on something down the stack, i.e. the middleware, to handle them). It’s been a while since I used ASP.NET Web API 2 (I’m working almost exclusively with .NET Core apps nowadays) but this post hopefully has some useful info:
https://blog.jayway.com/2016/01/08/improving-error-handling-asp-net-web-api-2-1-owin/
CC-MAIN-2018-47
en
refinedweb
fairies game from real arcade, buy capcom arcade games, free arcade games v4, real arcade free online puzzle games, for arcade town games. stear crazy arcade game, do it yourself arcade games, cheap video arcade games, x men arcade game buy, williams arcade games. wwf arcade game, retro arcade games online, reflexive arcade games keygen fff, monster madness arcade game, pac man arcade game online. fisher price arcades computer games free, classic arcade games all in one, buy sell defender arcade game, arcade game play station, cops n robbers arcade game, contra arcade game download, arcade games virginia. classic arcade game mr do, stear crazy arcade game, xbox 360 arcade original xbox games, arcade bomb shooting games, free arcade games for windows. trackand field games arcade games, blazing angels arcade game, restaurants with arcade games in milwaukee, killer arcade games, top arcade games amusement. category 1982 arcade games, shooter arcade flash game, jailbreak arcade game, online arcade games sand zombie, top arcade games amusement. monster madness arcade game, game spy arcade adware, best arcade games of all times, play 2 player arcade games, www flash arcade games. nick arcad . com / games, arcade games for sale denver, classic arcade game records, new arcade bulldozer game, the 80s arcade games. bully arcade games, interactive buddy arcade game, classic 1980 s arcade games, joystick arcade games, teenage mutant ninja turtles ii arcade game. play fun arcade games, online fighting arcade games, free arcade games online color sudoku, strip arcade games on line, free ipod nano arcade games. president arcade games, all light gun arcade games, arcade game audio files, arcade forum highscore games, teenage mutant ninja turtles ii arcade game. Categories - Iphone arcade games - free ipod nano arcade games early penney arcade games arcade game hire sydney real arcade games forum iron horse arcade game retro arcade games online free web arcade the torture game williams arcade games top video arcade games import arcade games game spy arcade adware gatlinburg cabins with arcade games - dance music arcade game - arcade gaming in st george ut - ms pac-man arcade game - arcade games in pa - gamecube arcade games - arcade game timing - arcade game superstore images video mario kart ss1 - online mario arcade games - star wars free arcade games online - arcade game t shirts
http://manashitrad.ame-zaiku.com/barbie-arcade-games.html
CC-MAIN-2018-47
en
refinedweb
While REST (and not so RESTful) APIs have come to dominate there is still the occasionally find an API based on SOAP (especially if it uses a Microsoft back-end). For those interest in the different merits of the two technologies, there is an interesting infographic here. While REST APIs can be handled with just the standard library, SOAP really needs a module to hide a lot of the complexities and boilerplate. The original SOAP modules (like SOAPy) no longer appear to be maintained (and hence do not work on later versions of Python). However if you just need the client (rather than a server), a new module called Zeep has appeared which is being actively maintained and has recently reached v1 level maturity. Zeep can be installed with with pip in the usual way. An alternative to Zeep is the Suds-Jurko fork of Suds which may be a better choice if you need more options or find something Zeep does not support. SOAP is XML based and uses a schema called Web Services Description Language (WSDL) to completely describe the operations, methods in Python, and how they are called (no dynamic bindings here). Zeep creates a client by from the WSDL passed as the first parameter in the constructor (the only required parameter). All of the operations are then exposed through the clients service class. An example will hopefully make this clear. I am going to use one of the web services list at WebServiceX.NET, the one for distance type converting. This has an ChangeLengthUnit operation which takes 3 parameters; the length, the unit the length is in and the unit to convert to. Putting this together gives us the following code. import zeep soap = zeep.Client('') print(soap.service.ChangeLengthUnit(1,'Inches','Millimeters')) A few of points (apart from the coders must be American due to the way they spell metres). All Microsoft active server methods (.asmx) should expose the WSDL if you append ?wsdl to the URI. It doesn’t have to be a URI passed into the Client constructor. If the string begins with http (or https obviously) it will be treated as a URI. Otherwise it will assume it has been given a filepath to the WSDL file and try to open that. This is useful if you have to edit the WSDL file, say to fix a binding issue. Once you have the client, you can get a sanitized version of the WSDL with the method soap.wsdl.dump() which should help you establish which operations are available and how you call them. The WSDL link above should help explain the terms but as you can see, even this simple example with just one operation has a lengthy WSDL. Because SOAP uses standard types, zeep can convert the input and output correctly. If you look at the WSDL dump from above you will see the prefix xsd: listed – this schema defines all the standard data types and structures. If you check the return type of ChangeLengthUnit() you will notice it is a float.
https://quackajack.wordpress.com/2017/02/
CC-MAIN-2018-47
en
refinedweb
Category Archives: Pink Poesy Poetry, unsurprisingly. Poetry Workshop Attended a poetry workshop earlier today at the Fitz, across town. Harvested some of the more reasonable products below – four in response to artworks I’ve included (doubled up on the Rodin), and the last a prompt. Rough works, but hopefully of some service. Large Clenched Hand (Grande main crispée) Rodin Vitality expressed in its moment of expungence. Pain, cast in metal. The body radiating its emotion, its anger and its revolt, using nothing more than itself. Masculine, this could be the hand of Laocoön as he grapples with the serpent coils of his pride-wrought fate. Ah! Pride! Whomever this hand belongs to, it is a proud man. The despair, the anger expressed in the rictus clench could signal no less than a will -a prideful will- roundly thwarted. David and Goliath Degas Dun, nude, loose. Your colours, the olive green, the dirty taupe, evoke your crude life, your barbaric brutish existence. There could never have been honour won that day. Honour requires grace, and there is no grace to be found in the rude, shifting muck of your lives. Large Clenched Hand (Grande main crispée) Rodin Alive! Even in My agony, Beset by the cruelties of the World You will not take Me I refuse this fate I despite your arrows of Inevitability Reckon! I am a Man and though You would snuff Me out, You cannot deny that I have been Alive! A street, possibly in Port-Marly, 1875-77 Sisley I see your view the view that you moulded, filtered and regurgitated. And I deem it good. Masterful, even. Yet, it is not the skill of rendering the sky, nor the evocation of the shadow, nor any of the many other elements of quality that make me pause. No. It is simply the way you write your name. The clumsiness of it. Slap-dash. Work-a-day. Did you, too, regret the ugliness of your hand? Did you look on that text and grow sad at its lack of finesse? Six characters, rough-written, express more than the painting entire. Just as you reworked what you saw, so do I import my own assumptions. But, whatever phantasms I conjure, whatever gross errors I commit, I am left with that sliver of Truth. You and I, We are brothers. Sandbox 2×8’s screwed to one another, hanging together loosely, unevenly set atop flags of repurposed concrete. A shoddy affair, made in an amateur manner but fit to purpose. Good enough to hold back the spilling sands. You can remember the damp grit of it, even now – you can still feel it in your mouth, that not-quite-earthy taste, that roughness you knew, even then, was doing damage to your teeth. How many hours did you spend there, building imaginary worlds which, god-like, shifted to your every whim? Shifted, like so much sand. Solitary hours – yes, there were times you were joined, where your pantheon doubled, trebled – but it was never as good as when there was but a single will – a direction unfettered by compromise. A tyranny enlightened and self-contained. Contained by a set of 2×8’s screwed to one another and hanging together, loosely.. Vigil. Swelter Swelter The storied sailor may be right, and Hell is a cold, icy ocean trench that saps your will and chokes your heart; I wouldn’t pretend to know. Despair, though – Despair is hot. The heat of an over-burdened body The heat of all the rage and impotence clutched close and tight. The heat of a breath held too long, after the swirling eye-spots have blotted out vision and the lungs shudder to bursting. The heat of a fatal fever- too extreme to heal, too strong to dissipate. Despair has the heat of friction, born of all the wasted efforts and the rued missed chances, and the stupid, wanton mistakes. The heat smothers, blanketing you with its weight. It surrounds you even while it comes from inside, till the tears start from your bloodshot eyes and moans, undirected, start from your parched throat. Yes, Hell might be cold, but Despair, Despair is hot..
https://staggeringbrink.wordpress.com/category/pink-poesy/
CC-MAIN-2018-47
en
refinedweb
sd_journal_stream_fd - Man Page Create log stream file descriptor to the journal Synopsis #include <systemd/sd-journal.h> int sd_journal_stream_fd(const char *identifier, int priority, int level_prefix); Description The call returns a valid write-only file descriptor on success or a negative errno-style error code. Signal Safety sd_journal_stream_fd() is "async signal safe" in the meaning of signal-safety(7). Notes All functions listed here are thread-safe and may be called in parallel from multiple threads. These APIs are implemented as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file. Examples; } See Also systemd(1), sd-journal(3), sd-daemon(3), sd_journal_print(3), syslog(3), fprintf(3), systemd.journal-fields(7) Referenced By sd-journal(3), sd_journal_print(3), systemd.directives(7), systemd.index(7).
https://www.mankier.com/3/sd_journal_stream_fd
CC-MAIN-2021-17
en
refinedweb
socketpair - Man Page create a pair of connected sockets Prolog This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. Synopsis #include <sys/socket.h> int socketpair(int domain, int type, int protocol, int socket_vector[2]); Description file descriptors shall be allocated as described in Section 2.14, File Descriptor Allocation.. Return Value Upon successful completion, this function shall return 0; otherwise, -1 shall be returned and errno set to indicate the error, no file descriptors shall be allocated, and the contents of socket_vector shall be left unmodified. Errors The socketpair() function shall fail if: - EAFNOSUPPORT The implementation does not support the specified address family. - EMFILE All, or all but one, of the file descriptors available to the process are currently. The following sections are informative. Examples None. Application Usage The documentation for specific address families specifies which protocols each address family supports. The documentation for specific protocols specifies which socket types each protocol supports. The socketpair() function is used primarily with UNIX domain sockets and need not be supported for other domains. Rationale None. Future Directions None. See Also Section 2.14, File Descriptor Allocation, socket() The Base Definitions volume of POSIX.1-2017, <sys_socket socket(3p), sys_socket.h(0p), sys_un.h(0p).
https://www.mankier.com/3p/socketpair
CC-MAIN-2021-17
en
refinedweb
The default pawn for the panoramic experience. More... #include <PanoramicPawn.h> The default pawn for the panoramic experience. Out of the box camera movement behaviours (fps style, mouse dragging / swipe gesture on mobile) Override ADefaultPawn::AddControllerPitchInput. Override ADefaultPawn::AddControllerYawInput. Override AActor::BeginPlay. Override ADefaultPawn::SetupPlayerInputComponent. Override AActor::Tick. The camera component attached to the pawn. In this mode to move the camera the user have to click, hold pressed and drag (swipe on mobile). In this mode the camera moves like on an fps game. Parent component of CameraComponent. The scale speed factor during dragging/swiping. Show mouse cursor.
https://www.unamedia.com/ue4-stereo-panoramic-player/api/class_a_panoramic_pawn.html
CC-MAIN-2021-17
en
refinedweb
How To Implement A* Pathfinding with Swift Learn how to implement the A* pathfinding algorithm into a Sprite Kit game to calculate paths between two points. Version - Other, Other, Other Update 8/3/15: In iOS 9, Apple introduces a new GameplayKit that includes a pathfinding API. This tutorial does not cover that; it covers a way to add pathfinding without GameplayKit. Note: This tutorial was update to Swift and Sprite Kit by Gabriel Hauber. The original Cocos2D tutorial was written by Johann Fradj. In this tutorial, you’ll learn how to add the A* Pathfinding algorithm into a simple Sprite Kit game. A* lets you calculate a navigable path between two points, and is especially useful for 2D tile-based games such as CatMaze, which you’ll work on in this tutorial. If you’re not familiar with the A* algorithm itself, you should read Introduction to A* Pathfinding first for all the details on how it works. This tutorial will cover the implementation of the algorithm into an existing Sprite Kit game. To go through this tutorial, it’s helpful if you have prior knowledge of the Sprite Kit framework on iOS or OS X. If you need a refresher there, check out our Sprite Kit Tutorials to take you all the way from core concepts to complete games! Find the shortest path to your keyboard, and begin! :] Getting Started Start by downloading the starter project. It’s a simple Sprite Kit based game configured with both iOS and OS X targets. Build and run the project for your platform of choice. If you run the OS X version, you should see the following: In this game, you take the role of a cat thief trying to make your way through a dungeon guarded by dangerous dogs. If you try to walk through a dog they will eat you – unless you can bribe them with a bone! You need to traverse the dungeon in the correct order so you have enough bones when you need them to get through a dog blocking your way as you make your way to the exit. Note that the cat can only move vertically or horizontally (not diagonally), and will move from one tile center to another. Each tile can be either walkable or unwalkable. Try out the game, and see if you can reach the exit! When you tap or click somewhere on the map, the cat will jump to an adjacent tile in the direction of your tap. (On the OS X version you can also use the cursor keys to move around). Cat Maze and A* Overview In this tutorial you will modify this so that the cat will automatically move all the way to the tile you tapped using the most efficient route, much like many RPGs or point-and-click adventure games. Open Cat.swift and take a look at the implementation of moveToward(_:). You will see that it calls moveInDirection(_:) with a direction based on the cat’s current position and the target position. You will change this so that instead of moving one tile at a time, the cat will find the shortest path to the target, and make its way along that path. First, replace the moveToward(_:) implementation with the following: func moveToward(target: CGPoint) { let toTileCoord = gameScene.tileMap.tileCoordForPosition(target) moveTo(toTileCoord) } Build and run the game now and make a move; the cat will teleport to the selected location. Well, that’s the end result you’re looking for so congratulations! ;] The actual move algorithm is in moveTo(_:), which is where you’ll to calculate the shortest path using the A* algorithm, and make the cat follow it. A Reusable Pathfinder The A* pathfinding algorithm is the same whether it is finding the best path for a cat in a maze full of dogs, or for a warrior in a dungeon full of monsters. Because of this, the pathfinder you create in this tutorial will be reusable in other projects. The pathfinding algorithm is going to need to know the following pieces of information from the game: - Given a map location, what map locations next to it are valid locations to move to? - What is the movement cost between two map locations? In the project navigator, right-click on the Shared group, and select New File…. Choose the iOS\Source\Swift File template and click Next. Name the file AStarPathfinder and select the Shared directory as the location in which to create the file. Ensure both CatMaze and CatMaze Mac targets are selected and click Create. Add the following code to the file you just created: protocol PathfinderDataSource: NSObjectProtocol { func walkableAdjacentTilesCoordsForTileCoord(tileCoord: TileCoord) -> [TileCoord] func costToMoveFromTileCoord(fromTileCoord: TileCoord, toAdjacentTileCoord toTileCoord: TileCoord) -> Int } /** A pathfinder based on the A* algorithm to find the shortest path between two locations */ class AStarPathfinder { weak var dataSource: PathfinderDataSource? func shortestPathFromTileCoord(fromTileCoord: TileCoord, toTileCoord: TileCoord) -> [TileCoord]? { // placeholder: move immediately to the destination coordinate return [toTileCoord] } } The PathfinderDataSource protocol describes the two requirements listed above: available (walkable) adjacent tiles, and the movement cost. This will be used by the AStarPathfinder, which you’ll fill out with the algorithm later. Setting up the Game to Use the Pathfinder In order to use the pathfinder, the Cat object will need to create an instance of it. But what is a good candidate for the pathfinder’s data source? You have two choices: - The GameSceneclass. It knows about the map, and so is a candidate for supplying information such as movement costs and what tiles are walkable, and so on. - The Catclass, but why would you choose this? Imagine a game which has multiple moveable character types, each of which has its own rules as to what is a walkable tile and movement costs. For example, a Ghost character may be able to move through walls, but it costs more to do so. Because you are such a fan of good design, you choose the second option :] Open Cat.swift and add the following class extension to the bottom of the file so it conforms to PathfinderDataSource: extension Cat: PathfinderDataSource { func walkableAdjacentTilesCoordsForTileCoord(tileCoord: TileCoord) -> [TileCoord] { let adjacentTiles = [tileCoord.top, tileCoord.left, tileCoord.bottom, tileCoord.right] return adjacentTiles.filter { self.gameScene.isWalkableTileForTileCoord($0) } } func costToMoveFromTileCoord(fromTileCoord: TileCoord, toAdjacentTileCoord toTileCoord: TileCoord) -> Int { return 1 } } As you can see, finding the adjacent tiles is really simple: create an array with the tile coordinates around the given tile coordinate, then filter the array to return only walkable tiles. Because you can’t move diagonally and because terrain is just walkable or unwalkable the cost is always the same. In your other apps, perhaps diagonal movement costs more or there are different terrain types such as swamps, hills, etc. Now that you’ve implemented the pathfinder’s data source, it’s time to create a pathfinder instance. Add the following properties to the Cat class: let pathfinder = AStarPathfinder() var shortestPath: [TileCoord]? In the initializer, set up the cat as the pathfinder’s data source immediately after the call to super.init(): pathfinder.dataSource = self In moveTo(_:), find the two lines of code that update the cat’s position and state: position = gameScene.tileMap.positionForTileCoord(toTileCoord) updateState() Replace those two lines with the following: shortestPath = pathfinder.shortestPathFromTileCoord(fromTileCoord, toTileCoord: toTileCoord) Once you fill out the pathfinding algorithm, this property will store the list of steps needed to get from point A to point B. Creating the ShortestPathStep Class In order to calculate the shortest path using the A* algorithm, you need to know for each of the path’s steps: - Location - F, G and H scores - parent step (so you can trace back along its length from end to beginning) You’ll capture all this information in a private class called ShortestPathStep. Add the following code to the top of AStarPathfinder.swift: /** A single step on the computed path; used by the A* pathfinding algorithm */ private class ShortestPathStep: Hashable { let position: TileCoord var parent: ShortestPathStep? var gScore = 0 var hScore = 0 var fScore: Int { return gScore + hScore } var hashValue: Int { return position.col.hashValue + position.row.hashValue } init(position: TileCoord) { self.position = position } func setParent(parent: ShortestPathStep, withMoveCost moveCost: Int) { // The G score is equal to the parent G score + the cost to move from the parent to it self.parent = parent self.gScore = parent.gScore + moveCost } } private func ==(lhs: ShortestPathStep, rhs: ShortestPathStep) -> Bool { return lhs.position == rhs.position } extension ShortestPathStep: Printable { var description: String { return "pos=\(position) g=\(gScore) h=\(hScore) f=\(fScore)" } } As you can see, this is a very simple class that keeps track of the following: - The step’s tile coordinate - The G score (the movement cost from the start to the step’s position) - The H score (the estimated number of tiles between the current position and destination) - The step before this step in the path (the parent) - The F score, that is, the score for this tile (calculated by adding G + H). The class also conforms to the Equatable protocol: two steps are equal if they refer to the same location, regardless of their G or H scores. Finally, it is also Printable for the purposes of human-friendly debug messages. Implementing the A* Algorithm Now the bootstrapping is over and it’s time to write the code to calculate the optimal path! First, add the following helper methods to the AStarPathfinder class: private func insertStep(step: ShortestPathStep, inout inOpenSteps openSteps: [ShortestPathStep]) { openSteps.append(step) openSteps.sort { $0.fScore <= $1.fScore } } func hScoreFromCoord(fromCoord: TileCoord, toCoord: TileCoord) -> Int { return abs(toCoord.col - fromCoord.col) + abs(toCoord.row - fromCoord.row) } The first method insertStep(_:inOpenSteps:) inserts a ShortestPathStep into the open list at the appropriate position ordered by F score. Note that it modifies the array in-place and is passed in as an inout parameter. The second method computes the H score for a square according to the Manhattan (or “city block”) method, which calculates the total number of steps moved horizontally and vertically to reach the final desired step from the current step, ignoring any obstacles that may be in the way. With these helper methods in place, you now have everything you need to implement the pathfinding algorithm itself. Delete the current placeholder code in shortestPathFromTileCoord(_:toTileCoord:) and replace it with the following: // 1 if self.dataSource == nil { return nil } let dataSource = self.dataSource! // 2 var closedSteps = Set<ShortestPathStep>() var openSteps = [ShortestPathStep(position: fromTileCoord)] while !openSteps.isEmpty { // 3 let currentStep = openSteps.removeAtIndex(0) closedSteps.insert(currentStep) // 4 if currentStep.position == toTileCoord { println("PATH FOUND : ") var step: ShortestPathStep? = currentStep while step != nil { println(step!) step = step!.parent } return [] } // 5 let adjacentTiles = dataSource.walkableAdjacentTilesCoordsForTileCoord(currentStep.position) for tile in adjacentTiles { // 6 let step = ShortestPathStep(position: tile) if closedSteps.contains(step) { continue } let moveCost = dataSource.costToMoveFromTileCoord(currentStep.position, toAdjacentTileCoord: step.position) if let existingIndex = find(openSteps, step) { // 7 let step = openSteps[existingIndex] if currentStep.gScore + moveCost < step.gScore { step.setParent(currentStep, withMoveCost: moveCost) openSteps.removeAtIndex(existingIndex) insertStep(step, inOpenSteps: &openSteps) } } else { // 8 step.setParent(currentStep, withMoveCost: moveCost) step.hScore = hScoreFromCoord(step.position, toCoord: toTileCoord) insertStep(step, inOpenSteps: &openSteps) } } } return nil This is an important method, so let's take it section by section: - If there's no valid data source then you can exit early. If there is one, you set up a shadowed local variable to unwrap it. - Set up the data structures to keep track of the steps. The open steps list starts with the initial position. - Remove the lowest F cost step from the open list and add it to the closed list. Because the list is ordered, the first step is always the one with the lowest F cost. - If the current step is the destination, you're done! For now, you're just logging the path out to the console. - Get the adjacent tiles coordinates of the current step and begin looping through them. - Get the step and check that it isn't already in the closed list. If not, calculate the movement cost. - If the step is in the open list, then grab that version of the step. If the current step and movement score is better than the old score, then replace the step's existing parent with the current step. - If the step isn't in the open list then compute the H score and add it. Build and run to try it out! If you touch the tile shown below: You should see this in the console: pos=[col=22 row=3] g=9 h=0 f=9 pos=[col=21 row=3] g=8 h=1 f=9 pos=[col=20 row=3] g=7 h=2 f=9 pos=[col=20 row=2] g=6 h=3 f=9 pos=[col=20 row=1] g=5 h=4 f=9 pos=[col=21 row=1] g=4 h=3 f=7 pos=[col=22 row=1] g=3 h=2 f=5 pos=[col=23 row=1] g=2 h=3 f=5 pos=[col=24 row=1] g=1 h=4 f=5 pos=[col=24 row=0] g=0 h=0 f=0 Remember the path is built backwards, so you have to read from bottom to top to see what path the algorithm has chosen. Try to match these up to the tiles in the maze so you can see that it really is the shortest path! Following the Yellow Brick Path Now that the path is calculated, you just have to make the cat follow it. The cat will have to remember the whole path, and then follow it step by step. Open Cat.swift and find moveTo(_:). Add the following code to the end of the method, after the line that sets shortestPath: if let shortestPath = shortestPath { for tileCoord in shortestPath { println("Step: \(tileCoord)") } } The pathfinder isn't actually yet returning the path, so switch to AStarPathfinder.swift. Remember that when the algorithm finishes, it has a path from the final step back to the beginning. This needs to be reversed, and returned to the caller as an array of TileCoords instead of ShortestPathSteps. Add the following helper method to the AStarPathfinder class: private func convertStepsToShortestPath(lastStep: ShortestPathStep) -> [TileCoord] { var shortestPath = [TileCoord]() var currentStep = lastStep while let parent = currentStep.parent { // if parent is nil, then it is our starting step, so don't include it shortestPath.insert(currentStep.position, atIndex: 0) currentStep = parent } return shortestPath } You're reversing the array by inserting each step's parent at the beginning of an array until the beginning step is reached. Inside shortestPathFromTileCoord(_:toTileCoord:), find the block of code inside the if currentStep.position == toTileCoord { statement that logs out the path. Replace it with the following code: return convertStepsToShortestPath(currentStep) This will run your helper method to put the steps in the correct order and return that path. Build and run. If you try moving the cat, you should see this in the console: Step: [col=24 row=1] Step: [col=23 row=1] Step: [col=22 row=1] Step: [col=21 row=1] Step: [col=20 row=1] Step: [col=20 row=2] Step: [col=20 row=3] Step: [col=21 row=3] Step: [col=22 row=3] Yes! Now you have tile coordinates ordered from start to finish (instead of reversed), nicely stored in an array for you to use. Getting the Cat on the Path The last thing to do is to go through the shortestPath array and animate the cat to follow the path. Add the following method to the Cat class: func popStepAndAnimate() { if shortestPath == nil || shortestPath!.isEmpty { // done moving, so stop animating and reset to "rest" state (facing down) removeActionForKey("catWalk") texture = SKTexture(imageNamed: "CatDown1") return } // get the next step to move to and remove it from the shortestPath let nextTileCoord = shortestPath!.removeAtIndex(0) println(nextTileCoord) // determine the direction the cat is facing in order to animate it appropriately let currentTileCoord = gameScene.tileMap.tileCoordForPosition(position) // make sure the cat is facing in the right direction for its movement let diff = nextTileCoord - currentTileCoord if abs(diff.col) > abs(diff.row) { if diff.col > 0 { runAnimation(facingRightAnimation, withKey: "catWalk") } else { runAnimation(facingLeftAnimation, withKey: "catWalk") } } else { if diff.row > 0 { runAnimation(facingForwardAnimation, withKey: "catWalk") } else { runAnimation(facingBackAnimation, withKey: "catWalk") } } runAction(SKAction.moveTo(gameScene.tileMap.positionForTileCoord(nextTileCoord), duration: 0.4), completion: { let gameOver = self.updateState() if !gameOver { self.popStepAndAnimate() } }) } This method pops one step off the array and animates the movement of the cat to that position. The cat has a walking animation too, so the method will start and stop that as appropriate in addition to ensuring the cat is facing in the correct direction. At the end of the method inside the runAction(_:duration:completion:) call, you need to update the game's state to check for dogs, bones, etc. and then schedule another call to popStepAndAnimate() if there's another step along the path. Finally, change the code in moveToward(_:) to call popStepAndAnimate() instead of printing out the steps: if let shortestPath = shortestPath { popStepAndAnimate() } Build and run, and. . . The cat automatically moves to the final destination that you touch, collects bones and vanquishes menacing dogs! :] Note: As you play the game, you'll see that if you select a new location before it has reached the previous one, the cat moves strangely. That's because the current movement path is interrupted and another one started. Since this tutorial isn't focused on gameplay, we'll gloss over this for now although the final project download at the end of this tutorial has this issue fixed – in Cat.swift look for references to currentStepAction and pendingMove. Congratulations! You have now implemented A* pathfinding in a simple Sprite Kit game from scratch! :] Bonus: Diagonal Movement What if you also want to let the cat move diagonally? You just have to update two functions: walkableAdjacentTilesCoordForTileCoord(_:): include the diagonal tiles as well. costToMoveFromTileCoord(_:toAdjacentTileCoord:): calculate an appropriate movement cost for diagonal movement. You might wonder how you should compute the cost for diagonal movement. This is actually quite easy with some simple math! The cat is moving from the center of a tile to another, and because the tiles are squares, A, B and C form a right-angled triangle as diagonally is 1.41. In comparison, moving left then up costs 2, so diagonal movement is clearly better! Now, computing with integers is more efficient than floats, so instead of using floats to represent the cost of a diagonal move, you can simply multiply the costs by 10 and round them. So horizontal and vertical moves will cost 10, and diagonal moves 14. Time to try this out! First replace costToMoveFromTileCoord(_:toAdjacentTileCoord:) in Cat.swift: func costToMoveFromTileCoord(fromTileCoord: TileCoord, toAdjacentTileCoord toTileCoord: TileCoord) -> Int { return (fromTileCoord.col != toTileCoord.col) && (fromTileCoord.row != toTileCoord.row) ? 14 : 10 } In order to add the diagonals to the walkable adjacent tiles, first add some new helper properties to the TileCoord class in TileMap.swift: /** coordinate top-left of self */ var topLeft: TileCoord { return TileCoord(col: col - 1, row: row - 1) } /** coordinate top-right of self */ var topRight: TileCoord { return TileCoord(col: col + 1, row: row - 1) } /** coordinate bottom-left of self */ var bottomLeft: TileCoord { return TileCoord(col: col - 1, row: row + 1) } /** coordinate bottom-right of self */ var bottomRight: TileCoord { return TileCoord(col: col + 1, row: row + 1) } Back in Cat.swift, modify walkableAdjacentTilesForTileCoord(_:) to return the diagonally adjacent squares: func walkableAdjacentTilesCoordsForTileCoord(tileCoord: TileCoord) -> [TileCoord] { var canMoveUp = gameScene.isWalkableTileForTileCoord(tileCoord.top) var canMoveLeft = gameScene.isWalkableTileForTileCoord(tileCoord.left) var canMoveDown = gameScene.isWalkableTileForTileCoord(tileCoord.bottom) var canMoveRight = gameScene.isWalkableTileForTileCoord(tileCoord.right) var walkableCoords = [TileCoord]() if canMoveUp { walkableCoords.append(tileCoord.top) } if canMoveLeft { walkableCoords.append(tileCoord.left) } if canMoveDown { walkableCoords.append(tileCoord.bottom) } if canMoveRight { walkableCoords.append(tileCoord.right) } // now the diagonals if canMoveUp && canMoveLeft && gameScene.isWalkableTileForTileCoord(tileCoord.topLeft) { walkableCoords.append(tileCoord.topLeft) } if canMoveDown && canMoveLeft && gameScene.isWalkableTileForTileCoord(tileCoord.bottomLeft) { walkableCoords.append(tileCoord.bottomLeft) } if canMoveUp && canMoveRight && gameScene.isWalkableTileForTileCoord(tileCoord.topRight) { walkableCoords.append(tileCoord.topRight) } if canMoveDown && canMoveRight && gameScene.isWalkableTileForTileCoord(tileCoord.bottomRight) { walkableCoords.append(tileCoord.bottomRight) } return walkableCoords } Note that the code to add the diagonals is a bit more complicated than for horizontal/vertical movement. Why is this? The code enforces the rule that if the cat is to move, say, diagonally to the tile to the bottom-left, it must also be able to move in both the down and left directions. This prevents the cat from walking through walls. The following diagram illustrates this: In the diagram, - O = Origin - T = Top - B = Bottom - L = Left - R = Right - TL = Top - Left - ... Take for example the case shown in the top left of the above image. The cat wants to go from the origin (O) to the bottom left diagonal square. If there is a wall to the left or below (or both), then moving diagonally would cut through the corner of a wall (or two). So the bottom-left direction is open only if there is no wall to the left or below. Tip: You can simulate different type of terrain by updating costToMoveFromTileCoord(_:toAdjacentTileCoord:) to take the terrain type into consideration. Lowering the cost will result in the cat preferring those squares; increasing it will cause the cat to tend to avoid the squares if a better route is available. Build and run your project. Verify that the cat does, indeed, take the diagonal when it can. A challenge How would you give the cat automatic dog-avoidance behaviour? Rather than having to manually navigate the cat around a dog to get to a bone, what could you do to have the cat automatically choose the dog-free route if one is available? [spoiler title="Avoid Dogs"] You can't make a tile containing a dog an "unwalkable" tile because then the cat couldn't navigate the maze! Instead, you can increase the cost of visiting a tile containing a dog. For example, you could say that moving to a tile with a dog in it costs 10 times as much as moving to an empty tile. i.e.: func costToMoveFromTileCoord(fromTileCoord: TileCoord, toAdjacentTileCoord toTileCoord: TileCoord) -> Int { let baseCost = (fromTileCoord.col != toTileCoord.col) && (fromTileCoord.row != toTileCoord.row) ? 14 : 10 return baseCost * (gameScene.isDogAtTileCoord(toTileCoord) ? 10 : 1) } [/spoiler] Where to Go From Here? Download the final project with all of the code from the tutorial (including diagonal movements and dog avoidance behaviour). Congratulations, you now know the basics of the A* algorithm and how to implement it! You should be able to: - Implement the A* algorithm in your own game - Refine it as necessary (by allowing different kind of terrain, better heuristics, etc...) and optimize it A great place to go to for more information on the A* algorithm is Amit’s A* Pages. If you have any questions or comments about this tutorial, please join the forum discussion below!
https://www.raywenderlich.com/1734-how-to-implement-a-pathfinding-with-swift
CC-MAIN-2021-17
en
refinedweb
26362/how-to-convert-a-unicode-string-to-string How do you convert a Unicode string (containing extra characters like £ $, etc.) into a Python string? Here is an easy solution: def numberToBase(n, b): ...READ MORE Hey, To split a string you can use ...READ MORE Hey, @Roshni, It is very simple to execute, ...READ MORE You need to decode the bytes object ...READ MORE suppose you have a string with a ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE you are passing the parsed datetime object to ...READ MORE Understand that every 'freezing' application for Python ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/26362/how-to-convert-a-unicode-string-to-string
CC-MAIN-2021-17
en
refinedweb
Hello, folks! Good to see you all again! 🙂 Today, we will be focusing on an Important Error Metric – Recall in Python. Let us begin! Table of Contents First, what is an Error Metric? In the domain of data science and Machine Learning, where we are required to implement the model for predictions and real-life problems, it is very important for us to understand the effect of every model or algorithm on the data values. Now, the question arises that How are we going to check for the effect of every model on our data? This is when Error metrics comes into picture. Error metrics are the different aspects through which we can check for the accuracy and closeness of the model for the data values. There are various error metrics for regression as well as classification model. Some of which includes, Today, we will be focusing on Recall in Python as the error metric! Recall in Python as an Error Metric! “Recall” is a Classification error metric. It evaluates the outcome of the classification algorithms for which the target/response value is a category. Basically, Recall in Python defines the amount of the values that are predicted rightly and are actually correctly labelled. By this, we mean to say, it represents the percentage of values that were actually rightly labelled and are now predicted correctly as well. Let us try to understand this with the help of an example! Consider a variable ‘Pole’ with values ‘True, False’. Now, the job of Recall error metric would be to find out how well the model works in the below scenario which is, how many values which were labelled as True and actually predicted as True samples. So, technically speaking, recall is the error metric that accounts for the ability of the Classifies to predict the positive labelled samples correctly. Recall = True Positive/ (True Positive + False Negative) Let us now implement the concept of Recall with various examples in the below section. 1. Recall with Decision Trees Let us begin with importing the dataset! We have used Bike Prediction dataset and have imported it using pandas.read_csv() function. You can find the dataset here. import pandas BIKE = pandas.read_csv("Bike.csv") Splitting the dataset We have segregated the dataset into training and testing dataset using train_test_split() function. ) Now, time to define the Error Metrics! We have created a customized function ‘err_metric’ and have calculated the precision, recall, accuracy and f1 score as shown below– # Error metrics -- Confusion matrix\FPR\FNR\f1 score\) Implementing the model! Let us now apply the Decision Tree model on our dataset. We have used DecisionTreeClassfier() method to apply it on our data. #Decision Trees decision = DecisionTreeClassifier(max_depth= 6,class_weight='balanced' ,random_state =0).fit(X_train,Y_train) target = decision.predict(X_test) targetclass_prob = decision.predict_proba(X_test)[:, 1] confusion_matrix = pd.crosstab(Y_test,target) err_metric(confusion_matrix) Output: As seen below, we get the value of Recall as 0.57 i.e. 57% which means 57% of the data that is actually correctly labelled is predicted rightly. Precision value of the model: 0.25 Accuracy of the model: 0.6028368794326241 Recall value of the model: 0.5769230769230769 Specificity of the model: 0.6086956521739131 False Positive rate of the model: 0.391304347826087 False Negative rate of the model: 0.4230769230769231 f1 score of the model: 0.3488372093023256 2. Recall in Python using sklearn library Python sklearn offers us with recall_score() method that depicts the recall value for a set of data values. Syntax: recall_score(x, y, average='weighted') - x: Actual values - y: Predicted set of values - average: string, [None, ‘binary’ (default), ‘micro’, ‘macro’, ‘samples’, ‘weighted’] In the below example, x refers to the actual set of values while y represents the predicted values. from sklearn.metrics import recall_score x = [10,20,30,40,50,60] y = [10,21,30,40,50,80] print("Recall value:") recall_score(x, y, average='weighted') Output: Recall value: 0.6666666666666666 Conclusion By this, we have come to the end of this topic. Feel free to comment below, in case you come across any question. For a deeper understanding, try executing the concept of recall with various datasets and do let us know your experience in the comment box! Till then, Stay tuned! See you in the next article! Enjoy Learning with JournalDev 🙂
https://www.journaldev.com/46533/recall-in-python
CC-MAIN-2021-17
en
refinedweb
Bleu score in Python is a metric that measures the goodness of Machine Translation models. Though originally it was designed for only translation models, now it is used for other natural language processing applications as well. The BLEU score compares a sentence against one or more reference sentences and tells how well does the candidate sentence matched the list of reference sentences. It gives an output score between 0 and 1. A BLEU score of 1 means that the candidate sentence perfectly matches one of the reference sentences. This score is a common metric of measurement for Image captioning models. In this tutorial, we will be using sentence_bleu() function from the nltk library. Let’s get started. Table of Contents Calculating the Bleu score in Python To calculate the Bleu score, we need to provide the reference and candidate sentences in the form of tokens. We will learn how to do that and compute the score in this section. Let’s start with importing the necessary modules. from nltk.translate.bleu_score import sentence_bleu Now we can input the reference sentences in the form of a list. We also need to create tokens out of sentences before passing them to the sentence_bleu() function. 1. Input and Split the sentences The sentences in our reference list are: 'this is a dog' 'it is dog 'dog it is' 'a dog, it is' We can split them into tokens using the split function. reference = [ 'this is a dog'.split(), 'it is dog'.split(), 'dog it is'.split(), 'a dog, it is'.split() ] print(reference) Output : [['this', 'is', 'a', 'dog'], ['it', 'is', 'dog'], ['dog', 'it', 'is'], ['a', 'dog,', 'it', 'is']] This is what the sentences look like in the form of tokens. Now we can call the sentence_bleu() function to calculate the score. 2. Calculate the BLEU score in Python To calculate the score use the following lines of code: candidate = 'it is dog'.split() print('BLEU score -> {}'.format(sentence_bleu(reference, candidate))) Output : BLEU score -> 1.0 We get a perfect score of 1 as the candidate sentence belongs to the reference set. Let’s try another one. candidate = 'it is a dog'.split() print('BLEU score -> {}'.format(sentence_bleu(reference, candidate))) Output : BLEU score -> 0.8408964152537145 We have the sentence in our reference set, but it isn’t an exact match. This is why we get a 0.84 score. 3. Complete Code for Implementing BLEU Score in Python Here’s the complete code from this section. from nltk.translate.bleu_score import sentence_bleu reference = [ 'this is a dog'.split(), 'it is dog'.split(), 'dog it is'.split(), 'a dog, it is'.split() ] candidate = 'it is dog'.split() print('BLEU score -> {}'.format(sentence_bleu(reference, candidate ))) candidate = 'it is a dog'.split() print('BLEU score -> {}'.format(sentence_bleu(reference, candidate))) 4. Calculating the n-gram score While matching sentences you can choose the number of words you want the model to match at once. For example, you can choose for words to be matched one at a time (1-gram). Alternatively, you can also choose to match words in pairs (2-gram) or triplets (3-grams). In this section we will learn how to calculate these n-gram scores. In the sentence_bleu() function you can pass an argument with weights corresponding to the individual grams. For example, to calculate gram scores individually you can use the following weights. Individual 1-gram: (1, 0, 0, 0) Individual 2-gram: (0, 1, 0, 0). Individual 3-gram: (1, 0, 1, 0). Individual 4-gram: (0, 0, 0, 1). Python code for the same is given below: from nltk.translate.bleu_score import sentence_bleu reference = [ 'this is a dog'.split(), 'it is dog'.split(), 'dog it is'.split(), 'a dog, it is'.split() ] candidate = 'it is a dog'.split() print('Individual 1-gram: %f' % sentence_bleu(reference, candidate, weights=(1, 0, 0, 0))) print('Individual 2-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 1, 0, 0))) print('Individual 3-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 0, 1, 0))) print('Individual 4-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 0, 0, 1))) Output : Individual 1-gram: 1.000000 Individual 2-gram: 1.000000 Individual 3-gram: 0.500000 Individual 4-gram: 1.000000 Be default the sentence_bleu() function calculates the cumulative 4-gram BLEU score, also called BLEU-4. The weights for BLEU-4 are as follows : (0.25, 0.25, 0.25, 0.25) Let’s see the BLEU-4 code: score = sentence_bleu(reference, candidate, weights=(0.25, 0.25, 0.25, 0.25)) print(score) Output : 0.8408964152537145 That’s the exact score we got without the n-gram weights added. Conclusion This tutorial was about calculating the BLEU score in Python. We learned what it is and how to calculate individual and cumulative n-gram Bleu scores. Hope you had fun learning with us!
https://www.journaldev.com/46659/bleu-score-in-python
CC-MAIN-2021-17
en
refinedweb
Start with the project you created in Tutorial 2: Vector Load and Save. Take the following steps to add the ability to edit objects using automation. In the project workspace, click on the Class View tab and then click to open the "VecAut classes" branch. Right click the "CVecAutApp" class and select Add "Add Variable..." For "Variable Type" enter L_BOOL. For "Variable Declaration" enter m_bToolbarDrawing. This variable is will be set to TRUE when drawing with the automation, and set to FALSE when the pointer (NONE) is selected. Click to open the "CVecAutApp" class branch. Double-click the "CVecAutApp" constructor. Add the following line after the last call to LSettings::SetLicenseFile(): m_bToolbarDrawing = FALSE; It is necessary to derive a class from LContainer and override the "ContainerCallBack" function. This is used to set the m_bToolBarDrawing variable. Right click the "VecAut classes" branch and choose Add "Class..." From "Categories" choose "C++ Class". For "Class Name" enter "MyContainer". For "Base Class" enter "LContainer". Click "Finish". Click "Class view" tab. Double click on "MyContainer" to go to the class definition. Then add the following lines: public: L_INT ContainerCallBack(CONTAINEREVENTTYPE nEventType, L_VOID * pEventData); Replace the auto generated #Include line with the following #Include, (keep in mind, you may have to change the path to where the header files reside): #include "..\..\..\..\..\include\ClassLib\ltwrappr.h" In "MyContainer.cpp" add the following code: L_INT MyContainer::ContainerCallBack(CONTAINEREVENTTYPE nEventType, L_VOID * pEventData) { if (nEventType == CONTAINER_EVENT_TYPE_DRAW) { pCONTAINEROBJECTDATA pContainerObjectData; pContainerObjectData = (pCONTAINEROBJECTDATA)pEventData; switch (pContainerObjectData->fState) { case CONTAINER_STATE_BEGIN: //start theApp.m_bToolbarDrawing = TRUE; break; case CONTAINER_STATE_PROCESS: //drawing //Do nothing break; case CONTAINER_STATE_END://finished theApp.m_bToolbarDrawing = FALSE; break; } } return LContainer::ContainerCallBack (nEventType, pEventData); } Go to the top of MyContainer.cpp, and add the following lines after the last #include: #include "VecAut.h" extern CVecAutApp theApp; Now it is necessary to derive a class from LVectorWindow and override the MsgProcCallBack() member function to process mouse messages. Right click the "VecAut classes" branch and choose Add "Class..." From "Categories" choose "C++ Class". For "Class Name" enter "MyVectorWindow ". For "Base Class" enter "LVectorWindow". Click "Finish". You will get a dialog box stating that "The New Class Wizard could not find the appropriate header file(s)..." Click "OK". Click "Class view" tab. Double click on "MyVectorWindow" to go to the class definition. Then add the following lines: public: LRESULT MsgProcCallBack(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam); Replace the auto generated #Include line with the following #Include, (keep in mind, you may have to change the path to where the header files reside): 1) Double click the "MsgProcCallBack" branch and add the following code: LRESULT MyVectorWindow::MsgProcCallBack(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { switch (uMsg) { case WM_RBUTTONDOWN: { if (theApp.m_bToolbarDrawing == FALSE) { LVectorObject MyObject; POINT point; point.x = LOWORD(lParam); point.y = HIWORD(lParam); HitTest(&point, &MyObject); theApp.m_Automation.EditVectorObject (&MyObject); } } } return LVectorWindow::MsgProcCallBack( hWnd, uMsg, wParam, lParam); } Click on the "Class View" tab. Then click to open the "VecAut classes" branch. Click to open the "CVecAutDoc" branch. Double-click the m_VectorWindow member variable. Change the declaration from: LVectorWIndow m_VectorWindow to: MyVectorWindow m_VectorWindow; Go to the top of MyVectorWindow.cpp, and add the following lines after the last #include: #include "VecAut.h" extern CVecAutApp theApp; Open VecAutDoc.h and after #include "ltwrappr.h" add the following line: #include "MyVectorWindow.h" Now add code to set the container properties. These affect the look of a selected vector object. Click to open the "CVecAutView" branch in the "Class View" tab. Double click the "OnCreate" member function, and add the following local variable: CONTAINERMETRICS ContainerMetrics; Add the following code before the "return 0" //Set the container metrics ContainerMetrics.dwMask = CMF_BORDERCOLOR | CMF_HANDLECOLOR | CMF_HANDLEHEIGHT | CMF_HANDLEWIDTH | CMF_ENABLEHANDLES; ContainerMetrics.crBorder = RGB(255,0,0); ContainerMetrics.crHandle = RGB(0,255,0); ContainerMetrics.nHandleHeight = 8; ContainerMetrics.nHandleWidth = 8; ContainerMetrics.fEnableHandles = TRUE; GetDocument()->m_Container.SetMetrics(&ContainerMetrics); Compile and run the program. Choose File->New from the menu bar. Choose View->Vector Toolbar. Select the line tool and draw several lines. Now select the "None" tool (with the arrow icon). Right-click on one of the vector lines. Handles appear. Drag one of the handles to rotate the line. Drag in the center of the line to move the entire line. Double click to end the "edit mode". Direct Show .NET | C API | Filters Media Foundation .NET | C API | Transforms Media Streaming .NET | C API
https://www.leadtools.com/help/sdk/v21/automation/clib/the-automation-container.html
CC-MAIN-2021-17
en
refinedweb
Create and Open Projects and Solutions Create are grouped by frameworks. your system. For Unity and Xamarin, several other options can be provided as well, for example the path to UnityEngine.dll, the target platform (Android or iOS), the type of app (blank, Android Wear, ...) Note that these options, too, depend on available frameworks on your system, such as the Mono/Android versions installed. When you have specified the project options, just click Create. Once a project has been created, files can be added or removed by right-clicking on the project node in the Solution Explorer. Open existing projects and solutions To open an existing project or solution, choose the corresponding item on the welcome screen. Existing solutions are listed in the left pane. You can search the list in case you have a lot of solutions — just start typing the solution name and a search field will appear. If Rider is already running, press Ctrl+Shift+O or choose .sln file if necessary.from the menu. When opening a project, Rider will generate an In the dialog that appears, you can choose either a solution file (.sln ), a project file (for example, .csproj) — in this case Rider will generate an .sln file if necessary, or you can just select a folder — in this case, you will be able to choose any of the solution files located in the selected folder or in any of its subfolders. If your solution is directory-based, that is all its projects are arranged in subfolders without the .sln file, you can choose the root folder in the Select Path dialog to open all projects from subfolders, for example: Note that whenever you choose a folder in the Select Path dialog, you can always opt for opening it as a folder (as opposed to a .NET project). In this case you will be able to browse all files in this folder but code analysis and other features will not be available in .NET files (*.cs, *.vb, and so on). You can choose any of the recently opened solutions from the list under. Rider also allows opening a project from source control. It can, for example, log in to GitHub, and clone a repo to the local file system. Once done, it will prompt you to choose a solution to open from those available in the repo. When opening a .NET Core project, Rider will automatically restore all packages and will also detect the list of target frameworks from,. Trusted and untrusted solutions Each MSBuild project in your solution contains an MSBuild script that is executed not only when you build the project, but also when you merely open the solution. This happens because the IDE runs MSBuild on the project script to understand the structure of the project and its dependencies, and without this understanding the IDE would be nothing more than a basic text editor. Malicious actors can use this design to base an attack on modified project scripts. To address this security threat, JetBrains Rider relies on the concept of trusted solutions and trusted locations. By default, each solution you open is considered untrusted and you will see a dialog where you can either make this solution trusted and open it or choose not open it. Once a solution was opened, it becomes trusted and you will not be asked for confirmation when you open it again. You can also configure a list of directories where you keep your solutions to make all of them trusted. This list is configurable on the page of JetBrains Rider settings Ctrl+Alt+S. Install custom project templates Rider supports the template system used by the .NET tooling dotnet new, which allows you to use project templates from the dotnet templates gallery as well as custom templates that you can create on your own. There are two ways to install new project templates. You can run dotnet new --install [template package]in the command line, where [template package]is the template id from the dotnet templates gallery. In the New Project/ New Solution dialog, click More Templates on the left, then click Install Template, and then choose a folder or a package file with the custom project template. When the path to the template appears in the list, click Reload. Create custom project template Create a project with the desired structure. You can take any project as a starting point or create a new one using any of the existing templates. To illustrate this, we take the simplest project that contains just one file.ConsoleAppAsyncMain ├── bin ├── obj ├── MyProject.csproj ├── Program.cs Program.cs contains async mainmethod as the default application entry point:using System; using System.Threading.Tasks; namespace MyProject { class Program { static async Task Main(string[] args) { Console.WriteLine("Hello World!"); } } } Copy the project folder to a location from which you will use it as a template. Remove bin, obj, and any other directories and fies that are not related to the source code of the template. In the copied project directory (which is now the template directory), add a folder named .template.config and a file named template.json inside it. Now the template structure should look as follows:MyTemplates ├── ConsoleAppAsyncMain ├── .template.config ├── template.json ├── MyProject.csproj ├── Program.cs Specify template properties in the template descriptor template.json The minimal descriptor can look as shown below, but you can provide a much more detailed configuration if necessary. You can find more information and examples in this Microsoft .NET Blog article.{ "author": "Your Name", "name": "Async Main Console Application", "description": "A project for creating a command-line application that can run on .NET Core on Windows, Linux and macOS, and has an async Main method.", "identity": "YourName.ConsoleApp.1.0", "shortName": "consoleasync", "tags": { "language": "C#", "type": "project" }, "sourceName": "MyProject", "symbols": { "Framework": { "type": "parameter", "description": "The target framework for the project.", "datatype": "choice", "choices": [ { "choice": "netcoreapp2.0" }, { "choice": "netcoreapp3.0" } ], "defaultValue": "netcoreapp2.0" } } } Properties in the above configuration are self-explanatory, except "sourceName": "MyProject". When you create projects using this template, the value of this property will be replaced everywhere with the name you specify for the new project. In our example, the new name will replace the project file name and the namespace in Program.cs. Your new project template is ready, you can install it in the New Project/ New Solution dialog — click More Templates on the left, then click Install Template, and then choose the ConsoleAppAsyncMain folder wherever you saved it. When the path to the template appears in the list, click Reload. As soon as the template is installed, you can find it in the list on the left and use it to create new projects: Manage recent solutions Each time you open a new solution, Rider saves it in its history and lets you quickly reopen it from the Up and Down keys, and then press Delete when it is selected.menu. If you want to remove a solution from your history, choose from the main menu, select a solution using the
https://www.jetbrains.com/help/rider/Creating_and_Opening_Projects_and_Solutions.html
CC-MAIN-2021-17
en
refinedweb
CASL VueCASL Vue This package allows to integrate @casl/ability with Vue 3 application. So, you can show or hide UI elements based on user ability to see them. InstallationInstallation For Vue 2.x: npm install @casl/vue@1.x @casl/ability # or yarn add @casl/vue@1.x @casl/ability # or pnpm add @casl/vue@1.x @casl/ability For Vue 3.x: npm install @casl/vue @casl/ability # or yarn add @casl/vue @casl/ability # or pnpm add @casl/vue @casl/ability Getting startedGetting started This package provides a Vue plugin, several hooks for new Vue Composition API and Can component. The pluginThe plugin The plugin provides reactive Ability instance and optionally defines $ability and $can global properties, in the same way as it was for Vue 2.x. The only difference with the previous version is that it requires Ability instance to be passed as a mandatory argument: import { createApp } from 'vue'; import { abilitiesPlugin } from '@casl/vue'; import ability from './services/ability'; createApp() .use(abilitiesPlugin, ability, { useGlobalProperties: true }) .mount('#app'); Later, we can use either $ability or $can method in any component: <template> <div v- <a @Add Post</a> </div> </template> globalProperties is the same concept as global variables which may make life a bit more complicated because any component has access to them (i.e., implicit dependency) and we need to ensure they don't introduce name collisions by prefixing them. So, instead of exposing $ability and $can as globals, we can use provide/inject API to inject $ability: createApp() .use(abilitiesPlugin, ability) .mount('#app'); And to inject an Ability instance in a component, we can use ABILITY_TOKEN: <template> <div> <div v- <a @Add Post</a> </div> </div> </template> <script> import { ABILITY_TOKEN } from '@casl/vue'; export default { inject: { $ability: { from: ABILITY_TOKEN } } } </script> This is a bit more verbose but allows us to be explicit. This works especially good with new Composition API: <template> <div> <div v- <a @Add Post</a> </div> </div> </template> <script> import { useAbility } from '@casl/vue'; export default { setup() { // some code const { can } = useAbility(); return { // other props can }; } } </script> provideAbility hookprovideAbility hook Very rarely, we may need to provide a different Ability instance for a sub-tree of components, and to do this we can use provideAbility hook: <template> <!-- a template --> </template> <script> import { provideAbility } from '@casl/vue'; import { defineAbility } from '@casl/ability'; export default { setup() { const myCustomAbility = defineAbility((can) => { // ... }); provideAbility(myCustomAbility) } } </script> See CASL guide to learn how to define Abilityinstance. Can componentCan component There is an alternative way we can check permissions in the app, by using Can component. Can component is not registered by the plugin, so we can decide whether we want to use component or v-if + $can method. Also, this helps tree shaking to remove it if we decide to not use it. To register component globally, we can use global API (we can also register component locally in components that use it): import { Can, abilitiesPlugin } from '@casl/vue'; createApp() .use(abilitiesPlugin, ability) .component(Can.name, Can) // component registration .mount('#app'); And this is how we can use it: <template> <Can I="create" a="Post"> <a @Add Post</a> </Can> </template> It accepts default slot and 5 properties: do- name of the action (e.g., read, update). Has an alias I on- checked subject. Has a, an, thisaliases field- checked field <template> <Can I="read" : Yes, you can do this! ;) </Can> </template> not- inverts ability check and show UI if user cannot do some action: <template> <Can not You are not allowed to create a post </Can> </template> passThrough- renders children in spite of what ability.canreturns. This is useful for creating custom components based on Can. For example, if you need to disable button based on user permissions: <template> <div> <Can I="delete" a="Post" passThrough <button :Delete post</button> </Can> </div> </template> Property names and aliasesProperty names and aliases As you can see from the code above, the component name and its property names and values create an English sentence, actually a question. The example above reads as "Can I delete a Post?". There are several other property aliases which allow constructing a readable question. And here is a guidance to help you do this: use the a(or an) alias when you check by Type <Can I="read" a="Post">...</Can> use thisalias when you check action on a particular instance. So, the question can be read as "Can I read this particular post?" <Can I="read" :...</Can> use doand onif you are bored and don't want to make your code more readable :) <Can do="read" :...</Can> <Can do="read" :...</Can> Component vs reactive AbilityComponent vs reactive Ability Let's consider PROS and CONS of both solutions in order to make the decision. Can Component: PROS: - declarative - can cache permissions check results until props or ability changes (currently does not) CONS: - more expensive to create - adds nesting in template - harder to use in complex boolean expressions - harder to pass permission check as a prop to another component Reactive Ability: PROS: - easy to use - declarative in template with v-if - easy to pass as a prop to another component - easy to use in complex boolean expressions (either in js or in template) CONS: - more expensive to check, conditions are re-evaluated on each re-render Despite the fact that reactive ability check is a bit more expensive, they are still very fast and it's recommended to use reactive ability instead of <Can> component. TypeScript supportTypeScript support The package is written in TypeScript, so don't worry that you need to keep all the properties and aliases in mind. If you use TypeScript, your IDE will suggest you the correct usage and TypeScript will warn you if you make a mistake. There are few ways to use TypeScript in a Vue app, depending on your preferences. But let's first define our AppAbility type: import { Ability, AbilityClass } from '@casl/ability'; type Actions = 'create' | 'read' | 'update' | 'delete'; type Subjects = 'Article' | 'User' export type AppAbility = Ability<[Actions, Subjects]>; export const AppAbility = Ability as AbilityClass<AppAbility>; Augment Vue typesAugment Vue types There is no other way for TypeScript to know types of global properties without augmentation. To do this, let's add src/shims-ability.d.ts file with the next content: import { AppAbility } from './AppAbility' declare module 'vue' { interface ComponentCustomProperties { $ability: AppAbility; $can(this: this, ...args: Parameters<this['$ability']['can']>): boolean; } } Composition APIComposition API With composition API, we don't need to augment Vue types and can use useAbility hook: import { useAbility } from '@casl/vue'; import { AppAbility } from './AppAbility'; export default { setup(props) { const { can } = useAbility<AppAbility>(); return () => can('read', 'Post') ? 'Yes' : 'No'; } } Additionally, we can create a separate useAppAbility hook, so we don't need to import useAbility and AppAbility in every component we want to check permissions but instead just import a single hook: import { useAbility } from '@casl/vue'; import { AppAbility } from '../AppAbility'; export const useAppAbility = () => useAbility<AppAbility>(); Options APIOptions API It's also possible to use @casl/vue and TypeScript with options API. By default, ABILITY_TOKEN is typed as InjectionKey<Ability>, to cast it to InjectionKey<AppAbility>, we need to use a separate variable: import { InjectionKey } from 'vue'; import { ABILITY_TOKEN } from '@casl/vue'; // previous content that defines `AppAbility` export const TOKEN = ABILITY_TOKEN as InjectionKey<AppAbility>; and now, when we inject AppAbility instance, we will have the correct types: <script lang="ts"> import { defineComponent } from 'vue'; import { TOKEN } from './AppAbility'; export default defineComponent({ inject: { ability: { from: TOKEN } }, created() { this.ability // AppAbility } }); </script> Read Vue TypeScript for more details. Update Ability instanceUpdate Ability instance Majority of applications that need permission checking support have something like AuthService or LoginService or Session service (name it as you wish) which is responsible for user login/logout functionality. Whenever user login (and logout), we need to update Ability instance with new rules. Usually you will do this in your LoginComponent. Let's imagine that server returns user with a role on login: <template> <form @submit. <input type="email" v- <input type="password" v- <button type="submit">Login</button> </form> </template> <script> import { AbilityBuilder, Ability } from '@casl/ability'; import { ABILITY_TOKEN } from '@casl/vue'; export default { name: 'LoginForm', inject: { $ability: { from: ABILITY_TOKEN } }, data: () => ({ email: '', password: '' }), methods: { login() { const { email, password } = this; const params = { method: 'POST', body: JSON.stringify({ email, password }) }; return fetch('path/to/api/login', params) .then(response => response.json()) .then(({ user }) => this.updateAbility(user)); }, updateAbility(user) { const { can, rules } = new AbilityBuilder(Ability); if (user.role === 'admin') { can('manage', 'all'); } else { can('read', 'all'); } this.$ability.update(rules); } } }; </script> See Define rules to get more information of how to define Ability Want to help?Want to help? Want to file a bug, contribute some code, or improve documentation? Excellent! Read up on guidelines for contributing. If you'd like to help us sustain our community and project, consider to become a financial contributor on Open Collective See Support CASL for details
https://www.npmjs.com/package/@casl/vue
CC-MAIN-2021-17
en
refinedweb
Django: Moving away from project vs app dichotomy Kamal Mustafa Apr 8 I was looking at one of exercise of a developer learning how to build app using Django. His directory structure look like this:- ceri main manage.py ... ceri is the chosen name for this project. But looking deeper in each directory, you'll see that ceri contain only settings.py while in main there's views.py, models.py etc. Directory main basically the meat of this application. I'm baffled looking at this. They probably following Django tutorial and start with:- django-admin startproject ceri Since the "project", at business level is called "ceri", it's logical for this developer to run startproject using "ceri" as well. But then he came to next thing the tutorial ask him to do:- django-admin startapp ... My guess is the dev will stuck for a while thinking for the appropriate name. Giving up, he just pick "main" for the name. This "app" vs "project" IMHO is unnecessary to the beginner. There's nothing that prevent you from putting all your views and models in ceri directory but newbie to Django will be given impression that these "project" vs "app" is required thing. The argument for this is to encourage writing reusable apps from the beginning. If you write your "app" generic enough, you can just plug that app into another project. But 80% of the time, you'll just writing app that specific to that project. Most often, people will pick a generic name for their apps such as "user", "order", "customer" etc, convulating the top namespace. I bet that they will hit situation where third party package they want to use clash with their generically named app first than a situation where they want to reuse their app. It's not that developer should be discouraged from writing generic app but this can be introduced at later chapter. We can reduced confusion and this kind of question at Stackoverflow. Programmers who only code at work What's your opinion on programmers who are not passionate about programming, ha...
https://dev.to/k4ml/django-moving-away-from-project-vs-app-dichotomy-3e7
CC-MAIN-2018-30
en
refinedweb
Customized: John Smith commented on your idea – “Switching to a better version control system”: That’s a brilliant idea. I totally agree with you. Let’s make this happen. Looks a lot better than just: “John Smith commented on something.” Okay, so the first thing we need to do is to define the notice type for comments. Here are the values I used for my notice type: - label: comment_posted - display: Comment Posted - description: someone posted a comment for your content( 'comment_posted', _( 'Comment Posted' ), _( 'someone posted a comment for your content' ) ) signals.post_syncdb.connect( create_notice_types, sender = notification ) except ImproperlyConfigured: print 'Skipping creation of NoticeTypes as notification app not found' The next step is to create a signal handler and wire it to the post_save signal for the Comment model. from django.db.models import signals from django.contrib.comments.models import Comment from django.contrib.contenttypes.models import ContentType from django.utils.translation import ugettext as _ from django.core.exceptions import ImproperlyConfigured from django.db.models import get_app try: notification = get_app( 'notification' ) except ImproperlyConfigured: notification = None # valid content types VALID_CTYPES = [ 'link', 'note', 'update' ] def send_comment_notification( sender, **kwargs ): # no point in proceeding if notification is not available if not notification: return if 'created' in kwargs: if kwargs[ 'created' ] == True: # get comment instance instance = kwargs[ 'instance' ] # get comment's content object and its ctype obj = instance.content_object ctype = ContentType.objects.get_for_model( obj ) # check for valid content types if ctype.name not in VALID_CTYPES: return # for customized notification message if ctype.name == 'link': type = _( 'your link' ) descr = obj.link elif ctype.name == 'note': type = _( 'your note' ) descr = obj.title elif ctype.name == 'update': type = _( 'your status update' ) descr = obj.update # send notification to content owner if notification: data = { 'comment': instance.comment, 'user': instance.user, 'type': type, 'descr': descr, } # notification is sent to the original content object # owner/creator notification.send( [ obj.user ], 'comment_posted', data ) # connect signal signals.post_save.connect( send_comment_notification, sender = Comment ) Nothing fancy or complicated here. Some things to note: - I created a list of content type names called VALID_CTYPES which I use to control which content types trigger the creation of a custom notification. - Based on the content type’s name, I define some text that will be used for in the notification message (type). - I also assign a descriptive name of the item in question (descr). The descriptive name should map to whatever field in the model instance that best describes that instance. You can also use __unicode__() if you wish. You just have to make sure that it is defined in the model class and is suited for display in the notification message. - The notification is only sent to the owner/creator of the content being commented on. Finally, we create a simple template for the notification e-mail and put it in templates/notification/comment_posted/: {% load i18n %} {% blocktrans with comment as comment and type as type and descr as descr and user.get_full_name as user %} {{ user }} commented on {{ type }} - "{{ description }}": {{ comment }} {% endblocktrans %} There you have it. Some food for thought: - In the code above, I only send the notification to the owner/creator of the content. However, you could easily add some code to grab a list of all the users who previously commented on the same content object and spam them as well (ala Facebook)! - If you have permalinks defined for your models, you could pass the actual content object to the notification template and use get_absolute_url() to display a direct link to the object’s view. One Comment can you label which .py file you are working for the code blocks?
http://www.nomadjourney.com/2009/05/customized-comment-notifications-from-django/
CC-MAIN-2018-30
en
refinedweb
[ ] Gilles updated MATH-843: ------------------------ Fix Version/s: 3.1 > Precision.EPSILON: wrong documentation > -------------------------------------- > > Key: MATH-843 > URL: > Project: Commons Math > Issue Type: Bug > Affects Versions: 3.0 > Reporter: Dominik Gruntz > Priority: Minor > Labels: documentation > Fix For: 3.1 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > The documentation of the Field {{EPSILON}} in class {{org.apache.commons.math3.util.Precision}} states, that {{EPSILON}} is the smallest positive number such that {{1 - EPSILON}} is not numerically equal to 1, and its value is defined as 1.1102230246251565E-16. > However, this is NOT the smallest positive number with this property. > Consider the following program: > {code} > public class Eps { > public static void main(String[] args) { > double e = Double.longBitsToDouble(0x3c90000000000001L); > double e1 = 1-e; > System.out.println(e); > System.out.println(1-e); > System.out.println(1-e != 1); > } > } > {code} > The output is: > {code} > % java Eps > 5.551115123125784E-17 > 0.9999999999999999 > true > {code} > This proves, that there are smaller positive numbers with the property that 1-eps != 1. > I propose not to change the constant value, but to update the documentation. The value {{Precision.EPSILON}} is > an upper bound on the relative error which occurs when a real number is > rounded to its nearest Double floating-point number. I propose to update > the api docs in this sense. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/commons-issues/201208.mbox/%3C645847203.21201.1345153418165.JavaMail.jiratomcat@arcas%3E
CC-MAIN-2018-30
en
refinedweb
I’m not sure if there’s an agreed upon existing convention in JavaScript for it, but all of our classes are capitalized, regardless if they are static or not. The only way to find out is to look at the code (or I guess the documentation). The only suggestion I could find was Crockford’s rule to use all caps for global variables (which is what I guess our static classes really are (global to at least the Cesium namespace) and normal capitalization for instance based classes. This isn’t a huge problem now, but could easily be a pain point for our API moving forward (I’ve ran into it myself a few times already). Should we decide on a convention or leave things as is?
https://community.cesium.com/t/do-we-wanted-to-differentiate-between-static-and-non-static-class-naming/14
CC-MAIN-2020-29
en
refinedweb
$ cnpm install koa-wdm A development middleware for koa2.x with typescript for use with webpack bundles. This module requires a minimum of Node v10.16.3 and Webpack v4.41.0, and must be used with a server that accepts koa2.x-style middleware. First thing's first, install the module. npm: npm install --save-dev koa-wdm yarn: yarn add -D koa-wdm import Koa from 'koa' import webpack from 'webpack' import webpackConfig from './webpack.config' import getMiddleware from '../src/index' const app = new Koa() const compiler = webpack(webpackConfig) app.use(getMiddleware(compiler)) app.listen(8080, () => { console.log('start at 8080') }) The middleware accept a object to change some behavior. It also support zero config. The index path for web server, defaults to "index.html". Type: string Default: index.html In the rare event that a user would like to provide a custom logging interface, this property allows the user to assign one. Created with ansi-colors Type: Out(An object consist of 5 function attributes, log, info, debug, warn, error) Default: Logger It controller if ouput the log message. Type: boolean Default: true Options for formatting statistics displayed during and after compile. For more information and property details, please see the webpack documentation. Type: boolean Default: { context: process.cwd(), colors: true, }
https://developer.aliyun.com/mirror/npm/package/koa-wdm/v/1.0.4
CC-MAIN-2020-29
en
refinedweb
The Proximity Box is an interactive box that detects a user's proximity! Last seen at the 2015 San Mateo Maker Fair! The Proximity Box senses a user's proximity by detecting the intensity of infrared light reflected off the user. The modular boxes vary the color of the several hundred LEDs on their surfaces to indicate the proximity of a detected object. If you want to know about the innards, skim past the following picture gallery. The Proximity Box uses infrared sensors for input and addressable LEDs for output. The basic principle of the Proximity Box is to shine infrared light from the interactive surface to a nearby object and then detect the intensity of reflections from objects. Less intensity would mean that the object is further away, since intensity diminishes with distance. One of the challenges of using infrared light is that ambient infrared interferes with the sensing. I found a solution by using the Vishay TSSP4938 sensors, which have an optical filter to select only a particular wavelength of infrared and have an integrated demodulator to pick out modulated infrared from the environment. Modulation can be done by simply blinking an IR LED (or IR LED strips in my case) at the correct modulation frequency as specified in the sensor's datasheet. Vishay's TSSP line is quite special because they can express intensity as a pulse width. The more popular version of the sensor only detects whether or not the light has exceeded a threshold and outputs a binary high or low. With the TSSP sensor, a weakly detected signal would correspond to a narrow (in time) pulse, while a strong one would correspond to a wide pulse. An incredibly strong signal would give a constant level, an "infinite width pulse." Note that the pulse encoding is not a continuous PWM signal, but a single impulse-like pulse beginning when any low intensity IR is detected. In math-land, the pulse is u(t) - u(t-a), where a is proportional to intensity. More about Vishay's TSSP can be found in their application note/brochure. After the sensors detect proximity, corresponding colors were displayed on a very long chain of APA102 RGB LEDs (see my post on the LED). The APA102 RGB LEDs are daisy-chained LEDs with memory and a simple shift-register-like SPI interface. Although it may seem that there are four separate surfaces for the Proximity Box, the LEDs are are actually daisy chained across all four boxes. The hardware, modular in design, is composed of the Motherboard, the Panel, and the Subpanel. A single Motherboard connects to four Panels. Each Panel connects to four Subpanels. In total there are 128 infrared sensors and 384 output LEDs. Since I was a busy student at the time, the hardware was designed with lack of time on mind. For example, through-hole components were chosen over SMD, since it's easier to solder or probe through-hole components than SMD. Another instance is that I used a resistor network in a DIP form instead of individual ones to save time while soldering. The motherboard houses the microcontroller, supporting circuity, and connectors. The microcontroller used was the MSP430 (MSP-EX430F5529LP), chosen for its availability at the time. Supporting circuity includes a 555 timer for modulating the infrared LEDs without controller intervention, a MUX for selecting the infrared LEDs, and a level shifter. The motherboard also includes 16 potentiometers to tune the intensity of the infrared LEDs in case a stronger or weaker signal is needed. The motherboard has no power circuity, because the Proximity Box is powered by a PSU taken from a computer. Just three wires from the mother board are needed to push data to the output LED strip. The motherboard is connected to four panel circuits which correspond to the four boxes. The panel circuits contain TI's TCA9535 I2C I/O expanders that read in the pulses from the TSSP sensors. Since no hardware interrupts are available at this scale, the I/O expanders were simply sampled very quickly to obtain the pulse width. The panels also have four FETs that modulate the infrared LEDs. The subpanels were not actual circuit boards, but a schematic exists for it. Each panel is connected to four subpanels, which corresponds to the four rows of sensors on each box. The subpanels were constructed with plenty of hot glue and ribbon cables. The gist of the code is that I had to measure the pulse width (indicating signal strength) of every sensor through I2C, convert pulse width to pixel color, and then send the data to the LEDs. The challenge was doing this fast enough for the system to respond in real time. Thinking that I was working with a large and slow system, I did a few hacks with loop unrolling, a few I2C header-skipping shortcuts specific to the I/O expander, and parallelized polling. It turns out I underestimated my hacks, so the ProxBox updates so fast (I don't remember the exact rate, but it was definitely over 100 Hz), it's almost like a shadow under your finger tips. The code for this project is in this GitHub repository (used with the Code Composer IDE). I have to admit, some of the code is disgusting, since I added a lot of fixes in the last minute. The following is a cleaned up version of my main.c. #include <msp430.h> #include <stdint.h> #include "led_drive.h" #include "sensor.h" ... setup/configuration function definitions here int main(void) { WDTCTL = WDTPW | WDTHOLD; // Stop watchdog ti config_clock(); config_spi(); config_i2c(); config_gpio(); // LED Table uint8_t led_table[TABLE_SIZE]; // [ 0x0E + Brightness | B | G | R ] off(led_table, NUM_LED); uint32_t i, reset_count=0; // SENSE Table uint16_t sense_table[NUM_SENSE]; for(;;) { P1OUT ^= 0x01; // blinky gather(sense_table); off(led_table, NUM_LED); process(led_table, sense_table); put_data(led_table, NUM_LED); } // main loop return 0; } 1/21/16: added block diagram 1/1/16: Revised wording. I finally found my found and fixed my PCB files. The KiCad people decided that it would be a good idea to change the file format and break everything. Written on the 13th of November in 2015
http://jgtao.me/content/11-13-15/
CC-MAIN-2020-29
en
refinedweb
Original article can be found here (source): Deep Learning on Medium Our approach is - extract the paragraphs of each research paper (processed data) (code section) - get contextualized embedding from a pretrained BERT which was fine-tuned on Natural Language Inference (NLI) data (code section) - apply contextualized embedding on query (code section) - apply cosine similarity on both the paragraphs and the query, to get the most similar paragraphs and then return the papers of these paragraphs (code section) A- What is BERT ? Multiple approaches have been proposed for language modeling, they can be classified into 2 main categories - recurrent based seq2seq models - Transformer based models (BERT) Recurrent Based seq2seq models use LSTM (a modification to RNN), used in an encoder decoder architecture The encoder is built using Bidirectional LSTM, to encode the input text, to build an internal encoding, The decoder receives both, the generated internal encoding and the reference words, the decoder also contains LSTM, to be able generate output one word at a time. You can know more about this approach in our series on using seq2seq LSTM based models for text summarization, where we go into much details on how this models are built. Transformer based Models Another research efforts, tried to build the language models without using recurrent models, to give the system even more power in working with long sentences, as LSTM finds it difficult to represent long sequence of data, hence long sentences. Transformers are built to rely on attention models, specifically self-attention, which are neural networks built to understand how to attend to specific words in the input sentences, transformers are also built in an encoder decoder structure. The encoder and decoder, each contains a set of blocks, Encoder : contains a stack of blocks each containing (self-attention , feed forward network), where it receives the input, and in a bidirectional manner, attends to all text from the input, the previous and the next words, then passes it to the feed forward network, this structure (block) is repeated multiple times according to the number of blocks in the encoder Decoder : Then after encoding is done, the encoder passes this internal encoding to the decoder step, which also contains multiple blocks, where each of them contains the same self-attention* (with a catch) and an encoder decoder attention, then a feed-forward network. *The difference in that self-attention, is that it only attends to the previous words not the whole sentence. So the decoder receives both the reference and the internal encoding of the encoder (same in concept as the encoder of the seq2seq encoder-decoder recurrent model) You can know more about the Transformer architecture in jalammar’s amazing blog Now comes BERT : It turns out, we don’t need entire Transformer to adopt a fine-tunable language model for NLP tasks, we can work with only the decoder like in what OpenAI has proposed, however, since it uses the decoder, the model only trains a forward model, without looking in both the previous and the coming (hence bi-directional), this is why BERT was introduced, where we only use the Transformer Encoder. BERT is a modification to the original Transformer, which only relies on the Encoder structure, we apply the bidirectional manner using only the encoder block, it can seem counter intuitive, which it is !!, as bidirectional conditioning would allow each word to indirectly see itself in a multi-layered context (more about it here), so BERT uses the ingenious method of using MASKS in training. BERT is trained given a huge amount of text, applying masking [MASK] to 15% of the words, then it is trained to predict the MASKED word. We mainly use a pretrained BERT model, and then use it as our corner-stone step for our tasks, which are mainly categorized into 2 main types - Task Specific tasks (Question answering, text summarization , classification, Single sentence tagging, …….) - Building a contextualized word embeddings, which is our goal today. so lets built a contextualized word embeddings There are actually multiple ways to generate the embeddings from BERT encoder blocks (12 blocks in this example) In this tutorial we will focus on the task of using pre-trainined BERT to build the embeddings of sentences, we would simply pass our sentences to pre-trained BERT, to generate our own contextualized embeddings. B- Our Approach: 1. divide the literature dataset for the corona virus COVID-19 into paragraphs, the dataset can be found here in the kaggle competition, (code section) the the processed dataset can be found here, the steps for reading and processing the json files can be found here, where we convert the json files to a csv, we use the same process used by maksimeren 2. Encode the sentences (code section) We use the library provided by UKPLab called sentence-transformers, this library makes it truly easy to use BERT and other architectures like ALBERT and XLNet for sentence embedding, they also provide simple interface to query and cluster data. !pip install -U sentence-transformers then we would download the pre-trained BERT model which was fine-tuned on Natural Language Inference (NLI) data (code section) from sentence_transformers import SentenceTransformer import scipy.spatial import pickle as pkl embedder = SentenceTransformer('bert-base-nli-mean-tokens') then we would encode the list of the paragraphs (the processed data can be found here) corpus = df_sentences_list corpus_embeddings = embedder.encode(corpus,show_progress_bar=True) 3. Encode the query and run similarity (code section) the query are the sentences we need to find answers to, or in other words, search the paragraph dataset for similar paragraphs, hence similar literature papers # Query sentences: queries = ['What has been published about medical care?', 'Knowledge of the frequency, manifestations, and course of extrapulmonary manifestations of COVID-19, including, but not limited to, possible cardiomyopathy and cardiac arrest', 'Use of AI in real-time health care delivery to evaluate interventions, risk factors, and outcomes in a way that could not be done manually', 'Resources to support skilled nursing facilities and long term care facilities.', 'Mobilization of surge medical staff to address shortages in overwhelmed communities .', 'Age-adjusted mortality data for Acute Respiratory Distress Syndrome (ARDS) with/without other organ failure – particularly for viral etiologies .']query_embeddings = embedder.encode(queries,show_progress_bar=True) Then we would run cosine similarity between both the embedded query and the previously embedded paragraphs, and return the 5 most similar paragraphs, and the details of their papers # Find the closest 5 sentences of the corpus for each query sentence based on cosine similarity closest_n = 5 print("\nTop 5 most similar sentences in corpus:") for query, query_embedding in zip(queries, query_embeddings): distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0] results = zip(range(len(distances)), distances) results = sorted(results, key=lambda x: x[1]) for idx, distance in results[0:closest_n]: print("Score: ", "(Score: %.4f)" % (1-distance) , "\n" ) print("Paragraph: ", corpus[idx].strip(), "\n" ) row_dict = df.loc[df.index== corpus[idx]].to_dict() print("paper_id: " , row_dict["paper_id"][corpus[idx]] , "\n") print("Title: " , row_dict["title"][corpus[idx]] , "\n") print("Abstract: " , row_dict["abstract"][corpus[idx]] , "\n") print("Abstract_Summary: " , row_dict["abstract_summary"][corpus[idx]] , "\n") C- Results ========================================================= ==========================Query========================== === What has been published about medical care? ========= ========================================================= Score: (Score: 0.8296) Paragraph: how may state authorities require persons to undergo medical treatment Title: Chapter 10 Legal Aspects of Biosecurity----------------------------------Score: (Score: 0.8220) Paragraph: to identify how one health has been used recently in the medical literature Title: One Health and Zoonoses: The Evolution of One<br>Health and Incorporation of Zoonoses ========================================================= ==========================Query============================== === Knowledge of the frequency, manifestations, and course of extrapulmonary manifestations of COVID-19, including, but not limited to, possible cardiomyopathy and cardiac arrest ===== =========================================================Score: (Score: 0.8139) Paragraph: clinical signs in hcm are explained by leftsided chf complications of arterial thromboembolism ate lv outflow tract obstruction or arrhythmias capable of Title: Chapter 150 Cardiomyopathy------------------------------------ Score: (Score: 0.7966) Paragraph: the term arrhythmogenic cardiomyopathy is a useful expression that refers to recurrent or persistent ventricular or atrial arrhythmias in the setting of a normal echocardiogram the most commonly observed rhythm disturbances are pvcs and ventricular tachycardia vt however atrial rhythm disturbances may be recognized including atrial fibrillation paroxysmal or sustained atrial tachycardia and atrial flutter Title: Chapter 150 Cardiomyopathy========================================================= ==========================Query========================== === Use of AI in real-time health care delivery to evaluate interventions, risk factors, and outcomes in a way that could not be done manually ========================================================= Score: (Score: 0.8002) Paragraph: conclusion several methods and approaches could be used in the healthcare arena time series is an analytical tool to study diseases and resources management at healthcare institutions the flexibility to follow up and recognize data patterns and provide explanations must not be neglected in studies of healthcare interventions in this study the arima model was introduced without the use of mathematical details or other extensions to the model the investigator or the healthcare organization involved in disease management programs could have great advantages when using analytical methodology in several areas with the ability to perform provisions in many cases despite the analytical possibility by statistical means this approach does not replace investigators common sense and experience in disease interventionsTitle: Disease management with ARIMA model in time<br>series ------------------------------------------- Score: (Score: 0.7745) Paragraph: whether the health sector is in fact more skillintensive than all other sectors is an empirical question as is that of whether the incidence of illness and the provision and effectiveness of health care are independent of labour type in a multisectoral model with more than two factors possibly health carespecific and other reallife complexities the foregoing predictions are unlikely to be wholly true nevertheless these effects will still operate in the background and thus give a useful guide to the interpretation of the outcomes of such a modelTitle: A comparative analysis of some policy options<br>to reduce rationing in the UK's NHS: Lessons from a<br>general equilibrium model incorporating positive<br>health effects for the full results refer to our code notebook D- Comments We were truly impressed by both, - the ease of use of the sentence-transformers library, which made it extremely easy to apply BERT for embedding and extracting similarity. - We were truly impressed by the quality of the results, as BERT is built on the concept of representing the contexts of text, using it resulted in truly relevant answers - We believe that by using the paragraphs themselves, not just the abstract of papers, we are able to return not just the most similar paper, but the most similar part inside the papers. - We hope that by this, we are helping to structure the world of continuously increasing literature research efforts in the fight against this corona covid-19 virus. E- References - We use the library provided by UKPLab called sentence-transformers, this library makes it truly easy to use BERT and other architectures like ALBERT,XLNet for sentence embedding, they also provide simple interface to query and cluster data. - We have used the code from maksimeren for data processing, we truly like to thank him. - We used the concept of drawing BERT, discussed here Jay Alammar in illustrating how our architecture works, we also referred to multiple illustrations and explanations he made, his blogs are extremely informative and easily understood. - We used the pre-trained models disccess in Conneau et al., 2017, show in the InferSent-Paper (Supervised Learning of Universal Sentence Representations from Natural Language Inference Data) that training on Natural Language Inference (NLI) data can produce universal sentence embeddings. - Attention is all you need, the transformer paper - BERT, BERT code
https://mc.ai/covid-19-bert-literature-search-engine/
CC-MAIN-2020-29
en
refinedweb
- General questions - Programming questions - How do I setup a new SBT project? - How do I deploy Scio jobs to Dataflow? - How do I use the SNAPSHOT builds of Scio? - How do I unit test pipelines? - How do I combine multiple input sources? - How do I log in a job? - How do I use Beam’s Java API in Scio? - What are the different types of joins and performance implication? - How to create Dataflow job template? - How do I cancel a job after certain time period? - Why can’t I have an SCollection inside another SCollection? - BigQuery questions - What is BigQuery dataset location? - How stable is the type safe BigQuery API? - How do I work with nested Options in type safe BigQuery? - How do I unit test BigQuery queries? - How do I stream to a partitioned BigQuery table? - How do I invalidate cached BigQuery results or disable cache? - How does BigQuery determines job priority? - Streaming questions - Other IO components - Development environment issues - Common issues - What does “Cannot prove that T1 <:< T2” mean? - How do I fix invalid default BigQuery credentials? - Why are my typed BigQuery case classes not up to date? - How do I fix “SocketTimeoutException” with BigQuery? - Why do I see names like “main@{NativeMethodAccessorImpl...}” in the UI? - How do I fix “RESOURCE_EXHAUSTED” error? - Can I use “scala.App” trait instead of “main” method? - How to inspect the content of an SCollection? - How do I improve side input performance? - How do I control concurrency (number of DoFn threads) in Dataflow workers - How to manually investigate a Cloud Dataflow worker General questions What’s the status of Scio? Scio is widely being used for production data pipelines at Spotify and is our preferred framework for building new pipelines on Google Cloud. We run Scio on Google Cloud Dataflow service in both batch and streaming modes. However it’s still under heavy development and there might be minor breaking API changes from time to time. Who’s using Scio? Spotify uses Scio for all new data pipelines running on Google Cloud Platform, including music recommendation, monetization, artist insights and business analysis. We also use BigQuery, Bigtable and Datastore heavily with Scio. We use Scio in both batch and streaming mode. As of mid 2017, there’re 200+ developers and 700+ production pipelines. The largest batch job we’ve seen uses 800 n1-highmem-32 workers (25600 CPUs, 166.4TB RAM) and processes 325 billion rows from Bigtable (240TB). We also have numerous jobs that process 10TB+ of BigQuery data daily. On the streaming front, we have many jobs with 30+ n1-standard-16 workers (480 CPUs, 1.8TB RAM) and SSD disks for real time machine learning or reporting. For a incomplete list of users, see the Powered By page. What’s the relationship between Scio and Apache Beam? Scio is a Scala API built on top of Apache Beam’s Java SDK. Scio aims to offer a concise, idiomatic Scala API for a subset of Beam’s features, plus extras we find useful, like REPL, type safe BigQuery, and IO taps. What’s the relationship between Scio and Google Cloud Dataflow? Scio (version before 0.3.0) was originally built on top of Google Cloud Dataflow’s Java SDK. Google donated the code base to Apache and renamed it Beam. Cloud Dataflow became one of the supported runners, alongside Apache Flink & Apache Spark. Scio 0.3.x is built on top of Beam 0.6.0 and 0.4.x is built on top of Beam 2.x. Many users run Scio on the Dataflow runner today. How does Scio compare to Scalding or Spark? Check out the wiki page on Scio, Scalding and Spark. Also check out Big Data Rosetta Code for some snippets. What are GCE availability zone and GCS bucket location? - GCE availability zone is where the Google Cloud Dataflow service spins up VM instances for your job, e.g. us-east1-a. - Each GCS bucket ( gs://bucket) has a storage class and bucket location that affects availability, latency and price. The location should be close to GCE availability zone. Dataflow uses --stagingLocationfor job jars, temporary files and BigQuery I/O. Programming questions How do I setup a new SBT project? Read the documentation. How do I deploy Scio jobs to Dataflow? When developing locally, you can do sbt "runMain MyClass ... or just runMain MyClass ... in the SBT console without building any artifacts. When deploying to the cloud, we recommend using sbt-pack or sbt-native-packager plugin instead of sbt-assembly. Unlike assembly, they pack dependency jars in a directory instead of merging them, so that we don’t have to deal with merge strategy and dependency jars can be cached by Dataflow service. At Spotify we pack jars with sbt-pack, build docker images with sbt-docker together with orchestration components e.g. Luigi or Airflow and deploy them with Styx. How do I use the SNAPSHOT builds of Scio? Commits to Scio master are automatically published to Sonatype via continuous integration. To use the latest SNAPSHOT artifact, add the following line to your build.sbt. resolvers += Resolver.sonatypeRepo("snapshots") Or you can configure SBT globally by adding the following to ~/.sbt/1.0/global.sbt. resolvers ++= Seq( Resolver.sonatypeRepo("snapshots") // other resolvers ) How do I unit test pipelines? Any Scala or Java unit testing frameworks can be used with Scio but we provide some utilities for ScalaTest. PipelineTestUtils- utilities for testing parts of a pipeline JobTest- for testing pipelines end-to-end with complete arguments and IO coverage SCollectionMatchers- ScalaTest matchers for SCollection PipelineSpec- shortcut for ScalaTest FlatSpecwith utilities and matchers The best place to find example useage of JobTest and SCollectionMatchers are their respective tests in JobTestTest and SCollectionMatchersTest. For more examples see: - scio-examples - How do I combine multiple input sources? How do I combine multiple input sources, e.g. different BigQuery tables, files located in different GCS buckets? You can combine SCollections from different sources into one using the companion method SCollection.unionAll, for example: import com.spotify.scio._ import com.spotify.scio.avro._ import com.spotify.scio.values._ import com.spotify.scio.avro.TestRecord object MyJob { def main(cmdlineArgs: Array[String]): Unit = { val (sc, args) = ContextAndArgs(cmdlineArgs) val collections = Seq("gs://bucket1/data/*.avro", "gs://bucket2/data/*.avro") .map(sc.avroFile[TestRecord](_)) val all = SCollection.unionAll(collections) } } How do I log in a job? You can log in a Scio job with most common logging libraries but slf4j is included as a dependency. Define the logger instance as a member of the job object and use it inside a lambda. import com.spotify.scio._ import org.slf4j.LoggerFactory object MyJob { private val logger = LoggerFactory.getLogger(this.getClass) def main(cmdlineArgs: Array[String]): Unit = { val (sc, args) = ContextAndArgs(cmdlineArgs) sc.parallelize(1 to 100) .map { i => logger.info(s"Element $i") i * i } // ... } } How do I use Beam’s Java API in Scio? Scio exposes a few things to allow easy integration with native Beam Java API, notably: ScioContext#customInputto apply a PTransform[_ >: PBegin, PCollection[T]](source) and get a SCollection[T]. SCollection#applyTransformto apply a PTransform[_ >: PCollection[T], PCollection[U]]and get a SCollection[U] SCollection#saveAsCustomOutputto apply a PTransform[_ >: PCollection[T], PDone](sink) and get a ClosedTap[T]. See BeamExample.scala for more details. Custom I/O can also be tested via the `JobTest` harness. What are the different types of joins and performance implication? - Inner ( a.join(b)), left ( a.leftOuterJoin(b)), outer ( a.fullOuterJoin(b)) performs better with a large LHS. So ashould be the larger data set with potentially more hot keys, i.e. key with many values. Every key-value pair from every input is shuffled. join/ leftOuterJoinmay be replaced by hashJoin/ leftHashJoinif the RHS is small enough to fit in memory (e.g. < 1GB). The RHS is used as a multi-map side input for the LHS. No shuffle is performed. - Consider skewedJoinif some keys on the LHS are extremely hot. - Consider sparseOuterJoinif you want a full outer join where RHS is much smaller than LHS, but may not fit in memory. - Consider cogroupif you need to access value groups of each key. MultiJoinsupports inner, left, outer join and cogroup of up to 22 inputs. - For multi-joins larger inputs should be on the left, e.g. size(a) >= size(b) >= size(c) >= size(d)in MultiJoin(a, b, c, d). - Also see this section on Cloud Dataflow Shuffle service. How to create Dataflow job template? For Apache Beam based Scio (version >= 0.3.0) use DataflowRunner and specify templateLocation option. For example in CLI --templateLocation=gs://<bucket>/job1. Read more about templates here. How do I cancel a job after certain time period? You can wait on the ScioResult and call the internal PipelineResult#cancel() method if a timeout exception happens. import com.spotify.scio._ import scala.concurrent.duration._ object MyJob { def main(cmdlineArgs: Array[String]): Unit = { val (sc, args) = ContextAndArgs(cmdlineArgs) // ... val closedSc: ScioExecutionContext = sc.run() val result: ScioResult = closedSc.waitUntilFinish(1.minute, cancelJob = true) } } Why can’t I have an SCollection inside another SCollection? You cannot have an SCollection inside another SCollection, i.e. anything with type SCollection[SCollection[T]]. To explain this we have to go back to the relationship between ScioContext and SCollection. Every ScioContext represents a unique pipeline and every SCollection represents a stage in the pipeline execution, i.e. the state of the pipeline after some transforms has be applied. We start a pipeline code with val sc = ..., create new SCollections with methods on sc, e.g. sc.textFile, and transform them with methods like .map, .filter, .join. Therefore each SCollection can trace its root to one single sc. The pipeline is submitted for execution when we call sc.run(). Hence we cannot have an SCollection inside another SCollection just as we cannot have a pipeline inside another pipeline. BigQuery questions What is BigQuery dataset location? - Each BigQuery dataset has a location (e.g. US, EU) and every table inside are stored in the same location. Tables in a JOINmust be from the same region. Also one can only import/export tables to a GCS bucket in the same location. Starting from v0.2.1, Scio will detect the dataset location of a query and create a staging dataset for ScioContext#bigQuerySelectand @BigQueryType.fromQuery. This location should be the same as that of your --stagingLocationGCS bucket. The old -Dbigquery.staging_dataset.locationflag is removed. Because of these limitations and performance reasons, make sure --zone, --stagingLocation and location of BigQuery datasets are consistent. -Dbigquery.staging_dataset.location How stable is the type safe BigQuery API? Type Safe BigQuery API is considered stable and widely used at Spotify. There are several caveats however: - Both legacy and SQL syntax are supported although the SQL syntax is highly recommended - The system will detect legacy or SQL syntax and choose the correct one - To override auto-detection, start the query with either #legacysqlor #standardsqlcomment line - Legacy syntax is less predictable, especially for complex queries and may be disabled in the future - Case classes generated by @BigQueryType.fromTableor @BigQueryType.fromQueryare not recognized in IntelliJ IDEA, but see this section for a workaround How do I work with nested Options in type safe BigQuery? Any nullable field in BigQuery is translated to Option[T] by the type safe BigQuery API and it can be clunky to work with rows with multiple or nested fields. For example: def doSomething(s: String): Unit = () import com.spotify.scio.bigquery.types.BigQueryType @BigQueryType.fromSchema("""{ |"fields": [{ | "type":"RECORD", | "mode": "NULLABLE", | "name":"user", | "fields":[ | {"mode": "NULLABLE", "name":"email", "type": "STRING"}, | {"mode": "REQUIRED","name":"name","type":"STRING"}] |}]}""".stripMargin) class Row def doSomethingWithRow(row: Row) = { if (row.user.isDefined) { // Option[User] val email = row.user.get.email // Option[String] if (email.isDefined) { doSomething(email.get) } } } For comprehension is a nicer alternative in these cases: def doSomethingWithRowUsingFor(row: Row) = { val e: Option[String] = for { u <- row.user e <- u.email } yield e e.foreach(doSomething) } Also see these slides and this blog article. How do I unit test BigQuery queries? BigQuery doesn’t provide a way to unit test query logic locally, but we can query the service directly in an integration test. Take a look at BigQueryIT.scala. MockBigQuery will create temporary tables on the service, feed them with mock data, and substitute table references in your query string with the mocked ones. How do I stream to a partitioned BigQuery table? Currently there is no way to create a partitioned BigQuery table via Scio/Beam when streaming, however it is possible to stream to a partitioned table if it is already created. This can be done by using fixed windows and using the window bounds to infer date. As of Scio 0.4.0-beta2 this looks as follows: import com.spotify.scio._ import org.apache.beam.sdk.values.ValueInSingleWindow import org.apache.beam.sdk.transforms.SerializableFunction import org.apache.beam.sdk.transforms.windowing.IntervalWindow import com.google.api.services.bigquery.model.TableRow import org.apache.beam.sdk.io.gcp.bigquery.{BigQueryIO, TableDestination} import BigQueryIO.Write.{CreateDisposition, WriteDisposition} import org.joda.time.format.DateTimeFormat import org.joda.time.{DateTimeZone, Duration} class DayPartitionFunction() extends SerializableFunction[ValueInSingleWindow[TableRow], TableDestination] { override def apply(input: ValueInSingleWindow[TableRow]): TableDestination = { val partition = DateTimeFormat.forPattern("yyyyMMdd").withZone(DateTimeZone.UTC) .print(input.getWindow.asInstanceOf[IntervalWindow].start()) new TableDestination("project:dataset.partitioned$" + partition, "") } } object BQPartitionedJob { def myStringToTableRowConversion: String => TableRow = ??? def main(cmdlineArgs: Array[String]): Unit = { val (sc, args) = ContextAndArgs(cmdlineArgs) sc.pubsubSubscription[String]("projects/data-university/topics/data-university") .withFixedWindows(Duration.standardSeconds(30)) // Convert to `TableRow` .map(myStringToTableRowConversion) .saveAsCustomOutput( "SaveAsDayPartitionedBigQuery", BigQueryIO.writeTableRows().to( new DayPartitionFunction()) .withWriteDisposition(WriteDisposition.WRITE_APPEND) .withCreateDisposition(CreateDisposition.CREATE_NEVER) ) sc.run() } } In Scio 0.3.X it is possible to achieve the same behaviour using SerializableFunction[BoundedWindow, String] and BigQueryIO.Write.to. It is also possible to stream to separate tables with a Date suffix by modifying DayPartitionFunction, specifying the Schema, and changing the CreateDisposition to CreateDisposition.CREATE_IF_NEEDED. How do I invalidate cached BigQuery results or disable cache? Scio’s BigQuery client in Scio caches query result in system property bigquery.cache.directory, which defaults to $PWD/.bigquery. Use rm -rf .bigquery to invalidate all cached results. To disable caching, set system property bigquery.cache.enabled to false. How does BigQuery determines job priority? By default Scio runs BigQuery jobs with BATCH priority except when in the REPL where it runs with INTERACTIVE. To override this, set system property bigquery.priority to either BATCH or INTERACTIVE. Streaming questions How do I update a streaming job? Dataflow allows streaming jobs to be updated on the fly by specifying --update, along with --jobName=[your_job] on the command line. See for detailed docs. Note that for this to work, Dataflow needs to be able to identify which transformations from the original job map to those in the replacement job. The easiest way to do so is to give unique names to transforms in the code itself. In Scio, this can be achieved by calling .withName() before applying the transform. For example: import com.spotify.scio._ def main(cmdlineArgs: Array[String]): Unit = { val (sc, args) = ContextAndArgs(cmdlineArgs) sc.textFile(args("input")) .withName("MakeUpper").map(_.toUpperCase) .withName("BigWords").filter(_.length > 6) } In this example, the map’s transform name is “MakeUpper” and the filter’s is “BigWords”. If we later decided that we want to count 6 letter words as “big” too, then we can change it to _.length > 5, and because the transform name is the same the job can be updated on the fly. Other IO components How do I access various files outside of a ScioContext? - For Scio version >= 0.4.0 Starting from Scio 0.4.0 you can use Apache Beam’s Filesystems abstraction: import org.apache.beam.sdk.io.FileSystems // the path can be any of the supported Filesystems, e.g. local, GCS, HDFS def readmeResource = FileSystems.matchNewResource("gs://<bucket>/README.md", false) def readme = FileSystems.open(readmeResource) - For Scio version < 0.4.0 This part is GCS specific. You can get a `GcsUtil` instance from ScioContext, which can be used to open GCS files in read or write mode. import com.spotify.scio.ContextAndArgs import org.apache.beam.sdk.extensions.gcp.options.GcsOptions def main(cmdlineArgs: Array[String]): Unit = { val (sc, args) = ContextAndArgs(cmdlineArgs) val gcsUtil = sc.optionsAs[GcsOptions].getGcsUtil // ... } How do I reduce Datastore boilerplate? Datastore Entity class is actually generated from Protobuf which uses the builder pattern and very boilerplate heavy. You can use the Magnolify library to seamlessly convert bewteen case classes and Entitys. See MagnolifyDatastoreExample.scala for an example job and MagnolifyDatastoreExampleTest.scala for tests. How do I throttle Bigtable writes? Currently Dataflow autoscaling may not work well with large writes BigtableIO. Specifically It does not take into account Bigtable IO rate limits and may scale up more workers and end up hitting the limit and eventually fail the job. As a workaround, you can enable throttling for Bigtable writes in Scio 0.4.0-alpha2 or later. val btProjectId = "" val btInstanceId = "" val btTableId = "" import com.spotify.scio.values._ import com.spotify.scio.bigtable._ import com.google.cloud.bigtable.config.{BigtableOptions, BulkOptions} import com.google.bigtable.v2.Mutation import com.google.protobuf.ByteString def main(cmdlineArgs: Array[String]): Unit = { // ... val data: SCollection[(ByteString, Iterable[Mutation])] = ??? val btOptions = BigtableOptions.builder() .setProjectId(btProjectId) .setInstanceId(btInstanceId) .setBulkOptions(BulkOptions.builder() .enableBulkMutationThrottling() .setBulkMutationRpcTargetMs(10) // lower latency threshold, default is 100 .build()) .build() data.saveAsBigtable(btOptions, btTableId) // ... } How do I use custom Kryo serializers? Define a registrar class that extends IKryoRegistrar and annotate it with @KryoRegistrar. Note that the class name must ends with KryoRegistrar, i.e. MyKryoRegistrar for Scio to find it. trait UserRecord trait AccountRecord import com.twitter.chill.KSerializer import com.esotericsoftware.kryo.Kryo import com.esotericsoftware.kryo.io.{Input, Output} class UserRecordSerializer extends KSerializer[UserRecord] { def read(x$1: Kryo, x$2: Input, x$3: Class[UserRecord]): UserRecord = ??? def write(x$1: Kryo, x$2: Output, x$3: UserRecord): Unit = ??? } class AccountRecordSerializer extends KSerializer[AccountRecord] { def read(x$1: Kryo, x$2: Input, x$3: Class[AccountRecord]): AccountRecord = ??? def write(x$1: Kryo, x$2: Output, x$3: AccountRecord): Unit = ??? } = { // register serializers for additional classes here k.forClass(new UserRecordSerializer) k.forClass(new AccountRecordSerializer) //... } } Registering just the classes can also improve Kryo performance. By registering, classes will be serialized as numeric IDs instead of fully qualified class names, hence saving space and network IO while shuffling. make trait MyRecord1 trait MyRecord2 = { k.registerClasses(List(classOf[MyRecord1], classOf[MyRecord2])) } } What Kryo tuning options are there? See KryoOptions.java for a complete list of available Kryo tuning options. These can be passed via command line, for example: --kryoBufferSize=1024 --kryoMaxBufferSize=8192 --kryoReferenceTracking=false --kryoRegistrationRequired=true Among these, --kryoRegistrationRequired=true might be useful when developing to ensure that all data types in the pipeline are registered. Development environment issues How do I keep SBT from running out of memory? SBT might run out of memory sometimes and show an OutOfMemoryError: Metaspace error. Override default memory setting with -mem <integer>, e.g. sbt -mem 1024. How do I fix SBT heap size error in IntelliJ? If you encounter an SBT error with message “Initial heap size set to a larger value than the maximum heap size”, that is because IntelliJ has a lower default -Xmx for SBT than -Xms in our .jvmopts. To fix that, open Preferences -> Build, Execution, Deployment -> Build Tools -> sbt, and update Maximum heap size, MB to 2048. How do I fix “Unable to create parent directories” error in IntelliJ? You might get an error message like java.io.IOException: Unable to create parent directories of /Applications/IntelliJ IDEA CE.app/Contents/bin/.bigquery/012345abcdef.schema.json. This usually happens to people who run IntelliJ IDEA with its bundled JVM. There are two solutions. - Install JDK from java.com and switch to it by following the “All platforms: switch between installed runtimes” section in this page. - Override the bigquery .cachedirectory as a JVM compiler parameter. On the bottom right of the IntelliJ window, click the icon that looks like a clock, and then “Configure…”. Then, edit the JVM parameters to include the line -Dbigquery.cache.directory=</path/to/repository>/.bigquery. Then, restart the compile server by clicking on the clock icon -> Stop, and then Start. How to make IntelliJ IDEA work with type safe BigQuery classes? Due to issue SCL-8834 case classes generated by @BigQueryType.fromTable or @BigQueryType.fromQuery are not recognized in IntelliJ IDEA. There are two workarounds. The first, IDEA plugin solution, is highly recommended. - IDEA Plugin Inside IntelliJ, Preferences -> Plugins -> Browse repositories ... and search Scio. Install the plugin, restart IntelliJ, recompile the project (use SBT or IntelliJ). You have to recompile the project each time you add/edit @BigQueryType macro. Plugin requires Scio >= 0.2.2. Documentation. - Use case class from @BigQueryType.toTable First start Scio REPL and generate case classes from your query or table. import com.spotify.scio.bigquery.types.BigQueryType @BigQueryType.fromQuery("SELECT tornado, month FROM [bigquery-public-data:samples.gsod]") class Tornado Next print Scala code of the generated classes. Tornado.toPrettyString() You can then paste the @BigQueryType.fromQuery code into your pipeline and use it with sc.typedBigQuery. import com.spotify.scio._ import com.spotify.scio.values._ import com.spotify.scio.bigquery._ def main(cmdlineArgs: Array[String]): Unit = { val (sc, args) = ContextAndArgs(cmdlineArgs) val data: SCollection[Tornado] = sc.typedBigQuery[Tornado] // ... } Common issues What does “Cannot prove that T1 <:< T2” mean? Sometimes you get an error message like Cannot prove that T1 <:< T2 when saving an SCollection. This is because some sink methods have an implicit argument like this which means element type T of SCollection[T] must be a sub-type of TableRow in order to save it to BigQuery. You have to map out elements to the required type before saving. def saveAsBigQuery(tableSpec: String)(implicit ev: T <:< TableRow) In the case of saveAsTypedBigQuery you might get an Cannot prove that T <:< com.spotify.scio.bigquery.types.BigQueryType.HasAnnotation. error message. This API requires an SCollection[T] where T is a case class annotated with @BigQueryType.toTable. For example: import com.spotify.scio._ import com.spotify.scio.values._ import com.spotify.scio.bigquery._ import com.spotify.scio.bigquery.types.BigQueryType @BigQueryType.toTable case class Result(user: String, score: Int) def main(cmdlineArgs: Array[String]): Unit = { val (sc, args) = ContextAndArgs(cmdlineArgs) val p: SCollection[(String, Int)] = ??? p.map(kv => Result(kv._1, kv._2)) .saveAsTypedBigQueryTable(Table.Spec(args("output"))) } Scio uses Macro Annotations and Macro Paradise plugin to implement annotations. You need to add Macro Paradise plugin to your scala compiler as described here. How do I fix invalid default BigQuery credentials? If you don’t specify a secret credential file for BigQuery [1], Scio will use your default credentials (via GoogleCredential.getApplicationDefault), which: Returns the Application Default Credentials which are used to identify and authorize the whole application. The following are searched (in order) to find the Application Default Credentials: - Credentials file pointed to by the GOOGLE_APPLICATION_CREDENTIALSenvironment variable - Credentials provided by the Google Cloud SDK - gcloud auth application-default logincommand - Google App Engine built-in credentials - Google Cloud Shell built-in credentials - Google Compute Engine built-in credentials The easiest way to configure it on your local machine is to use the gcloud auth application-default login command. [1] Keep in mind that you can specify your credential file via -Dbigquery.secret. Why are my typed BigQuery case classes not up to date? Case classes generated by @BigQueryType.fromTable or other macros might not update after table schema change. To solve this problem, remove the cached BigQuery metadata by deleting the .bigquery directory in your project root. If you would rather avoid any issues resulting from caching and schema evolution entirely, you can disable caching by setting the system property bigquery.cache.enabled to false. How do I fix “SocketTimeoutException” with BigQuery? BigQuery requests may sometimes timeout, i.e. for complex queries over many tables. exception during macro expansion: [error] java.net.SocketTimeoutException: Read timed out It can be fixed by increasing the timeout settings (default 20s). sbt -Dbigquery.connect_timeout=30000 -Dbigquery.read_timeout=30000 Why do I see names like “main@{NativeMethodAccessorImpl...}” in the UI? Scio traverses JVM stack trace to figure out the proper name of each transform, i.e. flatMap@{UserAnalysis.scala:30} but may get confused if your jobs are under the com.spotify.scio package. Move them to a different package, e.g. com.spotify.analytics to fix the issue. How do I fix “RESOURCE_EXHAUSTED” error? You might see errors like RESOURCE_EXHAUSTED: IO error: No space left on disk in a job. They usually indicate that you have allocated insufficient local disk space to process your job. If you are running your job with default settings, your job is running on 3 workers, each with 250 GB of local disk space. Consider modifying the default settings to increase the number of workers available to your job (via --numWorkers), to increase the default disk size per worker (via --diskSizeGb). Can I use “scala.App” trait instead of “main” method? Your Scio applications should define a main method instead of extending scala.App. Applications extending scala.App due to delayed initialization and closure cleaning may not work properly. How to inspect the content of an SCollection? There is multiple options here: - Use debug() method on an SCollection to print its content as the data flows through the DAG during the execution (after the run or runAndCollect) - Use a debugger and setup break points - make sure to break inside of your functions to stop control at the execution not the pipeline construction time - In Scio-REPL, use runAndCollect() to execute the pipeline and materialize the contents of an SCollection How do I improve side input performance? By default Dataflow workers allocate 100MB (see DataflowWorkerHarnessOptions#getWorkerCacheMb) of memory for caching side inputs, and falls back to disk or network. Therefore jobs with large side inputs may be slow. To override this default, register DataflowWorkerHarnessOptions before parsing command line arguments and then pass --workerCacheMb=N when submitting the job. import com.spotify.scio._ import org.apache.beam.sdk.options.PipelineOptionsFactory import org.apache.beam.runners.dataflow.options.DataflowWorkerHarnessOptions def main(cmdlineArgs: Array[String]): Unit = { PipelineOptionsFactory.register(classOf[DataflowWorkerHarnessOptions]) val (sc, args) = ContextAndArgs(cmdlineArgs) // ... } How do I control concurrency (number of DoFn threads) in Dataflow workers By default Google Cloud Dataflow will use as many threads (concurrent DoFns) per worker as appropriate (precise definition is an implementation detail), in same cases you might want to control this. Use NumberOfWorkerHarnessThreads option from DataflowPipelineDebugOptions. For example to use a single thread per worker on 8 vCPU machine, simply specify 8 vCPU worker machine type, and --numberOfWorkerHarnessThreads=1 in CLI or set corresponding option in DataflowPipelineDebugOptions. How to manually investigate a Cloud Dataflow worker First find the VM of the worker, the easiest place is through the GCE instance groups: gcloud compute ssh --project=<project> --zone=<zone> <VM> To find the id of batch (for batch job) container: docker ps | grep "batch\|streaming" | awk '{print $1}' To get into the harness container: docker exec -it <container-id> /bin/bash To install java jdk tools: apt-get update apt-get install default-jdk -y To find java process: jps To get GC stats: jstat -gcutil <pid> 1000 1000 To get stacktrace: jstack <pid>
https://spotify.github.io/scio/FAQ.html
CC-MAIN-2020-29
en
refinedweb
In an earlier post, I had explained convolution and deconvolution in deep neural networks. The purpose of this post is to demo these operations using PyTorch. Before doing that, we will visit different operations associated with a convolution operation. Convolution Operation Convolution is an operation on two functions of real valued arguments. One of these functions, considered a signal, is an n-dimensional array of numbers, for example a 3-dimensional array of numbers representing a color image. The second function is a kernel or filter of identical dimension but whose size is typically much smaller than the input array size. The array representing the kernel function is called kernel mask. The purpose of the convolution operation is to transform the input into a new array with the aim of highlighting some property of the input array. Thus, convolution can be viewed as feature extraction, and the transformed array is often called feature map where feature implies a particular characteristic of the input extracted by the kernel. The convolution operation is performed by moving the kernel mask over the signal array and calculating the kernel response at each location. To understand the convolution operation, let’s consider a 3-dimensional input array representing the red, green, and blue channels of a colored image patch and a 3×3 convolution filter as shown below. [I am using identical filters for the three input channels for convenience. In practice, each channel has its own mask weights.] To perform convolution at a particular position of the input array, we place the center of the convolution mask at the desired position and perform element by element multiplication between the signal array elements and the convolution mask elements followed by summation for each input channel as shown in the figure below. The responses from three channels are then added to produce the output of the convolution operation. The response over the entire input array is obtained by moving the mask center one step at a time and repeating the calculations. Padding Looking at the above figure, we see that we cannot place the center of kernel mask anywhere in the top or bottom row or in the left or rightmost column; doing so will place part of the mask outside the input array. However, if we were to pad our input array with an additional row at top and bottom, and with an additional column on left and right with all element values being zero, then we can place the convolution mask even at all positions in the top or bottom rows or left or rightmost columns of the input array. Adding extra rows/columns is what is meant by padding in convolution. Without padding, the result of convolution for the above example would be a 6×6 feature map. With padding, the result would be 8×8, same as the input array size. Although the mask used in the example here is a square mask, it is not necessary to have mask height (H) same as mask width (W). It is easy to see that we must add (H-1)/2 rows on top and bottom of the input, and (W-1)/2 columns on each side of the input to maintain the feature map size identical to the input array size. [These numbers for padding assume H and W to be odd integers, which is common.] Stride and Dilation We generally move the kernel mask over the input array to the next pixel. However, we can skip a pixel or two in between when moving the mask. The parameter stride determines how the mask is moved during convolution. A stride of 1 means moving to the next pixel with no skipping of pixels/cells and a stride of 2 means moving by two pixels. A stride value other than the default value of 1 means convolution response will be calculated at fewer positions. This means the size of the resulting feature map will be smaller than the input even with padding. Thus, setting a suitable value for stride allows us to down sample the convolution result. The figure below shows the positions where a 3×3 mask would be placed with the default stride value of 1 (blue cells) and with a stride value of 2 (Cells marked with X), when there is no padding. Clearly, stride of 2 will down sample the input to produce a smaller feature map. Another convolution layer parameter is dilation. This parameter is used to enlarge the mask so that convolution is applied over a larger area. This is different from using a larger kernel mask to start with. The figure below illustrates how a 3×3 mask would be enlarged for dilation of 2. The original 3×3 mask is considered to have a dilation of 1 which means the mask elements are adjacent to each other. The mask on right is the dilated version of the mask on left. As you can see, dilating the convolution mask ignores a certain number of input array elements while computing convolution. The main use of dilation is to produce better quality output in semantic segmentation. Pooling and Rectification A typical convolutional neural network (CNN) is used for classification. In such a network, you will find a large number of convolution layers. Since convolution is a linear operation, we need to insert some nonlinearity between two consecutive convolution layers. Thus, the output of convolution layer is rectified via running it through ReLU (Rectifier Linear Unit). The rectified output of each convolutional layer is followed by a pooling layer whose task is to down sample the convolution result. This is done by replacing a block of convolution layer cells with a single cell. For example, the convolution layer output can be divided into adjacent groups of 2×2 blocks to be replaced by the 2×2 block average. This is called average pooling. When a 2×2 blocks is replaced by the maximum value of the block, the resulting pooling is known as max pooling. Irrespective of the type of pooling used, the basic advantage of pooling is the resulting down sampling which in turn speeds up the computation and minimizes the variance in data moving forward. Visualizing Convolution Output With the above introduction to the different operations involved with a single convolution layer, lets try to put together a demo to show the effect of different parameters on convolution operation. To do the demo, lets get an image that we will use. from PIL import Image import matplotlib.pyplot as plt %matplotlib inline pil_image = Image.open('data/panda.jpg') plt.imshow(pil_image) The image size is 3x235X180. Next, we import the necessary libraries. Since PyTorch accepts tensors, the image read earlier will be converted to a tensor. Furthermore, the input to convolutional layer should be of the size batch_size x Number_of_input_channels x input_height x input_width. Since our batch size is going to be one, we need to add this information to our demo image as well. We are going to use four convolution filters. These will not be learned but set by defining a numpy array. The code for this part including the visualization of the filters is shown below. import torch import torch.nn as nn import torch.nn.functional as F import numpy as np from torchvision import datasets, models, transforms # Transform PIL image to a tensor transform = transforms.ToTensor() img = transform(pil_image) img = img.unsqueeze(0) #Define filters filter_array = np.array([[-1, -0.5,0, 0.5, 1], [-1, -0.5,0, 0.5, 1], [-1, -0.5, 0, 0.5, 1], [-1, -0.5,0, 0.5, 1], [-1, -0.5,0, 0.5, 1]]) filter_1 = filter_array filter_2 = -filter_1 filter_3 = filter_1.T filter_4 = -filter_3 filters = np.array([filter_1, filter_2, filter_3, filter_4]) #Visualize filters fig = plt.figure(figsize=(12, 6)) fig.subplots_adjust(left=0, right=0.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05) for i in range(4): ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[]) ax.imshow(filters[i], cmap='hot') ax.set_title('Filter %s' % str(i+1)) Now, we will set up a two-layer convolution network to perform convolution. The code for this is given below. class DemoNet(nn.Module): def __init__(self, wt1,wt2): super(DemoNet, self).__init__() # We initialize the weights of the convolutional layer as the 4 defined filters self.conv1 = nn.Conv2d(3, 4, kernel_size=5,stride=1,dilation=1, bias=False) # define a pooling layer self.pool1 = nn.MaxPool2d(2, 2) #Define another conv layer self.conv2 = nn.Conv2d(4,4,kernel_size=5, bias =False) self.pool2 = nn.MaxPool2d(2,2) # Set filter weights with torch.no_grad(): self.conv1.wt1 = torch.nn.Parameter(wt1) self.conv2.wt2 = torch.nn.Parameter(wt2) def forward(self, x): # calculates the output of a convolutional layer # pre- and post-activation conv1_x = self.conv1(x) activated1_x = F.relu(conv1_x) # apply pooling pooled1_x = self.pool1(activated1_x) conv2_x = self.conv2(pooled1_x) activated2_x = F.relu(conv2_x) pooled2_x = self.pool2(activated2_x) # returns all layers return conv1_x, activated1_x, pooled1_x, conv2_x,activated2_x,pooled2_x Next, we define a function that will be used to visualize the output of the convolution layer filters. def visualize_layer(layer, n_filters= 4): fig = plt.figure(figsize=(12, 12)) for i in range(n_filters): ax = fig.add_subplot(1, n_filters, i+1) ax.imshow(np.squeeze(layer[0,i].data.numpy())) ax.set_title('Filter %s' % str(i+1)) Now, we are ready to instantiate our network, feed the input image, and compute the output at different layers. We will use the same filter set for both convolution layers. Further, we will use identical kernel weights for every input channel by repeating the weights. wt1 = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor).repeat(1, 3, 1, 1) wt2 = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor).repeat(1, 4, 1, 1) model = DemoNet(wt1,wt2) #Compute output conv1_x, activated1_layer, pooled1_layer, conv2_x,activated2_layer,pooled2_layer = model.forward(img) Lets now visualize the output of the first convolution layer. The first row below shows the outputs of the four filters before rectification. The second row of four images is the output of the first convolution layer after rectification. visualize_layer(conv1_x) visualize_layer(activated1_layer) We should remember that the convolution output (images in the top row) has both positive and negative values while the rectified output (images in the bottom row) has only positive values. This is the reason two row of images look so different. Next, we visualize the second convolution layer in a similar manner. visualize_layer(conv2_x) visualize_layer(activated2_layer) We see that second layer output appears to highlight some image features as eyes as short linear segments. The complexity of such features increase with increasing convolution layers. This is why we need multiple convolution layers for better accuracy. Although all images are displayed at same size, the tick marks on axes indicate that the images at the output of the second layer filters are half of the input image size because of pooling. In this case, we also notice much more variation in the rectified output. To see how changing the stride value from 1 to 2 will change the output, we set the stride to 2 for both layers and run the network again. The first row of the images show the second layer output before rectification and the second row after rectification. With stride of 2, the output at second layer is heavily down sampled and we have a coarser representation of the features. Now, lets see the effect of dilation. With a dilation value of 3, the result at the first layer before and after rectification is shown below. In this case, image features appear prominently compared to output without dilation. 1×1 Convolution A 1×1 convolution is often confusing because its utility is not obvious. A 1×1 convolution applied to a single image will only scale the pixel values by a factor of the 1×1 convolution weight; thus, it is unclear what benefit might be there of such a convolution. Well! to understand what benefit might be there, lets consider m input channels over which 1×1 convolution is to be applied. In this case, the 1×1 convolution operation can be expressed with the following equation where a_k is the scaling factor or weight assigned to the k-th input channel input(k): As this equation indicates, 1×1 convolution performs a weighted aggregation of the input channel values along the depth axis; thus it is often called the depth convolution. This is also illustrated in the figure below. The main usage of 1×1 convolution is in reducing computation or dimensionality reduction by reshaping input before filtering. Suppose at some intermediate stage in your convolution network, you have 64 filtered images or feature maps of size 28×28 pixels. You want to apply 16 different convolution masks of size 3×3 to these 28x28x64 images. This will require 28*28*16*3*3*64 (7225344) operations. Instead of directly applying 16, 3×3 masks on 64 channels of incoming images, we first reshape the incoming images to 28x28x4 via 4, 1×1 convolution filters. This will require 28*28*4*1*1*64 (200704) operations. Next, applying 16, 3×3 filters on the reshaped input will require 28*28*4*3*3*16 (451584) operations. Adding these two sets of operations, we can see that reshaping via 1×1 convolution requires about 90% fewer operations. Lets now perform 1×1 convolution on the output of our demo network. To do this, we add another convolution layer to our network and make necessary changes to the network definition. Instead of providing equal weight across all four channels, I am just using [1.0, 0.5, 0.25, -1.0] as weight values for no particular reason. The result of 1×1 convolution is then the feature map shown below,before and after rectification. Thus, 1×1 convolution allows to reduce the dimensionality while retaining the features of the input images. Deconvolution The use of the term deconvolution in deep learning is different from its meaning in signal and image processing. While convolution without padding results in a smaller sized output, deconvolution increases the output size. With stride values greater than 1, deconvolution can be used as a way of up sampling the data stream. This appears to be its main usage in deep learning. Both the convolution and deconvolution operations in deep learning are actually implemented as matrix multiplication operations and the deconvolution is actually transposed convolution. [Transposed convolution term is gaining more usage in deep learning literature to minimize confusion with actual deconvolution operation.] You can visit my earlier post on this topic where I explain how convolution and deconvolution operations are carried out as matrix multiplications. With the transposed convolution, it is possible to recover the original input with learning. Here I will just show the result of deconvolution operation using a single channel image that will be put through convolution first and then through deconvolution. By selecting a 5×5 kernel of all zeros except the center element of value -1, we will see that the sequence of convolution and deconvolution recovers our original image as shown below. The first image below is the input image, the center image is the output of the convolution, and the last image is the result of deconvolution. ker = torch.tensor([[0,0,0],[0,0,0],[0,-1,0],[0,0,0],[0,0,0]]).type(torch.FloatTensor).repeat(1,1,1) con = nn.Conv2d(1,1,3, stride=1, bias=False) con.weight = torch.nn.Parameter(ker.unsqueeze(0)) con_out = mm(img[0].unsqueeze(0).unsqueeze(0)) decon = nn.ConvTranspose2d(1,1,3, stride=1,bias=False) decon.weight = torch.nn.Parameter(ker.unsqueeze(0)) decon_out= decon(con_out) As we can see from above, there are various parameter choices available in the convolution layer that can be used to control up or down sampling of the data as it moves through numerous layers in a deep convolutional neural network. Before closing this post, I want to tell you that the actual operation in the convolution layer is not really convolution but cross-correlation. However, the term convolution has come to be accepted and used because the convolution masks are not specified before hand, as we did in this example, but are rather learned. Since the difference between convolution and correlation is whether the kernel mask is flipped before applying or not, one can argue that the masks used are flipped versions of the actual learned masks.
https://iksinc.online/2020/03/09/convolution-and-deconvolution-revisited/
CC-MAIN-2020-29
en
refinedweb
First time here? Check out the FAQ! We have a requirement which describes that our webservice namespace (we are the service provider) should be like Is this possible to implement? In Ivy Designer (6.0.4) it looks like there are only points allowed no slashes. Is there a workaround possible asked 06.12.2016 at 09:12 peter_muc 29●11●11●15 accept rate: 0% edited 07.12.2016 at 10:56 Yes, there is a workaround. First, each WS Process is backed by a Java file, which contains the configuration/annotations to provide the WS endpoint methods. This file is located in the folder [project]/src_wsproc/[fully-qualified-name].java. The fully-qualified-name could be defined in the inscription mask of the process. The file is generated by the project builder 'Ivy Web Serivce Process Class Builder', which is defined in the project properties -> builders. [project]/src_wsproc/[fully-qualified-name].java The generaded file is annotated with @javax.jws.WebService. This annotation has the property targetNamespace which allows to define the webserivce namespace, as asked in the question. But per default this property is not set and could not be configuration in the inscription mask. @javax.jws.WebService targetNamespace Because the file gets recreaed when the process changes, in could not be changed directly. Therefore the java file has to be copied to the src-folder of your project and the file in the src_wsproc-folder has to be deleted. The version in the src-folder could be configured/changed as requested. BUT it has to be in line with the configuration of the WS Start Elements. Its now under control of the developer - means, when a WS Start Element configuration changes, the change has to be adapted in the java file too! src src_wsproc Axon.ivy >= 6.5 The java file in the src_wsproc-folder gets not recreated anymore, because the project specific file in the src-folder will be recognized ;-) Axon.ivy < 6.5 The java file in the src_wsproc-folder gets recreatd as soon the process changes. So it has to be deleted by the developer again and again. Or the corrsponding builder is disabled, but then NO WS Process Java files of this project would be generaded any more... Example of the annotation on the corresponding class: @javax.jws.WebService(targetNamespace="") public class MySAPWebService extends ch.ivyteam.ivy.webservice.process.restricted.AbstractWebServiceProcess { ... } answered 07.12.2016 at 16:05 Flavio Sadeghi ♦♦ 1.8k●5●7●23 accept rate: 75% Once you sign in you will be able to subscribe for any updates here Answers Answers and Comments Markdown Basics learn more about Markdown webservice ×47 Asked: 06.12.2016 at 09:12 Seen: 1,186 times Last updated: 07.12.2016 at 16:05 How can I evalute webservice response? The maven build has problem with axis2 How to use MTOM in Ivy web service call How to increase java heap space Web service development in AXON Ivy WebService Process: XmlElement(required=true) does not work How can I integrate with SAP? How can I enable webservice call SOAP request and respons logging on the eninge Mocking SOAP web services Import certificate for HTTPS web service calls
https://answers.axonivy.com/questions/2245/fully-qualified-name-of-webservice-including-sign
CC-MAIN-2020-29
en
refinedweb
Update 2012-01-06: Warning: this post is what you get if you don't know about the function __import__ and try to reinvent it. Imagine I have two different python files that are drop-in replacements for each other, perhaps for some kind of plugin system: $ cat a/c.py def d(): print "hello world a" $ cat b/c.py def d(): print "hello world b"And futher imagine I want to use both of them in the same program. I might write: import c c.d() import c c.d()But that has no chance of working. Python doesn't know where to find 'c'. So I need to tell python where to look for my code by changing sys.path: sys.path.append("a") import c c.d() sys.path.append("b") import c c.d()This will work, ish. It will print "hello world a" twice. Part of this is that the path "a" is sys.path before "b". I really only want this sys.path change to last long enough for my import to work. So maybe I should do: class sys_path_containing(object): def __init__(self, fname): self.fname = fname def __enter__(self): self.old_path = sys.path sys.path = sys.path[:] sys.path.append(self.fname) def __exit__(self, type, value, traceback): sys.path = self.old_path with sys_path_containing("a"): import c c.d() with sys_path_containing("b"): import c c.d()This is closer to working. I define a context manager so that code run with sys_path_containing will see a different sys.path. So my first "import c" will see a sys.path like ["foo", "bar", "a"] and my second import will see ["foo", "bar", "b"]. Each is isolated from the other and from other system changes. Unfortunately, it still won't work, because python remembers what it has imported before and doesn't do it again, this will still only print "hello world a" twice. Switching the second "import c" to a "reload(c)" does fix this problem, but at the expense of you already having to know whether something is loaded. Switching to "del sys.modules['c']" and using __import__ would work, though. Let's make that change and put it all into a context manager that does most of the work for us: class imported(object): def __init__(self, fname): self.fname = os.path.abspath(fname) def __enter__(self): if not os.path.exists(self.fname): raise ImportError("Missing file %s" % self.fname) self.old_path = sys.path sys.path = sys.path[:] file_dir, file_name = os.path.split(self.fname) sys.path.append(file_dir) file_base, file_ext = os.path.splitext(file_name) module = __import__(file_base) del sys.modules[file_base] return module def __exit__(self, type, value, traceback): sys.path = self.old_path with imported("a/c.py") as c: c.d() with imported("b/c.py") as c: c.d()This will print "hello world a" and then "hello world b". Yay!
https://www.jefftk.com/p/importing-python-by-path
CC-MAIN-2020-29
en
refinedweb
This. For those of you wishing to follow along, the data file can be downloaded using the buttons below. So as always, we fist need to specify our imports, after which point we will read in the first csv file: import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline df = pd.read_csv('AppleStore.csv',index_col='id') We then print out the first 5 rows of the newly created DataFrame to verify our import has gone as expected. df.head(5) An often useful method to call at this stage is the “.info()” method as shown below. This shows us whether we have any missing data or “null” values, along with the datatype of each column’s data. df.info() along with the “.describe()” method also – this gets us the “5 figure summary” of each numerical data column, along with the corresponding count and mean. df.describe() So let’s imagine that we wanted to investigate the data from the perspective of how the data differs for each “prime_genre” – that is, for each genre of app. Firstly, I would tend to get an idea of how many unique genres we are dealing with, and a list of our genres categories can be extracted in one of mutiple ways – 2 are shown below. We can either use the build in “.unique()” method as follows: df['prime_genre'].unique() array([‘Games’, ‘Productivity’, ‘Weather’, ‘Shopping’, ‘Reference’, ‘Finance’, ‘Music’, ‘Utilities’, ‘Travel’, ‘Social Networking’, ‘Sports’, ‘Business’, ‘Health & Fitness’, ‘Entertainment’, ‘Photo & Video’, ‘Navigation’, ‘Education’, ‘Lifestyle’, ‘Food & Drink’, ‘News’, ‘Book’, ‘Medical’, ‘Catalogs’], dtype=object) which as you can see, returns an array containing the unique values held in our “prime_genres” column. Another way to extract this information would be to create a “set” of the unique values, and cast that as a list as shown below: list(set(df['prime_genre'])) ['Entertainment', 'Utilities', 'Weather', 'Sports', 'Travel', 'Games', 'Photo & Video', 'Catalogs', 'Reference', 'Navigation', 'Music', 'Book', 'Health & Fitness', 'Shopping', 'Business', 'Lifestyle', 'Finance', 'Education', 'Food & Drink', 'Productivity', 'Social Networking', 'News', 'Medical'] By printing out the length of the list of unique genres, we can see how many categories we are dealing with. Below we use the new “f” string formatting syntax that was released in version 3.6 I believe. count = len(list(set(df['prime_genre']))) print(f'There are {count} unique app categories') There are 23 unique app categories As I was asked explicitly in the Facebook post to show some examples of “filtering” by multiple columns, I shall deal with that now. Now by “filtering”, the example was worded as “Please use pandas library to apply filtered result on two or three column at once. Like if we want to find how many male M from city_category A have purchase more than 7000″…now obviously that comment was referring to a different data set but we can adapt it to our use. Let’s give ourselves the challenge of identifying how many apps of the “Game” genre which scored an average user rating of exactly 4.0 were rated by more than 20,000 people (or at least rated more than 20,000 times). The below line of code gets us the subset of the DataFrame where all 3 of our conditions are met. df[(df['prime_genre'] == "Games") & (df['user_rating'] == 4.0) & (df['rating_count_tot'] > 20000)] We can then just wrap it in a “len” call to find out how many rows it contains and therefore how many apps meet our criteria – in our case 49 apps. len(df[(df['prime_genre'] == "Games") & (df['user_rating'] == 4.0) & (df['rating_count_tot'] > 20000)]) 49 Now imagine we want to find out the average rating, not for each individual app, but rather for each individual genre of app. We can use a simple “groupby”, passing the name of the column by which we wish to group as our first argument, and also setting an “aggregation function” – in our case it will be “np.mean” as we want the average value. Lastly, once we have run the groupby method, we then just extract the column we are interested in by using the standard bracket notation at the end to select that column. genre_rating = df.groupby('prime_genre').agg(np.mean)['user_rating'] If we now display the contents of the “genre_rating” variable we have the following: prime_genre Book 2.477679 Business 3.745614 Catalogs 2.100000 Education 3.376380 Entertainment 3.246729 Finance 2.432692 Food & Drink 3.182540 Games 3.685008 Health & Fitness 3.700000 Lifestyle 2.805556 Medical 3.369565 Music 3.978261 Navigation 2.684783 News 2.980000 Photo & Video 3.800860 Productivity 4.005618 Reference 3.453125 Shopping 3.540984 Social Networking 2.985030 Sports 2.982456 Travel 3.376543 Utilities 3.278226 Weather 3.597222 Name: user_rating, dtype: float64 Its really very simple to then go and plot a simple bar chart using matplotlib – it can be done in its simplest form as a 1 liner… plt.bar(x=genre_rating.index, height=genre_rating) plt.show() OK so technically it does the job and we have our app genre average rating bar chart. But its pretty darn ugly, and the x-axis labels are barely distinguishable as they are all mashed up together with lack of space. Let’s try to clean it up and make it look a little nicer. Firstly, we could increase the size of the overall figure, as its currently rather tightly packed. # set size of overall figure plt.figure(figsize=(20,10)) plt.bar(x=genre_rating.index, height=genre_rating) plt.show() That looks a bit better – but the x-axis labels are still bleeding into one another, making then hard to read. Let’s fix that. One simple way to do it, rather than try to fiddle around with sizing and placement of the x-axis labels, we could just convert our plot to a horizontal bar chart instead – the number of categories certainly justifies this approach. To convert our barchart to a horizontal bar chart,our arguments will change – now instead of passing in an “x” and a “heigh”, we now specify a “y” and a “width”. In effect our “x” becomes our “y” and our “height” becomes our “width”. We can see the result below. I have also sorted the data so that it appears more neatly in the plot. genre_rating = genre_rating.sort_values() # set size of overall figure plt.figure(figsize=(20,10)) plt.barh(y=genre_rating.index, width=genre_rating) plt.show() It’s starting to look a bit better but let’s give it a title and labels the x and y axis. #_6<< Now we could start to apply some of the built-in style sheets that comes packaged with matplotlib – just before we do that though, let’s store the current settings in a variable so we are able to revert back to the default as and when necessary. plt_default = plt.rcParams.copy() Now we can see which styles are available to use simply by using: print(plt.style.available) [‘bmh’, ‘classic’, ‘dark_background’, ‘fast’, ‘fivethirtyeight’, ‘ggplot’, ‘grayscale’, ‘seaborn-bright’, ‘seaborn-colorblind’, ‘seaborn-dark-palette’, ‘seaborn-dark’, ‘seaborn-darkgrid’, ‘seaborn-deep’, ‘seaborn-muted’, ‘seaborn-notebook’, ‘seaborn-paper’, ‘seaborn-pastel’, ‘seaborn-poster’, ‘seaborn-talk’, ‘seaborn-ticks’, ‘seaborn-white’, ‘seaborn-whitegrid’, ‘seaborn’, ‘Solarize_Light2’, ‘tableau-colorblind10’, ‘_classic_test’ To actually use one of the styles, we can use the following syntax: plt.style.use('ggplot') Now if we run our code to plot our bar chart again we see the output has changed somewhat dramatically! The background colour has changed to a light grey, a white grid has been added and the fonts have changed for both the title and the axis labels. That’s not bad going for running just one line of code to change the style. #_7<< If we prefer to use a different style, we have to complete 2 steps: firstly revert the setting to the default and then just run the original line of code again, this time with our preferred style name inserted instead… # reset styles to default plt.rcParams.update(plt_default) # set new style plt.style.use('seaborn-deep') #_8<< or to choose something quite different… # reset styles to default plt.rcParams.update(plt_default) # set new style plt.style.use('dark_background') #_9<< We can even style the plot ourselves if we aren’t satisfied with the built-in styles. Of course it takes longer to do this way, but matplotlib really does afford you control over every single little minutiae that you could hope to style. Let’s look at just the bottom 10 genres by average rating, and begin a new plot with custom styling this time – see what we can come up with. So firstly we extract just the bottom 10 genres. We could do this with a simple slice approach due to the fact we have already sorted our data by size, however to be more explicit I shall use the “nsmallest()” method. It does what it says on the tin and selects the “n” smallest data points, with “n” being the number passed to the method as an argument. So in our case, to extract the bottom 10 we will use: bottom_10 = genre_rating.nsmallest(10).sort_values() Now onto our custom plot! We will be referencing and updating some of the values held in the “rcParams” object – the “rcParams” is a dictionary type object which holds the global style settings for matplotlib. We are able to reference and change these values in order to change default plotting behaviour in so far as the colours, styles, shapes, fonts etc etc that are used. Note that we already have the orignal default settings saved in the “plt_default” variable to help us resest them if things start to go awry! To give an example, to set a new global default figure size, we would run the following: # reset styles to default plt.rcParams.update(plt_default) plt.rcParams["figure.figsize"] = (7, 5) Now if we run a plot again, this time without specifying a figure size: # set chart title plt.title('Average app User Rating by Genre') # set x-axis label plt.xlabel('User Rating') # set y-axis label plt.ylabel('Genre') plt.barh(y=bottom_10.index, width=bottom_10) plt.show() We can see from the above plot that the figure size has correctly conformed to the new global default of “(7, 5)” without having to specify it again in the specific plot code. So let’s move on to some more settings that we can update: # Update the default font used plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.sans-serif'] = 'Helvetica' # set the style of the axes and the text color plt.rcParams['axes.edgecolor']='#002147' plt.rcParams['axes.linewidth']=0.8 plt.rcParams['xtick.color']='#002147' plt.rcParams['ytick.color']='#002147' plt.rcParams['text.color']='#002147' fig, ax = plt.subplots() # create an horizontal line that starts at x = 0 with the length # represented by the specific user_rating value for that genre. plt.hlines(y=bottom_10.index, xmin=0, xmax=bottom_10, color='#007ACC', alpha=0.2, linewidth=8) # create for each expense type a dot at the level of the expense percentage value plt.plot(bottom_10, bottom_10.index, "o", markersize=8, color='#007ACC', alpha=0.6) # set labels ax.set_xlabel('User Rating', fontsize=20, fontweight='black', color = '#002147') ax.set_ylabel('') # set axis ax.tick_params(axis='both', which='major', labelsize=16) # add an horizonal label for the y axis fig.text(-0.10, 0.96, 'Transaction Type', fontsize=20, fontweight='black', color = '#002147') # # change the style of the axis spines ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') ax.spines['left'].set_smart_bounds(True) ax.spines['bottom'].set_smart_bounds(True) So that has ended up looking significantly different from what we would expect from using one of the base stylesheets. It’s a little long winded and I am not sure you would want to write out that much additional code for each and every figure you wanted to plot…in actual fact it’s not THAT hard to create your own style sheet and use it just as you would any of the already included base style sheets (more on that in another post one feels!). So now say we wanted to visualise the difference in each genres average score from the overall average score across all genres – how would we go about doing that? The first thing we need to do of course is calculate the data we want to plot so it’s back to Pandas for a monent. Luckily its just a super simple one liner to subtract the mean user rating value across all genres from each individual genre rating value – we are then left with the difference. We will cast the series into a DataFrame this time to allow us to append new columns on in a second. genre_rating_diff = (genre_rating - genre_rating.mean()).to_frame() We then create a new column to hold the colour name string depending on whether the value is positive (‘darkgreen’) or negative (‘red’). genre_rating_diff['colours'] = np.where(genre_rating_diff['user_rating'] < 0, 'red', 'darkgreen') genre_rating_diff.sort_values('user_rating',inplace=True) Below we code the actual plot – it involves the zipping up of and iterating through of, the user_rating data to create the data point labels and so forth. The borders of the plot are then lightened and a title and x-label are added. # Draw plot plt.figure(figsize=(10,10), dpi= 80) plt.scatter(genre_rating_diff['user_rating'], genre_rating_diff.index, s=350, alpha=.6, color=genre_rating_diff['colours']) for x, y, tex in zip(genre_rating_diff['user_rating'], genre_rating_diff.index, genre_rating_diff['user_rating']): t = plt.text(x, y, round(tex, 1), horizontalalignment='center', verticalalignment='center', fontdict={'color':'white'}) # Decorations # Lighten borders plt.gca().spines["top"].set_alpha(.3) plt.gca().spines["bottom"].set_alpha(.3) plt.gca().spines["right"].set_alpha(.3) plt.gca().spines["left"].set_alpha(.3) plt.yticks(genre_rating_diff.index, genre_rating_diff.index) plt.title('Diverging Dotplot of Genre User Rating', fontdict={'size':20}) plt.xlabel('User Rating') plt.grid(linestyle='--', alpha=0.5) plt.xlim(-1.5, 1.5) plt.show() Finally for this post (which I admit has turned out to be rather random in nature) I thought I would involve the use of Seaborn to illustrate how the package can be seamlesly combined with matplotlib to produce some rather pretty charts. We begin by importing the Seaborn module, followed by defining two “kdeplot”s to visualise the distribution of both “user_rating” and “user_rating_ver” which I believe represents the overall user ratings the app has received vs the user ratings the app has received for the latest version number/release. import seaborn as sns # Draw Plot plt.figure(figsize=(16,10), dpi= 80) sns.kdeplot(df['user_rating'], shade=True, color="g", label="User Rating", alpha=.7) sns.kdeplot(df['user_rating_ver'], shade=True, color="deeppink", label="User Rating Version", alpha=.7) # Decoration plt.title('Density Plot of User Ratings vs Latest Version Specific User Ratings', fontsize=22) plt.xlabel('User Rating') plt.legend() plt.show() I hope this post has gone some small way to illustrating how flexible matplotlib can be, even if it does take a little bit of effort to climb its (relatively steep) learning curve. I’ll leave it here for now as it’s turning into a bit of a mish mash!! Until next time… Thank you sir. It helped me very much for understanding My missing skills. Hi Anand – you’re very welcome, I hope it was at least somewhat useful!! If there are any other topics or areas you think I should cover then feel free to comment and make suggestions. 😉 Worth reading.Informative
https://www.pythonforfinance.net/2019/04/30/data-analysis-with-pandas-and-customised-visuals-with-matplotlib/
CC-MAIN-2020-29
en
refinedweb
mbrtowc (3) - Linux Man Pages mbrtowc: convert a multibyte sequence to a wide character NAME mbrtowc - convert a multibyte sequence to a wide character SYNOPSIS #include <wchar.h> size_t mbrtowc(wchar_t *pwc, const char *s, size_t n, mbstate_t *ps); DESCRIPTIONThe NULL,)); RETURN VALUEThe. ATTRIBUTESFor an explanation of the terms used in this section, see attributes(7). CONFORMING TOPOSIX.1-2001, POSIX.1-2008, C99. NOTESThe behavior of mbrtowc() depends on the LC_CTYPE category of the current locale. SEE ALSOmbsinit(3), mbsrtowcs(3) COLOPHONThis page is part of release 5.05 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux man pages generated by: SysTutorials
https://www.systutorials.com/docs/linux/man/3-mbrtowc/
CC-MAIN-2020-29
en
refinedweb
This page describes how to use the in-game editor overlay with its command line interface and documents many internal game variables (cvars). Index Editor Overlay Access the editor overlay either by going straight to Sandbox Mode from the Main Menu, or within gameplay by pressing “\”. The editor overlay is primarily a command driven, but has a number of direct keybindings which are described in this section. Mouse over the console window to enter commands. The overlay is always in either FLY, COMMAND, or EDIT mode. Read this Introduction to Editor Mode first. Keybindings - ‘1’ to switch to FLY mode (takes control of selected ship). Same controls as in-game flying. - ‘2’ to switch to COMMAND mode, which allows selecting and manipulating ships. Within command mode, left click to select ships and right click to set ship destinations. This is the default mode. - ‘3’ to switch to EDIT mode, which allows selecting and manipulating individual blocks. Same controls as the in-game ship editor. - WASD pans the camera in command and edit mode - Double click to move the deploy location, which appears as a blue circle overlaid with an X. Commands which spawn things will spawn at this location. - ‘p’ to freeze the game simulation - ‘o’ to single step the game simulation - ‘v’ hides the overlay (toggle) - ‘/’ repeats the last command - ESC, ‘~’, and ‘\’ exit command mode - Ctrl-s runs the “level_save” command - ‘[‘ and ‘]’ adjust the size of the console - ‘{‘ and ‘}’ adjust the time factor (speed up, slow down the game simulation) Selected ships have green boxes around them. Certain commands (export, ssave) only work on the primary selection, which additionally has a blue box around it. Standard and primary selection work in all modes – in fly mode the ship under control is considered selected. Sectors are delimited by green lines. Sandbox mode has a single large sector. In the main game, 9 sectors are loaded at a time. Commands which manipulate sectors (gen, region, leve_save, etc.) work on the sector in the middle of the screen. Cvars reference The CVARS are internal variables that control many aspects of Reassembly. Cvars can be modified immediately within the console via the “cvar” command, or by modifying the cvars.txt in the save directory. cvars.txt is loaded at game startup. This guide omits – for convenience – the ‘k’ which appears in front of all CVARS elements: eg. kBigTimeStep, kDamageReproduceCooldown, etc. Time Step: BigTimeStep, TimeStep and SuperTimeStep The timestep commands determine how often certain aspects of the AI runs. The time unit is in seconds and the smaller you make these amounts, the more taxing it will be on the processor. DamageDefendTime This represents the amount of time it takes for a damaged enemy to cease pursuing you. DamageReproduceCooldown This represents the amount of time it will take for an enemy to reproduce a child ship after being damaged. BlockBigTimeStep & BlockSuperTimeStep Controls how often the AI makes a decision with regards to blocks. This does not include Launcher or Tractor Beam blocks. BlockSuperTimeStep checks on Seed blocks only. Fleets: FleetComposeFuzzyP Scale at which to randomize between ships with similar P slightly when composing fleets. Sandbox ConsoleLines Sets how many lines are in the Console when loaded. SandboxSize Sets the radial size of the sandbox. GarbageCollectEnable Removes dead ships, and blocks that leave the sandbox bounding area. Reproduce: AIParentReproduceCooldown After a ship is created, this variable tells the AI how long to wait until the reproduce ability is enabled. AIParentPFleetRatio This variable determines how large of a fleet AI spawner ships will attempt to maintain. All excess ships spawned will be released as rogue ships, which will not be considered children of the fleet. The ratio works within this formula: Mothership P * kAIParentPFleetRatio = Fleet size Pathfinding: AIPathMaxQueries The AI Pathfinder will give up if a path is more than X number of turns and straight lines. Making this variable higher equates to more CPU calculations. AIPathTimeout This variable represents the time, in seconds, that a path will be considered as valid. It’s an upper bound, meaning that the AI will attempt to repath every X seconds. AI Targeting: SensorRangeMultiplier Allow globally increasing ship sensor range (set it to e.g. 2, 3). AITargetMin & AITargetThreshold This combination of variables determines what ships the AI targets. Both of these variables refer to a ship’s deadliness. The AI will not target anything beneath the AITargetMin unless the AITargetThreshold * Ship’s P is less than AITargetMin AutoTargetRadius This variable controls the units of distance around the cursor that a ship will be targeted. Decreasing this requires more accuracy from the player. AutofireSpreadWeight The larger this variable is, the more likely your ship’s turrets will target multiple targets over smarter choices. BadAimErrorAngle Angle, in radians, that controls the BAD AIM flag. Agents: The term “Agents” refers to a ship that is user-created, that will appear within your game as you play. AgentCount This represents the number of Agents that spawn when a new game is created. AgentMaxDeadly & AgentMaxShips & AgentMinDeadly These three variables control the size and deadliness of an agent fleet. When spawning an Agent fleet, the game will continue to produce ships until either the Max Ships or Max Deadly variable is reached. AgentMinSpawnDist During the initial map generation, an Agent fleet will not be placed any closer to the player’s ship than this variable. Because Agents tend to be more dangerous than the most of the game’s starter ships AgentSpeed Controls Agent movement speed in the universe. AgentDirectory Sets the directory from which to pull custom agents. Effects: BeamGlow & BeamHalo These two variables affect how lasers look. Both of these control units of distance (width of the beam), and each of these control different parts of the beam. The bigger the number, the bigger the laser visibility. BloomBlocks Controls the bloom effect for blocks BloomBlurRadius This is the size, in pixels, if the blur effect. BloomBrightness This variable controls the bloom brightness. BloomIntensity This variable (between 0 and 1) controls how much bloom will be on. BloomRadius This is the size, in pixels, of the bloom effect. BloomResFactor This controls the bloom associated with weaponry blocks. BloomScale This variable controls how many pixels should be used for the bloom effect. BloomTonemap Turns on/off tonemapping. BlurFactor & BlurMenuRadius These variables control the size of the blur used for background and menu. BlurMinDepth If objects in the z dimension meet or exceed this value, the blur effect will turn on. DisruptIndicatorAlpha & DisruptIndicatorSize Control the size of the effect that appears on the screen edges when the player ship is hit. DopplerFactor Controls the SFX Doppler effect. DiscoverDistance Controls the size of the dotted line around objective objects. Allows the player to activate objectives more easily. kParticleExplosionColor0 Allows you to set the colour of explosions. kParticleSmokeTime Allows you to set howlong smoke particles last. ( When blocks are damaged. ) kParticleFireColor0 Allows you to set the colour of fire. ( When blocks are damaged. ) Blocks: HealMaxDist Controls the maximum number of blocks through which a healing laser can heal from point of contact with entity. BlockElasticity & BlockFriction These two variables control the physics of blocks: namely the elasticity and friction as the names suggest. BlockExplodeChance This variable controls the probability that a block will create an explosion (upon its destruction), causing further damage. It operates on a “1 in X” chance-based system, meaning that the higher the number, the lower the chances of an explosion occurring. Setting this to 1 will pretty much cause chaos. BlockImpulseDamage & BlockSolveDamage These two variables control the impact damage for blocks. Impulse uses mass and velocity to calculate the damage to a block while Solve represents the minimum amount of impact damage that will be given. BlockOverlap This variable grants a certain amount of overlap to blocks during construction. Setting to 0 may break the game. CommandHaloDeadliness & CommandHaloSize These two variables control the light that emits from the Command block. Halo Deadliness controls the size as proportional to P value while Halo Size controls the size as proportional to the block size. Memory: BlockMemoryPoolSize This variable controls how much memory is allocated for block storage. The default, 400kb, should be more than sufficient. MemPoolMaxChain This controls the total number of memory pools. The default is more than the vast majority of players will need. Camera: CameraAutoZoom & CameraEnablePan Enable and disable the automatic camera movement. CameraPanTime This controls the auto pan feature. CameraWheelZSpeed Controls the mouse wheel sensitivity of the camera’s zoom movements. Taking Images: CleanBackground Turning this on allows for a black background. Useful for taking screenshots. ClusterImageSize This variable controls the size of images for DumpShipImages. DumpShipImages When turned on, the game will take and store a picture of every ship upon start. Resource Collection: CollectAnimationTime & CollectBlinkTime These control how long it takes the resource animation to complete. DeferredCollectInterval & DeferredCollectRadius Controls how often the resources are collected and how the grouping of collections works. Construction Editor ConstructorBlockLimit This number controls how many blocks can be placed in the editor. ConstructorViewBounds The view bounds control the minimum and maximum view sizes for the ship editor. Fonts DefaultFontFile, FallbackFontFile, MonoFontFile, SymbolFontFile & TitleFontFile Each of these allow you to specify a specific and game-recognizable font for the specified purpose. Utility kWriteJSON Writes blocks.json to the Reassembly data directory. WriteBlocks Writes blocks.lua to the Reassembly data directory. ModExportFactionStart Controls mod export faction ID start ModExportBlockStart Controls mod export block ID start PortRenderNormals Help for modders to debug port normals. Command Reference Commands help [COMMAND]: Lists documentation for commands. Usage Example > help add NOTE: This would list the two commands ADD and ADDBLOCKS, along with their variables and their proper syntax. find [SEARCH STRING]: List commands that match the search string you type. Also searches the command help string. sopen <NAME>: Open a block cluster by name. ssave [NAME] Save the selected block cluster using a name of your choosing. palette <FACTION>: Palette draws all unique blocks used to build a given faction. The primary difference between PALETTE and MINPALETTE is that PALETTE will also draw variants of the sized blocks. minpalette <FACTION>: Minpalette draws the minimum number of blocks for a given faction. The primary difference between PALETTE and MINPALETTE is that PALETTE will also draw variants of the sized blocks. fleetpalette <FACTION>: Type the command along with the faction number to see a row of deactivated ships, summing up all ships in your chosen faction. Protip: you can type “fleetp” as a shortcut. activate: This command turns on the AI for the selected ship. This requires a ship be present and selected to work and effectively makes it start moving, shooting, and reacting to other objects. deactivate: The opposite of the ACTIVATE command, this command turns off the AI for the selected ship. This requires a ship be present and selected to work and effectively makes it stop moving, shooting, and reacting to other objects. block Spawn a block at the cursor. Blocks are parsed in the same format as in blocks.lua. You can also specify just a block ID to spawn it directly. Usage Examble block 3 block { features=thruster, shape=octagon } command Modify command/ai fields. See the definition of “Command” and “ECommandFlags” appendix on the Docs page. Usage Example > command {flags=ATTACK, sensorRadius=9999, resources=9999} Modding Tools reload [VARIABLES] Reload various modding data files, including block definitions, cvars, and shaders.lua refaction <FACTION>: REFACTION takes a ship from one faction and reinterprets it, using the blocks of another faction. Will try its best to make the change, but will not do it if comparable blocks are not available. The command will attempt to suggest a compatible faction if your selected faction will not work. Usage Example > refaction 12 recolor <COLOR0> <COLOR1>: Recolor whatever object is selected to the two hand-picked colors you provide. Please use the following format for each color: 0xRRBBGG, the same as used in HTML. Usage Example > recolor 0x010199 0x990101 Sector/World fleet <FACTION> [P TOTAL] [COUNT]: At the cursor position, spawn a faction of ships meeting your set criteria of Power count and Number. The first variable tells which faction (by number of faction), the second variable is the P total (the approximate amount of P you’d like to see used to generate your fleet), and the number of ships you’d like to see appear. Note that your P is spread out among all the ship count, so a large ship count will build smaller ships against your P total. Usage Example > fleet 8 200000 30 ship <SHIP NAME>: Summon a ship by name. Use the TAB button to see what ships are available. add <SHIP REGEX> [COUNT]: Spawn and scatter ship objects throughout the sector. Requires a ship name, defaults to a number of 1. Usage Example > add 8_supercorvette 5 asteroid <SIZE> <SIDES 1> [SIDES 2] [SIDES 3] Generate asteroids using up to 3 different shape types. First determine the size (how many blocks the asteroid will be comprised of). The next three variables all determine the shapes of the blocks that make up the asteroid. The variables mean the number of sides. Usage Example > asteroid 50 3 5 8 tiling <SIZE> <TYPE> Generate a tiling asteroid as large as you’d like. Something to drive around, or through, depending on the mining abilities of your spaceship. The SIZE variable is the number of blocks. The TYPE variable represents programmed tile sets assigned numbers 0 through 6. Usage Example > tiling 100 3 plant <TYPE> [SUBTYPE] [SIZE] Nearly identical to the APLANT command with the following differences: you’re only creating a single plant and that plant doesn’t require a surface, it spawns at the cursor position. Usage Example > plant 2 3 50 aplant <TYPE> [SUBTYPE] [SIZE] [COUNT] Spawn a random assortment of plants using the following variables. Note that this command attempts to place plants given available surfaces in the active sector, but is not always successful due to surface area availability and the shape of the plant generated. Use of APLANT may require multiple attempts before a success is noticed. Note also that types 1, 2 and 3 correspond to green, blue and pink, respectively, in increasing order of resource generation. TYPE 1 (blue), 2 (pink), or 3 (green). These colors only distinguish how often resources are produced, from least to most frequent. By using higher-valued numbers, the variable defaults to a value of 1. SUBTYPE This represents the color of the flowering elements. SIZE This represents how many blocks the plant will have. Larger plants have a lower probability of successful generation. COUNT Number of plants to place in the play area. A higher count request will merely make more attempts to place a plant, and will not place copies of a specific plant. Usage Example > aplant 2 3 65 100 fill <FLAGS> <FILLPERC> <SIZE>: Fill your sector with a customized asteroid field. The first variable are the flags. Use the TAB button to see what asteroid types are available. The second variable is the sector fill percentage. The third variable is the size of each asteroid in number of blocks. Usage Example > fill EXPLOSIVE 20 10 penrose [BLOCK SCALE] [ITERATIONS]: Generate a nifty penrose asteroid. The penrose is, for all intents and purposes, circular in shape, but is comprised of a set of tiles placed in the very nifty Penrose tiling sequence. Note that both values should be kept to 9 or less. Usage Example > penrose 4 5 region <FACTION>: This command uses the world generation system to fill the current sector with ships and asteroids for the supplied faction based on the region specifications in regions.lua. target [SIZE] [HEALTH] [SHIELD]: Generate a test dummy to attack and taunt with all your muster. The HEALTH variable gives health PER BLOCK, so be careful not to generate a target dummy that will overpower your weak little ships. The SHIELD variable merely turns on (1) or off (0). Usage Example > target 100 10000 1 Level File Manipulation level_save [NUMBER]: Save your current level as a numbered LUA file. Defaults to the currently loaded level file, which in turn defaults to 0. revert [LEVEL]: Revert back to previous save or load a desired save state by specifying the level number. gen [LEVEL]: Use the GEN command to randomly generate a level (similar to going to a wormhole) or callout a saved file to load it. clear Deletes everything in the sector. Export export <NAME> [AUTHOR]: Export your selected ship with this command. Type the ship’s name as well as your name (or penname) to generate a LUA file with the ship’s information. The command line will display where you can go to retrieve your ship. Usage Example > export Bootsy Fernando import [PATH] Use the IMPORT command in conjunction with a ship file path (LUA file), to pop a ship into your level. IMPORT can also load fleet files (usually .lua.gz) as downloaded from the wormhole feed:. agent Spawn a complete agent fleet. The word “agent” refers to a complete fleet of ships designed by one player – parallel to the player’s own fleet. The command requires no further specifications. Use this command several times consecutively for an instant AI battle. upload This command will upload your fleet to the Reassembly server. They’ll fly out into the ether and begin destroying other Reassembly players, who themselves are completely unawares of what fate awaits them. This is similar to the functionality used when entering a wormhole. constructor [FACTION NAME] Open the ship constructor with the faction of your choice. Immediately begin editing ships in that faction. Usage Example > constructor 7 Utility options [NAME=VAL] Control the game’s options from the command line. Not nearly as easy as just going to the Options menu, but some folks like a challenge. Usage Example > options musicVolume=90 debug [TYPE] Use the debug command in conjunction with one or more of the available types to toggle the debug information on that type. Use the TAB button after typing “debug” to see available options. AI – toggle AI debugging overlay PROFILE|STUTTER – toggle performance graphs Usage Example > debug ai zonefeatures [TYPE] Toggle global flags for the current gameplay simulation zone. Available flags: - RESOURCES – zone has resource packets. Block regeneration rate is also reduced with resources disabled. - SPACERS – a physics optimization that replaces the collision shape for each block with a convex hull under certain conditions. See also “debug SPACERS”, - BACKGROUND – enable background stars and halo graphics. Will draw a grid when background graphics are disabled. - PARTICLES – enable particle effects (thruster trails, explosions, etc.). - UNDO – enable snapshoting of rectangles of zone state before editing operations to support undo (used in sandbox by default). - UNDO_ALL – enable snapshoting of entire zone on editing operations (used in ship constructor by default). - NODEBRIS – destroy all detached blocks when a block cluster splits instead of just disconnecting them. Usage Example > zonefeatures SPACERS screenshot Take a screenshot. Also you can use F2. Scripting and Utility exit The EXIT function is similar to the \ key in that it quits the command line. quit Quit to desktop. Do not pass go. Do not collect $200. freeze Very similar to simply pressing the “P” button on the keyboard, FREEZE stops the game in its tracks. cursor <X> <Y>: Set an absolute position for the cursor using x/y coordinates. Usage Example > cursor 100 250 rcursor <X> <Y>: Short for, relative cursor, it’s related to the CURSOR command. Move the cursor a precise amount of steps to a new location. Use the mouse and double-click feature if you like to live dangerously! repeat <TIMES> <COMMAND>: Repeat a single command multiple times. It could be useful… Usage Example > repeat 3 aplant 2 2 50 10 view [ENABLE] Turning this on (set to 1) will make the console disappear. Pressing any other key will reenable the console. The quickkey command of “v” will also open/close the console. God Powers explode [RADIUS] [DAMAGE] If your sector is looking a little too smug for its own good, the explode command is an excellent solution. If in the Sandbox, set your cursor position, else just define a radius and amount of damage to see some chaos in action. Usage Example > explode 1000 1000 god Use the GOD command to make your ship (or the selected ship) invincible. noclip Turn off collision detection for a selected ship. It’s like being a ghost ship!!!! Oooooooooooooooooh resource [QUANTITY] Adds a specified number of resources to the cursor position. wormhole Use the WORMHOLE command to slap a big ol’ swirling sucky thing on the screen… then fly your butt into it. reveal Sick of exploring? This nifty command shows where everything is on the map! Cheating is fun! conquer Unlocks all sectors in favor of your faction, but does not destroy all opposing ships. Essentially, this action redraws the map, hiding the possessors of all sectors from you. Fonts/Text write <CHARS> [NAME]: Always wanted to write terrible poetry in a font of purple triangular shapes with blue plants growing out of each letter? Well, now you can. Simply create your fancy font using the SFONT command and then type out your poorly selected words using atrocious spelling and grammar using the WRITE command. You’re welcome. Usage Example > write "It was a dark and stormy night" myfont sfont <CHARS> [NAME]: Create and save your own font using Reassembly blocks. It’s actually kinda nifty. Select an object to save it as a font. Usage Example > sfont a myfont wfile <filename> [FONT_NAME]: Always wanted to read The Great Gatsby written in a font of purple triangular shapes with blue plants growing out of each letter? Well, now you can. Simply create your fancy font using the SFONT command and then point the console in the direction of your great American novel using the WFILE command. Usage Example > wfile c:/mahbooks/bucketlist/tehgats.txt myfont Tournament pool Runs each ship against all other selected ships. bracket Start a single-elimination tournament bracket with all selected ships. There are some Cvars not shown here, primarily the physics controls and button options. Things like KSpeedOfSound, KhasDemoButton, KHeadlessMode and KSpinnerRate for example. Is there another full documentation or wiki i can refer to to sate my curiosity? The cvars that are not documented are not very interesting if you aren’t developing Reassembly :-D. It is possible to use a command to unlock all factions, including modded ones? Or do you have to go into configs to do that? You have to edit configs but it’s pretty simple. Just add the faction numbers to “unlock.lua” in the save directory. It should look something like {factions={2, 3, 4, 11, 12, 15}} Please expand on the functionality of each zoneFeature, thanks. Also, how would one go about spawning the gold farmer plants surrounding the asteroids spawned with asteroid/fill/region? For example I’m making a mod that uses plants and I’d like to test their resources in sandbox first. I added a description of each flag to the description for the zonefeature command. For testing plants, you can use the “add” command to a lot of a plant design quickly, using the same logic as the world generator. For example, “add 5_crop 100” will try to add 100 of the crop plants, placing them randomly. You can also use the “region” command to actually read your regions.lua file and fill the sector based on the description, which is ideal for testing a mod. how do you generate a level it says gen level 10 failed and so on for a few different numbers why wont it work? Try the “region” command – e.g. “region 8”. You can also generate custom levels by combining “fill”, “aplant”, “fleet”, and similar commands. How do I select another ship in console mode? Whatever, Got it. See the (newly added) Editor Overlay section. When I save my recolored ships the new colors aren’t saved. How do I save my new colors? How are you saving the ship, and which command are you using to recolor? I think he’s using ssave instead of export
http://www.anisopteragames.com/sandbox-console-docs/
CC-MAIN-2020-29
en
refinedweb
Some time back, I had a fair amount lot of free time and did watch things on you tube. A friend of mine showed a number of cool LED cubes that were trending at that time. It seemed so cool and considering our combined backgrounds a natural that we should try and create our own control board for our own LED cube. He was the hardware engineer so I left him to that task while I thought about patterns and programming the cube. Yet, despite not being the hardware type I thought I could create my own, albeit less sophisticated cube, with only a small bit of effort. Come on just how hard could it really be? Bring in the pie’s Part of the reason I thought this would be so simple is that I was also playing around with the “Raspberry Pi”. The specifications for the pi were not all that impressive when compared to my laptop but it was a truly amazingly affordable and small computer that also offered a comfortable Linux distribution for running it. The Raspberry Pi that I was using is basically just a small computer with the power of a smart phone from 6 years ago. - ARM11 700 mhz - SoC is a Broadcom BCM2835 - 512MB RAM - 2 USB ports - Ethernet port - 3.5mm jack for audio out - HDMI The operating system is a derivative of Debian. Communicating with the Raspberry Pi can be done just like any other computer over the Ethernet or perhaps by using a WiFi dongle. Communication between the Raspberry Pi and other external peripherals is done via the general purpose I/O pins. The GPIO is the break out of the functionality in the Broadcom BCM2835 system on a chip which in additional to other functionality also includes the support for I2C and SPI as well as UHF transmission. I2C and SPI are two different but common communication bus protocols for communication between micro-controllers and other components or peripherals. The protocol allows a component to send commands to a receiving component. This is a really convenient way to delegate tasks to other devices, but inter-device communication is not the only advantage of the GPIO. The other advantage of the GPIO pins is that it is possible to use it to turn on or off something connected to one of these pins. This can be used to enable or disable other devices or even simple things like turning on a motor or a LED. How LED cubes work LED cubes create neat images by turning on LED’s in rapid succession in order to creates letters or patterns. The power required to turn on all the lights becomes more and more considerable as the cubes get larger and larger. A 3x3x3 cube can easily be powered by a 9 volt battery but a 8x8x8 or 16x16x16 cube would require a an actual power supply. The thing that is the same no matter of the size is that the cubes only turn on a single led at a time and rely on persistence of vision to fool the eye into believing that all the LED’s in the pattern are on at the same time. Persistence of vision principle is essentially the same concept underpinning “moving pictures” aka cartoons. The eye cannot distinguish between changes that occur too quickly and so it will see it as a single picture. Thus when the different LED’s in a cube switch rapidly on and off the eye simply sees the LED’s were on. My own cubes logic The first thing that I needed to be consider was how to power each led. Four levels of 16 LED’s gives a total of 64 LED’s or 128 legs (anode and cathode) for the cube. The total number of GPIO pins available on my Raspberry Pi is 26. Even if I could use every single one of these pins this would be nowhere near enough pins for this small project. This is a very important point as not every one of these pins is even available to be used. Some of these pins are used for I2C or power or ground which leaves me severely lacking if the goal is that the Raspberry Pi is to drive everything directly. Yet, I cannot be the only person or manufacturer to have encountered this problem and indeed I was not. 23017 IO Extender The 23017 IO extender has 16 data legs which can be turned on or off. One of these chips alone won’t be enough to deal with 128 legs but there are some clever simplifications to the overall design that will allows me to use two of these chips to drive the rest of the cube. Because we only need to turn on one led at a time, I can use the IO extender for each of the LED columns. Thus if I provide power for each level, as needed, and allow the current to flow through the led controlled by each column then I can turn on or off any led at any spot on the cube. This means that I will need to use one of the 23017’s only to control the columns but then I need a way to control which levels get the power. This is done by using a second 23017 to control which level will receive the power. How this works is as follows. A light bulb or a LED cannot light unless it receives a path to power and ground. When this happens the electricity flows through lighting up the element. Controlling a light bulb is done using a normal wall switch, and this can be done for a LED as well. However, for a LED in a circuit another possibility is to prevent the power from flowing. This is done if either no current is provided for the input (anode) of the LED or no ground (cathode) provided to complete the circuit. If both of these pins are set to ground or both are provided voltage the current will not flow and the LED will not light up. This how my cube controls the individual LED’s. Once the program decides which LED to light up, it will provide current to that level and will set the cathode for that LED to ground. Well, this is how my circuit behaves, but the Raspberry Pi is not toggling the state of the LED’s directly but actually using I2C to send a command to both of my 23017’s to enable each LED. Each 23017 is given a different address so it can be controlled directly. This is done by setting the pins 15 – 17 to a unique value for the 23017 chips that you are using. MCP 23017 This table is intended to give a brief overview. To have a full understanding you will need to reference the specification sheet for this chip. Reading through the I2C specification of how exactly the bus functions is fascinating but perhaps a small it intimidating at the same time. The nice thing for the application developer is that, more likely than not, this is a low level function that you call with some parameters. In my case I was able to find a open source library on the internet which supported the BMC2835 chip which included an I2C support. Circuit diagram Despite how complex this circuit may appear it is simply two IO extender chips, three capacitors and twenty resisters. Somewhat contrary to what you might think, the circuit diagram wasn’t created so we could create the following board. We created the board layout in order to design an optimal layout for the individual connections. There is really very little needs to be described for this circuit. On the left side of the circuit is a simple five line connector connector. These lines will connect to the Raspberry Pi which will provide power, ground and the SCL and SDA in order to use I2C to communicate with the 23017 chips. Building the cube Building the actual LED cube is both the easiest and the hardest part of this project. layers On one hand it should not be overly difficult to solder together the LED’s. It is just a matter of soldering each LED to a wire. This should be made even easier by creating a template to hold all of the LED’s still in the proper relative positions to each other. Indeed this template makes it relatively easy to create a plane of LED’s. You can take as much time as you need and there is no need of any additional tools or assistants to make these planes. One minor inconvenience is to find your own way to create straight wires. If the wires are not straight then the rows or columns won’t be either. While preparing for this I have hit upon three different manners to prepare the wires. - pull the wires - use a drill - use a couple of boards The first two methods pretty much require a vice. Simply connect the wire to the vice and either pull the wire ever so slightly or put the other end into a drill and briefly run the drill. Both of these methods actually stretch the wire slightly which is what causes it to straighten out. I only saw a single video on the topic of using boards to straighten wires, and somehow I lost that link. The idea is pretty simple, you need a flat board and a small thin board that is at least as long as the wire and approximately six cm wide. Just place the wire on the flat board and set the smaller board on top of it. Rub the wire left and right as many times as it takes for it to straighten out. This is a very convenient method as long as the wires don’t need to be too long. Assemble the layers The most challenging part is the assembly of the different layers. I don’t think using the word trivial is correct for creating the layers but you can work and re-work them until you are happy. The hard task is the assembly. The reason this is so difficult is that this method of assembly requires that you use your soldering iron for each layer and you need to solder in the middle. This is fairly easy for the first one but gets more complex as more layers are added. In addition, you have to be extremely careful not to accidentally touch any of the wires with your soldering iron otherwise too much heat may be transmitted to the nearest solder joint. If the LED’s or wires have any pressure pushing them apart this heat may be all it takes to cause a previously soldered part to either come loose or become a weak joint / loose connection. It is very difficult to re-solder these connections once the cube is built. This would become even more problematic the larger the cube is. The level of difficulty assembling in this manner due in part to how densely packed the cube should be. This is both the horizontal and the vertical density. Two difficulties present themselves. The first difficulty is how much spacing to keep between each plane. The second and related problem is ensuring that the spacing is kept uniformly for each layer and for all subsequent layers. I didn’t do it with my cube, I used wires for most of the structure, but quite a few other cubes are built using the actual cathodes and anodes to build up the cube structure. If this were done it would help considerably with the spacing problems. Drawbacks of this design My father always said that you should use the correct tool for the job. In this case, I imagine that the right tool would have been an electrical engineer. As long as you already have a Raspberry Pi, my design is a low cost, low component count that can easily be put together either with a breadboard or with a small amount of soldering. Yet, the Raspberry Pi is not a power plant, there are limitations to how much power it can produce. The power required to turn on all the lights becomes more and more considerable as the cubes get larger . A 3x3x3 cube can easily be powered by a 9 volt battery but a 8x8x8 or 16x16x16 cube would require a an actual power supply. My solution is using I2C for communication with the LED’s via the 23017. The only problem is that I found this communication to be pretty slow. My LED’s were not as bright as a lot of the other LED cubes that you can see on youtube. It was because of the power limitations on how much current can be channeled through the MPC23017 that caused me in the end to take a conservative view and cause me to pick the low power LED’s. The good news is that if I turn on every LED in my cube at the same time it is pretty much within the limitations of the power that can be driven by the 23017. The cube looks good in a dark room but you cannot hardly see it in normal lighting conditions. I was also a bit disappointed in exactly how symmetrical my cube was. Not bad for a first attempt but I was hoping for better. Unfortunately, the actual LED cube is the product of its creator and the result is unrelated to using a kit or creating everything from scratch. The best results are most likely to be attributed to slow and steady work and a great deal of thought and preparation. In retrospect the biggest flaw in my design was originally its biggest strength. The nice thing about using the Raspberry Pi was I simply used secure shell to connect to the computer and wrote, compiled, and debugged the code in one easy step. The only problem is that the Raspberry Pi is a computer not some off the shelf device with a micro-controller. This means that when you turn it on, it takes a while before the computer is booted up and the application can start. The startup isn’t the problem, the problem when it comes down to shutting down the cube. The Raspberry Pi doesn’t doesn’t have an on / off button so to shut it off I need to open up a secure shell to the Raspberry Pi and then issue a shutdown. Once the device is shutdown then unplug the power supply. Because the Raspberry Pi isn’t really doing anything special it could have been replaced with either an Arduino or a smaller Arduino and my custom controller board. Alternatives The internet is full of instructional videos, instructions and photos of how to make your own LED cube. One of the difficulties that I mentioned was that building a cube from horizontal layers. The next cube that I would make would take a one of two different alternatives. The first is to simply solder together an entire column of LED’s and then start to solder them to the board and to each other. The second alternative is somewhat similar. Simply solder together vertical panels and solder them together. Code picube.c #include <bcm2835.h> cat picube.c #include <bcm2835.h> #include <stdlib.h> #include <stdio.h> #include <string.h> #include <unistd.h> #include <signal.h> #include "picube.h" uint cathode = 0x20; uint anode = 0x21; uint IOCON = 0x0A; uint IODIRA = 0x00; uint IODIRB = 0x01; uint OLATA = 0x14; int mapping[4][4]; int zmapping[4]; int debug = 0; int verbose = 0; void sigint_handler() { printf("unexpected request, shutting down program\n"); final_bcm2835(); } void init_mapping() { mapping[0][1] = 3; mapping[0][0] = 7; mapping[0][3] = 11; mapping[0][2] = 15; mapping[1][1] = 2; mapping[1][0] = 6; mapping[1][3] = 10; mapping[1][2] = 14; mapping[2][1] = 1; mapping[2][0] = 5; mapping[2][3] = 9; mapping[2][2] = 13; mapping[3][1] = 0; mapping[3][0] = 4; mapping[3][3] = 8; mapping[3][2] = 12; if (debug != 0) { int x,y; for (x = 0; x < 4; x++) { for (y = 0; y < 4; y++) printf("%02d ",mapping[x][y]); printf("\n"); } } zmapping[0] = ZDIM0; zmapping[1] = ZDIM1; zmapping[2] = ZDIM2; zmapping[3] = ZDIM3; } /* ** a bit of checking of return codes */ void check_retcode(int status) { switch (status) { case BCM2835_I2C_REASON_OK: //printf("Code: Ok\n"); break; case BCM2835_I2C_REASON_ERROR_NACK: printf("Code: Received a NACK\n"); break; case BCM2835_I2C_REASON_ERROR_CLKT: printf("Code: Received Clock Stretch Timeout\n"); break; case BCM2835_I2C_REASON_ERROR_DATA: printf("Code: Not all data is sent / received\n"); break; default: printf("Code: unknown\n"); break; } } // more for debugging than anything else. void turn_on_entire_cube() { if (verbose != 0) printf("turning on whole cube\n"); // turn on all cathodes bcm2835_i2c_setSlaveAddress(cathode); char cmd7[] = { OLATA, 0x00, 0x00}; check_retcode(bcm2835_i2c_write(cmd7,sizeof(cmd7))); // turn on all layers (anodes) char cmd9[] = { OLATA, 0xF0 }; bcm2835_i2c_setSlaveAddress(anode); check_retcode(bcm2835_i2c_write(cmd9,sizeof(cmd9))); sleep(1); } void turn_off_entire_cube() { if (debug != 0) printf("turning off whole cube\n"); // turn off all layers char cmd10[] = { OLATA, 0x00 }; bcm2835_i2c_setSlaveAddress(anode); check_retcode(bcm2835_i2c_write(cmd10,sizeof(cmd10))); } void init_bcm2835() { if (!bcm2835_init()) { printf("failed on init\n"); exit(1); } bcm2835_i2c_begin(); bcm2835_i2c_set_baudrate(400000); } void init_cathode() { // point library to cathode bcm2835_i2c_setSlaveAddress(cathode); // set bit 7 = 0 use consecutive mapping // set bit 5 = 1 (address increment) to on // all others dont care. char cmd[] = { IOCON, 0x20 }; check_retcode(bcm2835_i2c_write(cmd,sizeof(cmd))); // all pins output char cmd2[] = { IODIRA, 0x00, 0x00 }; check_retcode(bcm2835_i2c_write(cmd2,sizeof(cmd2))); #if 0 // all pins output char cmd3[] = { IODIRB, 0x00 }; check_retcode(bcm2835_i2c_write(cmd3,sizeof(cmd3))); #endif } void init_anode() { // point library to anode bcm2835_i2c_setSlaveAddress(anode); char cmd6[] = { IOCON, 0x20 }; check_retcode(bcm2835_i2c_write(cmd6,sizeof(cmd6))); // all pins output char cmd4[] = { IODIRA, 0x00, 0x00 }; check_retcode(bcm2835_i2c_write(cmd4,sizeof(cmd4))); #if 0 // all pins output char cmd5[] = { IODIRB, 0x00 }; check_retcode(bcm2835_i2c_write(cmd5,sizeof(cmd5))); #endif } void init_microprocessor() { init_cathode(); init_anode(); } void light_mask_z(int mask, int zmask) { int bitmask, invertmask, lowerx, upperx; /* ** first deal with setting up the cathodes */ if (debug != 0 ) printf("mask %04x z%2d\n",mask,zmask); // which bits should be on bitmask = mask; // then we invert bits invertmask = ~bitmask & 0xffff; // break it up for microprocessor lowerx = invertmask & 0x00ff; upperx = (invertmask & 0xff00) >> 8; = zmask; char cmd8[] = { OLATA, bitmask & 0xff }; check_retcode(bcm2835_i2c_write(cmd8,sizeof(cmd8))); } // x is 0 - 3 // y is 0 - 3 // z is 0 - 3 void light_x_y_z(int X, int Y, int Z) { int bitmask, invertmask, lowerx, upperx; int mappedpin; /* ** first deal with setting up the cathodes */ mappedpin = mapping[X][Y]; if (debug != 0) printf("x%2d(%d) y%2d z%2d\n",X,mappedpin,Y,Z); // first set the bit bitmask = ( 1 << mappedpin ); // then we invert bits invertmask = ~bitmask & 0xffff; // break it up for microprocessor lowerx = invertmask & 0x00ff; upperx = (invertmask & 0xff00) >> 8; // printf("%2d %04x %04x %02x %02x\n",mappedpin,bitmask,invertmask,upperx,lowerx); = ( 0x10 << Z ); char cmd8[] = { OLATA, bitmask & 0xff }; check_retcode(bcm2835_i2c_write(cmd8,sizeof(cmd8))); } void rotate_layers() { if (verbose != 0) printf("rotate layers\n"); // enable int idx; bcm2835_i2c_setSlaveAddress(anode); for (idx = 0; idx < 4; idx++) { // actually write something out, will turn on // one layer for any leds that are setup char cmd8[] = { OLATA, 0x10 << idx }; if (debug != 0) printf("cmd8 %x %x\n",idx, 0x10 << idx); check_retcode(bcm2835_i2c_write(cmd8,sizeof(cmd8))); sleep(2); } } void circle_chase_base(int speed, double divisor) { int x,y; int z = 1; int iterations; for (iterations = 0; iterations < 10; iterations++) { //printf("speed %d\n",speed); for (z = 0; z < 4; z++) { for (x = 0; x < 4; x++) { light_x_y_z(x,0,z); delay(speed); } for (y = 1; y < 4; y++) { light_x_y_z(3,y,z); delay(speed); } for (x = 2; x > 0;x--) { light_x_y_z(x,3,z); delay(speed); } for (y = 3; y > 0; y--) { light_x_y_z(0,y,z); delay(speed); } } speed = (int)(divisor * (double)speed); } } void fast_circle_chase() { if (verbose != 0) printf("fast circle chase\n"); circle_chase_base(15,1); } void circle_chase() { if (verbose != 0) printf("normal circle chase\n"); circle_chase_base(200,0.65); } void one_after_another() { int count,x,y,z; if (verbose != 0) printf("one led after another\n"); for (count = 0; count < 1; count++) { for (z = 0; z < 4; z++) for (y = 0; y < 4; y++) for (x = 0; x < 4; x++) { light_x_y_z(x,y,z); delay(30); } } } void top_bottom_rotate_side() { int counter; if (verbose != 0) printf("top bottom rotate\n"); for (counter = 0; counter < 5; counter++) { light_mask_z(YCOL0,ZDIM_TOPBOT); delay(250); light_mask_z(XROW3,ZDIM_TOPBOT); delay(250); light_mask_z(YCOL3,ZDIM_TOPBOT); delay(250); light_mask_z(XROW0,ZDIM_TOPBOT); delay(250); } } void side_corkscrew() { int counter; if (verbose != 0) printf("side corkscrew\n"); for (counter = 0; counter < 5; counter++) { // sides light_mask_z(YCOL0,ZDIM0); delay(250); light_mask_z(XROW3,ZDIM1); delay(250); light_mask_z(YCOL3,ZDIM2); delay(250); light_mask_z(XROW0,ZDIM3); delay(250); // column CA7 light_mask_z(CA07,ZDIM_ALL); delay(250); } } void rotate_takeoff() { if (verbose != 0) printf("rotate take off\n"); light_x_y_z(3, 0, 0); // A light_x_y_z(2, 1, 0); light_x_y_z(1, 2, 0); light_x_y_z(0, 3, 0); sleep (200); light_x_y_z(2, 0, 0); // B light_x_y_z(2, 1, 0); light_x_y_z(2, 2, 0); light_x_y_z(2, 3, 0); sleep (200); light_x_y_z(1, 0, 0); // C light_x_y_z(1, 1, 0); light_x_y_z(1, 2, 0); light_x_y_z(1, 3, 0); sleep (200); light_x_y_z(0, 1, 0); // D light_x_y_z(1, 1, 0); light_x_y_z(2, 1, 0); light_x_y_z(3, 1, 0); sleep (200); light_x_y_z(0, 2, 0); // E light_x_y_z(1, 2, 0); light_x_y_z(2, 2, 0); light_x_y_z(3, 2, 0); sleep (200); light_x_y_z(0, 0, 0); // F light_x_y_z(1, 1, 0); light_x_y_z(2, 2, 0); light_x_y_z(3, 3, 0); sleep (200); light_x_y_z(1, 1, 0); // G light_x_y_z(2, 1, 0); light_x_y_z(1, 2, 0); light_x_y_z(2, 2, 0); sleep (200); light_x_y_z(1, 1, 0); // G light_x_y_z(2, 1, 0); light_x_y_z(1, 2, 0); light_x_y_z(2, 2, 0); sleep (200); // H light_mask_z(OUTERRING,0); delay(250); // I light_mask_z(CORNERS,0); delay(250); } void fireworks() { int x, y; if (verbose != 0) printf("fireworks\n"); for (x = 1; x <= 2; x++) for (y = 1; y <= 2; y++) { light_x_y_z(x, y, 0); delay(100); light_x_y_z(x, y, 0); light_x_y_z(x, y, 1); delay(100); light_x_y_z(x, y, 0); light_x_y_z(x, y, 1); light_x_y_z(x, y, 2); delay(100); light_x_y_z(x, y, 0); light_x_y_z(x, y, 1); light_x_y_z(x, y, 2); light_x_y_z(x, y, 3); delay(100); light_x_y_z(x, y, 3); delay(25); light_mask_z(INNERRING,ZDIM3); delay(150); light_mask_z(OUTERRING,ZDIM3); delay(150); // falling pieces int zdim ; for (zdim = 3; zdim >= 0; zdim--) { if (zdim % 2 == 1) light_mask_z( CA05 | CA12 | CA09 ,zmapping[zdim]); else light_mask_z(CA07 | CA04 | CA08 | CA11 ,zmapping[zdim]); delay(75 + zdim * 25); } turn_off_entire_cube(); } } void helicopter() { int smallwait[] = {110, 90, 65, 40 }; int zdim,idx; if (verbose != 0) printf("helicopter\n"); for (zdim = 0; zdim < 4; zdim++) { for (idx = 0; idx <= zdim+4; idx++) { light_mask_z(PATTERN_A,zmapping[zdim]); delay(smallwait[zdim]); light_mask_z(PATTERN_B,zmapping[zdim]); delay(smallwait[zdim]); light_mask_z(PATTERN_C,zmapping[zdim]); delay(smallwait[zdim]); light_mask_z(PATTERN_F,zmapping[zdim]); delay(smallwait[zdim]); light_mask_z(PATTERN_D,zmapping[zdim]); delay(smallwait[zdim]); light_mask_z(PATTERN_E,zmapping[zdim]); delay(smallwait[zdim]); } } for (zdim = 0; zdim < 4; zdim++) { light_mask_z(PATTERN_G,zmapping[zdim]); delay(1000/(zdim+1)); light_mask_z(PATTERN_H,zmapping[zdim]); delay(750/(zdim+1)); } for (zdim = 3; zdim >= 0; zdim--) { light_mask_z(PATTERN_I,zmapping[zdim]); delay(300); } } void x_rotating_plate() { int idx,rep,smallwait = 10; if (verbose != 0) printf("x rotating plate\n"); for (idx = 0; idx < 10; idx++) { for (rep = 0; rep < 5; rep++) { light_mask_z(XROW3,ZDIM3); delay(smallwait); light_mask_z(XROW2,ZDIM2); delay(smallwait); light_mask_z(XROW1,ZDIM1); delay(smallwait); light_mask_z(XROW0,ZDIM0); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(XROW3,ZDIM1); delay(smallwait); light_mask_z(XROW2,ZDIM1); delay(smallwait); light_mask_z(XROW1,ZDIM1); delay(smallwait); light_mask_z(XROW0,ZDIM1); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(XROW3,ZDIM2); delay(smallwait); light_mask_z(XROW2,ZDIM2); delay(smallwait); light_mask_z(XROW1,ZDIM2); delay(smallwait); light_mask_z(XROW0,ZDIM2); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(XROW0,ZDIM3); delay(smallwait); light_mask_z(XROW1,ZDIM2); delay(smallwait); light_mask_z(XROW2,ZDIM1); delay(smallwait); light_mask_z(XROW3,ZDIM0); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(XROW1,ZDIM3); delay(smallwait); light_mask_z(XROW1,ZDIM2); delay(smallwait); light_mask_z(XROW1,ZDIM1); delay(smallwait); light_mask_z(XROW1,ZDIM0); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(XROW2,ZDIM3); delay(smallwait); light_mask_z(XROW2,ZDIM2); delay(smallwait); light_mask_z(XROW2,ZDIM1); delay(smallwait); light_mask_z(XROW2,ZDIM0); delay(smallwait); } turn_off_entire_cube(); } } void y_rotating_plate() { int idx,rep,smallwait = 10; if (verbose != 0) printf("y rotating plate\n"); for (idx = 0; idx < 10; idx++) { for (rep = 0; rep < 5; rep++) { light_mask_z(YCOL3,ZDIM3); delay(smallwait); light_mask_z(YCOL2,ZDIM2); delay(smallwait); light_mask_z(YCOL1,ZDIM1); delay(smallwait); light_mask_z(YCOL0,ZDIM0); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(YCOL3,ZDIM1); delay(smallwait); light_mask_z(YCOL2,ZDIM1); delay(smallwait); light_mask_z(YCOL1,ZDIM1); delay(smallwait); light_mask_z(YCOL0,ZDIM1); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(YCOL3,ZDIM2); delay(smallwait); light_mask_z(YCOL2,ZDIM2); delay(smallwait); light_mask_z(YCOL1,ZDIM2); delay(smallwait); light_mask_z(YCOL0,ZDIM2); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(YCOL0,ZDIM3); delay(smallwait); light_mask_z(YCOL1,ZDIM2); delay(smallwait); light_mask_z(YCOL2,ZDIM1); delay(smallwait); light_mask_z(YCOL3,ZDIM0); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(YCOL1,ZDIM3); delay(smallwait); light_mask_z(YCOL1,ZDIM2); delay(smallwait); light_mask_z(YCOL1,ZDIM1); delay(smallwait); light_mask_z(YCOL1,ZDIM0); delay(smallwait); } for (rep = 0; rep < 5; rep++) { light_mask_z(YCOL2,ZDIM3); delay(smallwait); light_mask_z(YCOL2,ZDIM2); delay(smallwait); light_mask_z(YCOL2,ZDIM1); delay(smallwait); light_mask_z(YCOL2,ZDIM0); delay(smallwait); } turn_off_entire_cube(); } } // rotates around the z axis void z_rotating_plate() { int smallwait[] = {250, 200, 150, 100 }; int zdim,idx; if (verbose != 0) printf("z rotating plate\n"); for (idx = 0; idx < 4; idx++) for (zdim = 0; zdim < 4; zdim++) { for (idx = 0; idx <= zdim; idx++) { light_mask_z(PATTERN_A,ZDIM_ALL); delay(smallwait[zdim]); light_mask_z(PATTERN_B,ZDIM_ALL); delay(smallwait[zdim]); light_mask_z(PATTERN_C,ZDIM_ALL); delay(smallwait[zdim]); light_mask_z(PATTERN_F,ZDIM_ALL); delay(smallwait[zdim]); light_mask_z(PATTERN_D,ZDIM_ALL); delay(smallwait[zdim]); light_mask_z(PATTERN_E,ZDIM_ALL); delay(smallwait[zdim]); } } } void wire_frame() { int iter; if (verbose != 0) printf("wire frame\n"); for (iter = 0; iter < 200; iter++) { light_mask_z(OUTERRING,ZDIM0); delay(10); light_mask_z(CORNERS,ZDIM1); delay(10); light_mask_z(CORNERS,ZDIM2); delay(10); light_mask_z(OUTERRING,ZDIM3); delay(10); } } void shrinking_cube() { int iter,idx; int redraw = 20; if (verbose != 0) printf("shrinking cube\n"); for (idx = 0; idx < 5; idx++) { //); } // next biggest for (iter = 0; iter < redraw; iter++) { light_mask_z(CA07 | CA06 | CA03 | CA02, ZDIM0); delay(10); light_mask_z(CA07 | CA06 | CA03 | CA02, ZDIM1); delay(10); } // smallest cube, actually just a point for (iter = 0; iter < redraw; iter++) { light_mask_z(CA07, ZDIM0); delay(10); } delay(100); // next biggest for (iter = 0; iter < redraw; iter++) { light_mask_z(CA07 | CA06 | CA03 | CA02, ZDIM0); delay(10); light_mask_z(CA07 | CA06 | CA03 | CA02, ZDIM); } } } void floating_cube(int pause) { int iter; int count = 2; int cubes[] = { CA04 | CA00 | CA05 | CA01, // 0 CA05 | CA01 | CA06 | CA02, // 1 CA06 | CA02 | CA07 | CA03, // 2 CA02 | CA14 | CA03 | CA15, // 3 CA14 | CA10 | CA15 | CA11, // 4 CA13 | CA09 | CA14 | CA10, // 5 CA12 | CA08 | CA13 | CA09, // 6 CA00 | CA12 | CA01 | CA13, // 7 CA01 | CA13 | CA02 | CA14, // 8 }; int lvls[] = { ZDIM0 | ZDIM1, ZDIM1 | ZDIM2, ZDIM2 | ZDIM3 }; // next biggest for (iter = 0; iter < count; iter++) { light_mask_z(cubes[0],lvls[0]); delay(pause); light_mask_z(cubes[1],lvls[0]); delay(pause); light_mask_z(cubes[1],lvls[1]); delay(pause); light_mask_z(cubes[2],lvls[1]); delay(pause); light_mask_z(cubes[8],lvls[1]); delay(pause); light_mask_z(cubes[8],lvls[2]); delay(pause); light_mask_z(cubes[3],lvls[2]); delay(pause); light_mask_z(cubes[4],lvls[2]); delay(pause); light_mask_z(cubes[5],lvls[1]); delay(pause); light_mask_z(cubes[8],lvls[1]); delay(pause); light_mask_z(cubes[6],lvls[0]); delay(pause); light_mask_z(cubes[7],lvls[1]); delay(pause); light_mask_z(cubes[8],lvls[0]); delay(pause); light_mask_z(cubes[0],lvls[0]); delay(pause); light_mask_z(cubes[8],lvls[1]); delay(pause); light_mask_z(cubes[4],lvls[2]); delay(pause); light_mask_z(cubes[3],lvls[2]); delay(pause); light_mask_z(cubes[2],lvls[1]); delay(pause); light_mask_z(cubes[1],lvls[1]); delay(pause); light_mask_z(cubes[0],lvls[0]); delay(pause); } } void final_bcm2835() { // make sure we turn off the leds turn_off_entire_cube(); // shutdown in an orderly manner bcm2835_i2c_end(); bcm2835_close(); } int main(int argc, char **argv) { signal(SIGINT,sigint_handler); init_mapping(); init_bcm2835(); init_microprocessor(); int loopcounter ; for (loopcounter = 0; loopcounter < 10; loopcounter++) { //#if 0 loopcounter = 1; turn_on_entire_cube(); sleep(2); one_after_another(); sleep(1); floating_cube(700); sleep(1); circle_chase(); sleep(1); side_corkscrew(); sleep(1); wire_frame(); top_bottom_rotate_side(); sleep(1); fast_circle_chase(); sleep(1); helicopter(); sleep(1); fireworks(); sleep(1); x_rotating_plate(); floating_cube(250); y_rotating_plate(); shrinking_cube(); z_rotating_plate(); //printf("%d\n",loopcounter); //#endif shrinking_cube(); } final_bcm2835(); return 0; } picube.h #ifndef _PICUBE_H_ #define _PICUBE_H_ extern void final_bcm2835(); #define CA07 0x0080 #define CA06 0x0040 #define CA05 0x0020 #define CA04 0x0010 #define CA03 0x0008 #define CA02 0x0004 #define CA01 0x0002 #define CA00 0x0001 #define CA15 0x8000 #define CA14 0x4000 #define CA13 0x2000 #define CA12 0x1000 #define CA11 0x0800 #define CA10 0x0400 #define CA09 0x0200 #define CA08 0x0100 #define YCOL0 CA07 | CA03 | CA15 | CA11 #define YCOL1 CA06 | CA02 | CA14 | CA10 #define YCOL2 CA05 | CA01 | CA13 | CA09 #define YCOL3 CA04 | CA00 | CA12 | CA08 #define XROW0 CA07 | CA06 | CA05 | CA04 #define XROW1 CA03 | CA02 | CA01 | CA00 #define XROW2 CA15 | CA14 | CA13 | CA12 #define XROW3 CA11 | CA10 | CA09 | CA08 #define ZDIM_BOTTOM 0x10 #define ZDIM_TOP 0x80 #define ZDIM_TOPBOT 0x90 #define ZDIM0 0x10 #define ZDIM1 0x20 #define ZDIM2 0x40 #define ZDIM3 0x80 // a few patterns that can be used #define CORNERS CA07 | CA11 | CA04 | CA08 #define OUTERRING XROW0 | XROW3 | YCOL0 | YCOL3 #define INNERRING CA01 | CA02 | CA13 | CA14 #define PATTERN_A CA04 | CA01 | CA14 | CA11 #define PATTERN_B YCOL2 #define PATTERN_C YCOL1 #define PATTERN_D XROW1 #define PATTERN_E XROW2 #define PATTERN_F CA07 | CA02 | CA13 | CA08 #define PATTERN_G INNERRING #define PATTERN_H OUTERRING #define PATTERN_I CORNERS #define ZDIM_ALL 0xF0 #endif watchdog.sh This script actually shouldn’t be necessary at all. Occasionally the program stops. I had not been able to trace it down to either my libraries or my code, but in the end decided to simply write a small watchdog to ensure that if the program stops, it is restarted. #!/bin/bash PGM=/home/pi/picube/picube while [ 1 == 1 ] do CNT=`ps -ef | grep $PGM | grep -v grep | grep -v sudo | awk '{ print $8 }' | wc -l` NAME=`ps -ef | grep $PGM | grep -v grep | grep -v sudo | awk '{ print $8 }' ` ID=`ps -ef | grep $PGM | grep -v grep | grep -v sudo | awk '{ print $2 }' ` if [ $CNT -ne 1 ] then echo restarting $PGM & fi sleep 60 done Makefile PGM=picube SRC=$(PGM).c OBJ=$(SRC:.c=.o) CFLAGS=-Wall INCDIR=-I. LIBS=-lbcm2835 -lm LIBDIR= all: $(PGM) $(PGM): $(OBJ) $(PGM).h @echo building $(PGM) gcc $(INCDIR) $(OBJ) $(LIBS) $(LIBDIR) -o $(PGM) .c.o: @echo "compling $@" @gcc -c $(CFLAGS) $< -o $@ clean: @echo cleaning up build env @rm -f $(OBJ) $(PGM) .phony: run run: all sudo ./$(PGM) #dependancies $(PGM).o: $(PGM.h)
https://blog.paranoidprofessor.com/index.php/2016/09/24/just-making-it-a-4x4x4-led-cube/
CC-MAIN-2020-29
en
refinedweb
Teaching Your Computer As I have written in my last two articles (Machine Learning Everywhere and Preparing Data for Machine Learning), machine learning is influencing our lives in numerous ways. As a consumer, you've undoubtedly experienced machine learning, whether you know it or not—from recommendations for what products you should buy from various online stores, to the selection of postings that appear (and don't) on Facebook, to the maddening voice-recognition systems that airlines use, to the growing number of companies that offer to select clothing, food and wine for you based on your personal preferences.. In "supervised learning", the computer is trained to categorize data based on inputs that humans had previously categorized. In "unsupervised learning", you ask the computer to categorize data on your behalf. In my last article, I started exploring a data set created by Scott Cole, a data scientist (and neuroscience PhD student) who measured burritos in a variety of California restaurants. I looked at the different categories of data that Cole and his fellow eater-researchers gathered and considered a few ways one could pare down the data set to something more manageable, as well as reasonable. Here I describe how to take this smaller data set, consisting solely of the features that were deemed necessary, and use it to train the computer by creating a machine-learning model. Machine-Learning Models Let's say that the quality of a burrito is determined solely by its size. Thus, the larger the burrito, the better it is; the smaller the burrito, the worse it is. If you describe the size as a matrix X, and the resulting quality score as y, you can describe this mathematically as: y = qX where q is a factor describing the relationship between X and y. Of course, you know that burrito quality has to do with more than just the size. Indeed, in Cole's research, size was removed from the list of features, in part because not every data point contained size information. Moreover, this example model will need to take several factors—not just one—into consideration, and may have to combine them in a sophisticated way in order to predict the output value accurately. Indeed, there are numerous algorithms that can be used to create models; determining which one is appropriate, and then tuning it in the right way, is part of the game. The goal here, then, will be to combine the burrito data and an algorithm to create a model for burrito tastiness. The next step will be to see if the model can predict the tastiness of a burrito based on its inputs. But, how do you create such a model? In theory, you could create it from scratch, reading the appropriate statistical literature and implementing it all in code. But because I'm using Python, and because Python's scikit-learn has been tuned and improved over several years, there are a variety of model types to choose from that others already have created. Before starting with the model building, however, let's get the data into the necessary format. As I mentioned in my last article and alluded to above, Python's machine-learning package (scikit-learn) expects that when training a supervised-learning model, you'll need a set of sample inputs, traditionally placed in a two-dimensional matrix called X (yes, uppercase X), and a set of sample outputs, traditionally placed in a vector called y (lowercase). You can get there as follows, inside the Jupyter notebook: %pylab inline import pandas as pd # load pandas with an alias from pandas import Series, DataFrame # load useful Pandas classes df = pd.read_csv('burrito.csv') # read into a data frame Once you have loaded the CSV file containing burrito data, you'll keep only those columns that contain the features of interest, as well as the output score: burrito_data = df[range(11,24)] You'll then remove the columns that are highly correlated to one another and/or for which a great deal of data is missing. In this case, it means removing all of the features having to do with burrito size: burrito_data.drop(['Circum', 'Volume', 'Length'], axis=1, ↪inplace=True) Let's also drop any of the samples (that is, rows) in which one or more values is NaN ("not a number"), which will throw off the values: burrito_data.dropna(inplace=True, axis=0) Once you've done this, the data frame is ready to be used in a model. Separate out the X and y values: y = burrito_data['overall'] X = burrito_data.drop(['overall'], axis=1) The goal is now to create a model that describes, as best as possible, the way the values in X lead to a value in y. In other words, if you look at X.iloc[0] (that is, the input values for the first burrito sample) and at y.iloc[0] (that is, the output value for the first burrito sample), it should be possible to understand how those inputs map to those outputs. Moreover, after training the computer with the data, the computer should be able to predict the overall score of a burrito, given those same inputs. Creating a Model Now that the data is in order, you can build a model. But which algorithm (sometimes known as a "classifier") should you use for the model? This is, in many ways, the big question in machine learning, and is often answerable only via a combination of experience and trial and error. The more machine-learning problems you work to solve, the more of a feel you'll get for the types of models you can try. However, there's always the chance that you'll be wrong, which is why it's often worth creating several different types of models, comparing them against one another for validity. I plan to talk more about validity testing in my next article; for now, it's important to understand how to build a model. Different algorithms are meant for different kinds of machine-learning problems. In this case, the input data already has been ranked, meaning that you can use a supervised learning model. The output from the model is a numeric score that ranges from 0 to 5, which means that you'll have to use a numeric model, rather than a categorical one. The difference is that a categorical model's outputs will (as the name implies) indicate into which of several categories, identified by integers, the input should be placed. For example, modern political parties hire data scientists who try to determine which way someone will vote based on input data. The result, namely a political party, is categorical. In this case, however, you have numeric data. In this kind of model, you expect the output to vary along a numeric range. A pricing model, determining how much someone might be willing to pay for a particular item or how much to charge for an advertisement, will use this sort of model. I should note that if you want, you can turn the numeric data into categorical data simply by rounding or truncating the floating-point y values, such that you get integer values. It is this sort of transformation that you'll likely need to consider—and try, and test—in a machine-learning project. And, it's this myriad of choices and options that can lead to a data-science project being involved, and to incorporate your experience and insights, as well as brute-force tests of a variety of possible models. Let's assume you're going to keep the data as it is. You cannot use a purely categorical model, but rather will need to use one that incorporates the statistical concept of "regression", in which you attempt to determine which of your input factors cause the output to correlate linearly with the outputs—that is, assume that the ideal is something like the "y = qX" that you saw above; given that this isn't the case, how much influence did meat quality have vs. uniformity vs. temperature? Each of those factors affected the overall quality in some way, but some of them had more influence than others. One of the easiest to understand, and most popular, types of models uses the K Network Neighbors (KNN) algorithm. KNN basically says that you'll take a new piece of data and compare its features with those of existing, known, categorized data. The new data is then classified into the same category as its K closest neighbors, where K is a number that you must determine, often via trial and error. However, KNN works only for categories; this example is dealing with a regression problem, which can't use KNN. Except, Python's scikit-learn happens to come with a version of KNN that is designed to work with regression problems—the KNeighborsRegressor classifier. So, how do you use it? Here's the basic way in which all supervised learning happens in scikit-learn: Import the Python class that implements the classifier. Create a model—that is, an instance of the classifier. Train the model using the "fit" method. Feed data to the model and get a prediction. Let's try this with the data. You already have an X and a y, which you can plug in to the standard sklearn pattern: from sklearn.neighbors import KNeighborsRegressor # import classifier KNR = KNeighborsRegressor() # create a model KNR.fit(X, y) # train the model Without the dropna above (in which I removed any rows containing one or more NaN values), you still would have "dirty" data, and sklearn would be unable to proceed. Some classifiers can handle NaN data, but as a general rule, you'll need to get rid of NaN values—either to satisfy the classifier's rules, or to ensure that your results are of high quality, or even (in some cases) valid. With the trained model in place, you now can ask it: "If you have a burrito with really great ingredients, how highly will it rank?" All you have to do is create a new, fake sample burrito with all high-quality ingredients: great_ingredients = np.ones(X.iloc[0].count()) * 5 In the above line of code, I took the first sample from X (that is, X.iloc[0]), and then counted how many items it contained. I then multiplied the resulting NumPy array by 5, so that it contained all 5s. I now can ask the model to predict the overall quality of such a burrito: KNR.predict([great_ingredients]) I get back a result of: array([ 4.86]) meaning that the burrito would indeed score high—not a 5, but high nonetheless. What if you create a burrito with absolutely awful ingredients? Let's find the predicted quality: terrible_ingredients = np.zeros(X.iloc[0].count()) In the above line of code, I created a NumPy array containing zeros, the same length as the X's list of features. If you now ask the model to predict the score of this burrito, you get: array([ 1.96]) The good news is that you have now trained the computer to predict the quality of a burrito from a set of rated ingredients. The other good news is that you can determine which ingredients are more influential and which are less influential. At the same time, there is a problem: how do you know that KNN regression is the best model you could use? And when I say "best", I ask whether it's the most accurate at predicting burrito quality. For example, maybe a different classifier will have a higher spread or will describe the burritos more accurately. It's also possible that the classifier is a good one, but that one of its parameters—parameters that you can use to "tune" the model—wasn't set correctly. And I suspect that you indeed could do better, since the best burrito actually sampled got a score of 5, and the worst burrito had a score of 1.5. This means that the model is not a bad start, but that it doesn't quite handle the entire range that one would have expected. One possible solution to this problem is to adjust the parameters that you hand the classifier when creating the model. In the case of any KNN-related model, one of the first parameters you can try to tune is n_neighbors. By default, it's set to 5, but what if you set it to higher or to lower? A bit of Python code can establish this for you: for k in range(1,10): print(k) KNR = KNeighborsRegressor(n_neighbors=k) KNR.fit(X, y) print("\tTerrible: {0}".format(KNR.predict([terrible_ingredients]))) print("\tBest: {0}".format(KNR.predict([great_ingredients]))) After running the above code, it seems like the model that has the highest high and the lowest low is the one in which n_neighbors is equal to 1. It's not quite what I would have expected, but that's why it's important to try different models. And yet, this way of checking to see which value of n_neighbors is the best is rather primitive and has lots of issues. In my next article, I plan to look into checking the models, using more sophisticated techniques than I used here. Using Another Classifier So far, I've described how you can create multiple models from a single classifier, but scikit-learn comes with numerous classifiers, and it's usually a good idea to try several. So in this case, let's also try a simple regression model. Whereas KNN uses existing, known data points in order to decide what outputs to predict based on new inputs, regression uses good old statistical techniques. Thus, you can use it as follows: from sklearn.linear_model import LinearRegression LR = LinearRegression() LR.fit(X, y) print("\tTerrible: {0}".format(KNR.predict([terrible_ingredients]))) print("\tBest: {0}".format(KNR.predict([great_ingredients]))) Once again, I want to stress that just because you don't cover the entire spread of output values, from best to worst, you can't discount this model. And, a model that works with some data sets often will not work with other data sets. But as you can see, scikit-learn makes it easy—almost trivially easy, in fact—to create and experiment with different models. You can, thus, try different classifiers, and types of classifiers, in order to create a model that describes your data. Now that you've created several models, the big question is which one is the best? Which one not only describes the data, but also does so well? Which one will give the most predictive power moving forward, as you encounter an ever-growing number of burritos? What ingredients should a burrito-maker stress in order to maximize eater satisfaction, while minimizing costs? In order to answer these questions, you'll need to have a way of testing your models. In my next article, I'll look at how to test your models, using a variety of techniques to check the validity of a model and even compare numerous classifier types against one another. e-mail list is "KDNuggets" You also should consider the "Data Science Weekly" newsletter and "This Week in Data", describing the latest data sets available to the public. I am a big fan of podcasts and.
https://www.linuxjournal.com/content/teaching-your-computer
CC-MAIN-2020-29
en
refinedweb
KWindowSystem netwm_def.h 132 struct NETExtendedStrut { 136 NETExtendedStrut() : left_width(0), left_start(0), left_end(0), 143 int left_width, left_start, left_end; 148 int right_width, right_start, right_end; 158 int bottom_width, bottom_start, bottom_end; 212 struct NETFullscreenMonitors { 217 NETFullscreenMonitors() : top(-1), bottom(0), left(0), right(0) { } 286 enum WindowType { 318 // cannot deprecate to compiler: used both by clients & manager, later needs to keep supporting it for now 378 enum WindowTypeMask { 402 Q_DECLARE_FLAGS(WindowTypes, WindowTypeMask) 408 static bool typeMatchesMask(WindowType type, WindowTypes mask); 553 enum MappingState { 680 Q_DECLARE_FLAGS(Properties, Property) 753 Q_DECLARE_FLAGS(Properties2, Property2) 765 enum RequestSource { 783 enum Orientation { 791 enum DesktopLayoutCorner { 831 Q_DECLARE_OPERATORS_FOR_FLAGS(NET::Properties) 832 Q_DECLARE_OPERATORS_FOR_FLAGS(NET::Properties2) 833 Q_DECLARE_OPERATORS_FOR_FLAGS(NET::WindowTypes) Simple icon class for NET classes. Definition: netwm_def.h:102 Definition: netwm_def.h:173 Protocol Protocols supported by the client. Definition: netwm_def.h:805 int bottom Bottom border of the strut. Definition: netwm_def.h:197 int bottom_width Bottom border of the strut, width and range. Definition: netwm_def.h:158 Property2 Supported properties. Definition: netwm_def.h:716 unsigned char * data Image data for the icon. Definition: netwm_def.h:120 Simple multiple monitor topology class for NET classes. Definition: netwm_def.h:212 int right Right border of the strut. Definition: netwm_def.h:187 int y y coordinate Definition: netwm_def.h:35 NETStrut() Constructor to initialize this struct to 0,0,0,0. Definition: netwm_def.h:177 State Window state. Definition: netwm_def.h:427 NETPoint() Constructor to initialize this point to 0,0. Definition: netwm_def.h:30 Partial strut class for NET classes. Definition: netwm_def.h:132 int bottom Monitor index whose bottom border defines the bottom edge of the topology. Definition: netwm_def.h:227 int left Left border of the strut. Definition: netwm_def.h:182 Orientation Orientation. Definition: netwm_def.h:783 Direction Direction for WMMoveResize. Definition: netwm_def.h:532 NETIcon() Constructor to initialize this icon to 0x0 with data=0. Definition: netwm_def.h:106 MappingState Client window mapping state. Definition: netwm_def.h:553 Role Application role. Definition: netwm_def.h:271 int right Monitor index whose right border defines the right edge of the topology. Definition: netwm_def.h:237 NETSize size Size of the rectangle. Definition: netwm_def.h:88 WindowType Window type. Definition: netwm_def.h:286 int x x coordinate. Definition: netwm_def.h:35 NETSize size Size of the icon. Definition: netwm_def.h:113 Property Supported properties. Definition: netwm_def.h:638 bool isSet() const Convenience check to make sure that we are not holding the initial (invalid) values. Definition: netwm_def.h:245 NETPoint pos Position of the rectangle. Definition: netwm_def.h:81 Simple point class for NET classes. Definition: netwm_def.h:26 int top Monitor index whose top border defines the top edge of the topology. Definition: netwm_def.h:222 int top_width Top border of the strut, width and range. Definition: netwm_def.h:153 NETExtendedStrut() Constructor to initialize this struct to 0,0,0,0. Definition: netwm_def.h:136 int left Monitor index whose left border defines the left edge of the topology. Definition: netwm_def.h:232 Simple rectangle class for NET classes. Definition: netwm_def.h:75 Base namespace class. Definition: netwm_def.h:263 NETSize() Constructor to initialize this size to 0x0. Definition: netwm_def.h:56 int width Width. Definition: netwm_def.h:61 int right_width Right border of the strut, width and range. Definition: netwm_def.h:148 NETFullscreenMonitors() Constructor to initialize this struct to -1,0,0,0 (an initialized, albeit invalid, topology). Definition: netwm_def.h:217 Action Actions that can be done with a window (_NET_WM_ALLOWED_ACTIONS). Definition: netwm_def.h:574 int left_width Left border of the strut, width and range. Definition: netwm_def.h:143 indicates that the application is a client application. Definition: netwm_def.h:275 WindowTypeMask Values for WindowType when they should be OR'ed together, e.g. Definition: netwm_def.h:378 DesktopLayoutCorner Starting corner for desktop layout. Definition: netwm_def.h:791 int top Top border of the strut. Definition: netwm_def.h:192 RequestSource Source of the request. Definition: netwm_def.h:765 Simple size class for NET classes. Definition: netwm_def.h:52 This file is part of the KDE documentation. Documentation copyright © 1996-2020 The KDE developers. Generated on Wed Jul 1 2020 22:39:04 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 Documentation copyright © 1996-2020 The KDE developers. Generated on Wed Jul 1 2020 22:39:04 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/frameworks/kwindowsystem/html/netwm__def_8h_source.html
CC-MAIN-2020-29
en
refinedweb
Before we start you will need platform for C/C++, you can use Code::Blocks is a free IDE(Integrated development environment) for C/C++, or Visual Studio, I think they have a free version too. Next, I will give a few examples of how C look like, and I will try to describe how he works. Number 1 #include "stdafx.h" int main() { printf("This is just a test"); getchar(); return 0; } We include stdafx.h is the library that helps the compiler to understand the code. #include "stdafx.h" int main() { } This is the main function, I think this is obvious. Next the printf function will show whatever we want on the console. In this case “This is just a test” is what you will see when you run this application. The getchar(); function will allow the console to stay open until I press ENTER. And finally return 0;means that we finished the program without any error. This is what you will see: Number 2 #include "stdafx.h" int main() { int a,b; a = 10; b = 20; printf("%d + %d = %d",a , b , a + b); printf(" "); printf("%d * %d = %d",a , b , a * b); printf(" "); printf("%d - %d = %d",a , b , a - b); printf(" "); printf("%d : %d = %d",a , b , a / b); getchar(); return 0; } In the main function I make 2 variables a and b, next, I make a = 10 and b = 20. As you know from the previous example printf() function will show what ever we want on the console. Now, lets see what this line of code do: printf(“%d + %d = %d”,a , b , a + b); . If you read the previous tutorial you know that %d indicate the position where a decimal integer should be. We have 3 of %d there, so we need 3 variable, the first one is a, the second one is b and the third one is a + b or a * b etc… You can use Constant Characters to show every operation on a new line or whatever you want. The final result should be: You see that 10 : 20 = 0 that is because our variables are int. The int type does not support 0.5, if you make a and b float, you will have your answer, and of course change %d to %f. Number 3 #include "stdafx.h" #include <stdio.h> int main() { float a = 0,b = 0,c = 0,r = 0; printf("a:"); scanf("%f",&a); printf("b:"); scanf("%f",&b); printf("c:"); scanf("%f",&c); r = a + b - c; printf("r = %f", r); getchar(); return 0; } In this example a new function appears scanf();. Well, let’s take a look and see what we have here:scanf(“%f”,&a);. The scanf() function in this case has 2 parameters, the first one is the type of data in this case float, and the second one is an address. More specific is the address where the a variable it will be stored. & means address. Number 4 #include "stdafx.h" #include <stdio.h> void main() { char c; int i; float f; double d; c = 'C'; i = 1195; f = 123.4567; d = 11212.33E3; printf("character = %c", c); printf(" "); printf("int = %d", i); printf(" "); printf("float = %f", f ); printf(" "); printf("double = %e", d); getchar(); } This is a very simple example, shows you how to use every type of fundamental data, so you can experiment with C as you want. One Reply to “Getting started with C [2]”
https://horiacondrea.com/getting-started-with-c-2/
CC-MAIN-2020-29
en
refinedweb
Your regression may only need one gradient step. Really. I’ve been rethinking gradient descent over the weekend. It struck me that calculating the gradient is typically way more expensive than taking the step that follows it. I ran the numbers and found that about 80% of the training loop is spent calculating a gradient. This led me to some fun hacking and I want to demonstrate the findings in this document. In particular I would like to highlight some ideas that had insightful results; These results are done on artifical data, so the results deserve to be taken with a grain of salt, but the ideas are intertraining nonetheless. Suppose that I want to optimise a function, say \(f(x)\). You could calculate the gradient and take a step but after taking this step you’d need to calculate the gradient again. Figure 1: Gradient Descent 101. It would work, but it may take a while. Especially if calculating the gradient is expensive (it usually is). So how about we do some extra work to do a calculated step instead. Figure 2: Can’t we do this? A calculated step might be more expensive to calculate, but this is offset by all the small steps we would otherwise do. Figure 3: We may be on to something here. This is where calculus can help us. In particular, taylor series! Suppose that we have some value \(x\) and we’d like to estimate what \(f(x + t)\) is then we can approximate this by; \[f\left(x+t\right) \approx f\left(x\right)+f^{\prime}\left(x\right) t+\frac{1}{2} f^{\prime \prime}\left(x\right) t^{2}\] If I know the derivatives of the function \(f\) then I can approximate what \(f(x+t)\) might be. But we can go a step further. Suppose now that I am interested in finding a minimum for this function. Then we can rewrite the expression to represent iteration. \[f\left(x_{k}+t_k\right) \approx f\left(x_{k}\right)+f^{\prime}\left(x_{k}\right) t_k+\frac{1}{2} f^{\prime \prime}\left(x_{k}\right) t_k^{2}\] Here \(x_k\) represents \(x\) at iteration time \(k\) and the next value \(x_{k+1}\) will be \(x_{k} + t_k\). The question now becomes; how can I choose \(t_k\) such that travel to the minimum as fast as possible? It turns out to be; \[ x_{k+1}=x_{k}+t_k=x_{k}-\frac{f^{\prime}\left(x_{k}\right)}{f^{\prime \prime}\left(x_{k}\right)} \] This formula can be used for functions with a single parameter, but with some linear algebra tricks we can also extend it to functions with many inputs using a hessian matrix. Derivation. If the second derivative is positive, the quadratic approximation is a convex function of \(t\), and its minimum can be found by setting the derivative to zero. Note that; \[ 0=\frac{\mathrm{d}}{\mathrm{d} t_k}\left(f\left(x_{k}\right)+f^{\prime}\left(x_{k}\right) t_k+\frac{1}{2} f^{\prime \prime}\left(x_{k}\right) t_k^{2}\right)=f^{\prime}\left(x_{k}\right)+f^{\prime \prime}\left(x_{k}\right)t_k\] Thus the minimum is achieved for \(t_k=-\frac{f^{\prime}\left(x_{k}\right)}{f^{\prime \prime}\left(x_{k}\right)}\) Putting everything together, Newton’s method performs the iteration; \[ x_{k+1}=x_{k}+t=x_{k}-\frac{f^{\prime}\left(x_{k}\right)}{f^{\prime \prime}\left(x_{k}\right)} \] Now, this formula works for single parameter functions, but we can also express this result in linear algebra terms. \[ x_{k+1}=x_{k}+t=x_{k}- [f^{\prime \prime}\left(x_{k}\right)]^{-1} f^{\prime}\left(x_{k}\right) \]Here \(x_k\) is a vector, \([f^{\prime \prime}\left(x_{k}\right)]^{-1}\) is the Hessian matrix and \(f^{\prime}\left(x_{k}\right)\) is the gradient vector. This, to me, was a great excuse to play with jax. It has a couple of like-able features but a main one is that it is an autograd library that also features a hessian. You can just-in-time compile derivate functions and it will also run on GPU’s and TPU’s. This is how you might implement a linear regression; import jax.numpy as np from jax import grad, jit def predict(params, inputs): return inputs @ params def mse(params, inputs, targets): preds = predict(params, inputs) return np.mean((preds - targets)**2) grad_fun = jit(grad(mse)) # compiled gradient evaluation function The grad_fun is now a compiled gradient function that has two input parameters left; inputs and targets and it will return the gradiet of the mse function. That means that I can use it in a learning loop. So here’s an implementation of linear regression; import tqdm import numpy as np import matplotlib.pylab as plt # generate random regression data n, k = 1_000_000, 10 both = [np.ones((n, 1)), np.random.normal(0, 1, (n, k))] X = np.concatenate(both, axis=1) true_w = np.random.normal(0, 5, (k + 1,)) y = X @ true_w np.random.seed(42) W = np.random.normal(0, 1, (k + 1,)) stepsize = 0.02 n_step = 100 hist_gd = np.zeros((n_step,)) for i in tqdm.tqdm(range(n_step)): hist_gd[i] = mse(W, inputs=X, targets=y) dW = grad_fun(W, inputs=X, targets=y) W -= dW*stepsize This is what the mean squared error looks like over the epochs. Figure 4: Looks converging. Let’s now do the same thing, but use the calculus trick. from jax import hessian # use same data, but reset the found weights np.random.seed(42) W = np.random.normal(0, 1, (k + 1,)) n_step = 100 hist_hess = np.zeros((n_step,)) for i in tqdm.tqdm(range(n_step)): hist_hess[i] = mse(W, inputs=X, targets=y) inv_hessian = np.linalg.inv(hessian(mse)(W, X, y)) dW = inv_hessian @ grad_fun(W, inputs=X, targets=y) W -= dW Want to see something cool? This is the new result. Figure 5: You barely need the second epoch. The shocking thing is that this graph always has the same shape, no matter the rows or columns. When I first ran this I could barely believe it. By using the hessian trick we predict how big of a step we need to make and it hits bullseye. There’s reason for this bullseye but it is a bit mathematical. Derivation of Mathematical Bullseye. Let’s rewrite the loss for linear regression in matrix terms. \[ L(\beta) = (y - X \beta)^T (y - X \beta) \] If we simply differentiate then the gradient vector is; \[ \nabla L (\beta) = - X^T (y - X \beta) \] And the Hessian matrix is; \[ \nabla^2 L (\beta) = X^T X \] Let’s remind ourselves of newtons method. \[ x_{k+1}=x_{k}+t=x_{k}-\frac{f^{\prime}\left(x_{k}\right)}{f^{\prime \prime}\left(x_{k}\right)} \] That means that our stepsize (see earlier derivation) needs to be; \[ \begin{equation} \begin{split} t & = -[\nabla^2 L (\beta)]^{-1} \nabla L (\beta) \\ & = -(X^TX)^{-1}X^T(y - X \beta) \\ & = -(X^TX)^{-1}X^Ty + (X^TX)^{-1}X^TX \beta \\ & = \beta -(X^TX)^{-1}X^Ty \end{split} \end{equation} \] When we start our regression we start with \(\beta_k\) and then the update rule becomes; \[ \begin{equation} \begin{split} \beta_{k+1} & = \beta_k - t\\ & = \beta_k - \beta_k + (X^TX)^{-1}X^Ty \\ & = (X^TX)^{-1}X^Ty \end{split} \end{equation} \] And this is a bit of a coincidence, but \((X^TX)^{-1}X^Ty\) is the closed form solution for linear regression. This means that using newtons method for a single iteration on standard linear regression is equivalent to using the close form method.This does not mean that this is the fastest way to perform linear regression. You can benchmark it yourself, scikit-learn is faster. We should not expect something similar to happen with neural networks though. This made me wonder, can we do something similar in spirit for neural networks? Well, maybe we should go for the other extreme. Instead of doing few steps, let’s do many! Consider what we usually do. Figure 6: This is base SGD. If we briefly ignore the details of adam/momentum then the gradient descent idea does the two things calculating a gradient (thick dot) and moving a step (line). But what if we don’t stop stepping? Figure 7: First determine the direction. Then keep walking until results get worse. Only then do another gradient. Repeat. A dash here represents moving forward without re-evaluating the gradient. Once we notice that a step makes the score worse, we stop and check the gradient again. It is well possible that the general direction that you’re moving in is a good one, do you really need to stop moving? Do we really need to calculate a gradient? Or can we just keep on stepping? Checking if the next step is making it worse is a forward pass, not a backward one. If a gradient calculation is about 80% of the compute power for training then this might be a neat idea. There are two hyperparameters to this idea; nsteps we check for another gradient To check the merits of this idea I figured it’d be fun to write my own optimiser for pytorch. Implementation of KeepStepping Optimizer import torch from torch.optim.optimizer import Optimizer, required class KeepStepping(Optimizer): """ KeepStepping - PyTorch Optimizer Inputs: lr = learning rate, ie. the minimum stepsize max_steps = the maximum number of steps that will be performed before calculating another gradient scale_i = to what degree do we scale our impatience """ def __init__(self, params, lr=required, max_steps=20, scale_i=0): if lr is not required and lr < 0.0: raise ValueError("Invalid learning rate: {}".format(lr)) defaults = dict(lr=lr) super().__init__(params, defaults) self.max_steps = max_steps self.lr_orig = lr self.scale_i = scale_i def mini_step(self, i): for group in self.param_groups: for p in group['params']: if p.grad is None: continue d_p = p.grad.data scale = - group['lr'] * self.scale_i * np.sqrt(i) p.data.add_(-group['lr'] - scale, d_p) def step(self, closure): """Performs a single optimization step.""" old_loss = closure() i = 0 self.mini_step(i) new_loss = closure() while (new_loss < old_loss) & (i < self.max_steps): self.mini_step(i) old_loss = new_loss new_loss = closure() i += 1 return new_loss Before testing this, I consdering taking the previous idea and combining it with the idea before. I am doing less gradients, sure, but I am still taking lots of steps. Can I instead perhaps calculate how big the stepsize should be? There might be something adaptive that we can do here? Given a direction that we’re supposed to move in, you could consider that we’re back in a one dimensional domain again and that we merely need to find the right stepsize. Figure 8: Note that as far as the stepsize is concerned, we merely need to move in one direction. So it’s back to one dimensional-land. So I made an implementation that takes this direction, numerical estimates of \(f'(x_{\text{direction}})\) and \(f''(x_{\text{direction}})\) and tries to adaptively estimate an appropriate stepsize. Implementation of KeepVaulting Optimizer import torch from torch.optim.optimizer import Optimizer, required class KeepVaulting(Optimizer): """ KeepVaulting - PyTorch Optimizer Inputs: lr = learning rate, ie. the minimum stepsize max_steps = the maximum number of steps that will be performed before calculating another gradient """ def __init__(self, params, lr=required, max_steps=20): if lr is not required and lr < 0.0: raise ValueError("Invalid learning rate: {}".format(lr)) defaults = dict(lr=lr) super().__init__(params, defaults) self.max_steps = max_steps self.lr_orig = lr def mini_step(self, jumpsize=1): for group in self.param_groups: for p in group['params']: if p.grad is None: continue d_p = p.grad.data p.data.add_(-(group['lr'] * float(jumpsize)), d_p) def step(self, closure): """Performs a single optimization step.""" old_loss = closure() i = 0 self.mini_step() new_loss = closure() losses = [old_loss.item(), new_loss.item()] while (new_loss < old_loss) & (i < self.max_steps): # we're using the secant method here to # approximate the second order derivative # first_order_grad1 = (losses[-1] - losses[-2])/self.lr_orig second_order_grad = (losses[-1] - first_order_grad1)/self.lr_orig stepsize = -second_order_grad/first_order_grad1 * self.lr_orig self.mini_step(stepsize) old_loss = new_loss new_loss = closure() losses.append(new_loss.item()) i += 1 return new_loss A meaningful benchmark was hard to come up with so I just generated an artificial regression task with some deep layers. I don’t want to suggest that the following counts as “general performance” but they are interesting to think about. I’ll list a few results below. Implementation of generate_new_dataset def generate_new_dataset(n_row, dim_in, dim_hidden, n_layers): torch.manual_seed(0) x = torch.randn(n_row, dim_in) y = x.sum(axis=1).reshape(-1, 1) model = torch.nn.Sequential( torch.nn.Linear(dim_in, dim_hidden), *[torch.nn.Linear(dim_hidden, dim_hidden) for _ in range(n_layers)], torch.nn.Linear(dim_hidden, 1), ) loss_fn = torch.nn.MSELoss(reduction='mean') def loss_closure(): y_pred = model(x) return loss_fn(y_pred, y) return model, loss_fn, loss_closure, x, y Implementation of Data Collection results = {} learning_rate = 1e-3 optimisers = { 'KS_50_0': lambda p: KeepStepping(p, lr=learning_rate, max_steps=50, scale_i=0), 'KS_50_2': lambda p: KeepStepping(p, lr=learning_rate, max_steps=50, scale_i=2), 'KS_10_0': lambda p: KeepStepping(p, lr=learning_rate, max_steps=10, scale_i=0), 'KS_10_2': lambda p: KeepStepping(p, lr=learning_rate, max_steps=10, scale_i=2), 'KV_10': lambda p: KeepVaulting(p, lr=learning_rate, max_steps=10), 'KV_50': lambda p: KeepVaulting(p, lr=learning_rate, max_steps=50), 'SGD': lambda p: torch.optim.SGD(p, lr=learning_rate), 'ADAM': lambda p: torch.optim.Adam(p, lr=learning_rate), } for name, alg in optimisers.items(): model, loss_fn, loss_closure, x, y = generate_new_dataset() n_steps = 1000 if not 'K' in name else 100 results[name] = np.zeros((n_steps, 2)) optimizer = alg(model.parameters()) tic = time() for t in tqdm.tqdm(range(n_steps)): y_pred = model(x) loss = loss_fn(y_pred, y) optimizer.zero_grad() loss.backward() optimizer.step(loss_closure) results[name][t, :] = [loss.item(), time() - tic] plt.figure(figsize=(16, 4)) for name, hist in results.items(): score, times = hist[:, 0], hist[:, 1] plt.plot(times, score, label=name) plt.xlabel("time (s)") plt.ylabel("mean squared error") plt.legend(); Figure 9: This run contained 10K datapoints with 20 columns as input while the network had 5 layers with 10 hidden units. The initial learning rate was 1e-3. Figure 10: This run contained 10K datapoints with 20 columns as input while the network had 5 layers with 10 hidden units. The initial learning rate was 1e-4. This run was the same as the previous one but has a much smaller initial stepsize. Everything looks a loot smoother, but also becomes a lot slower. Figure 11: This run contained 100K datapoints with 100 columns as input while the network had 5 layers with 20 hidden units. The initial learning rate was 1e-4. In this run we increased the number of columns. Figure 12: This run contained 100K datapoints with 10 columns as input while the network had 3 hidden layers with 10 units each. We also added Relu layers here. The initial learning rate was 1e-4. Figure 13: This run contained 100K datapoints with 100 columns as input while the network had zero hidden layers The initial learning rate was 1e-4. It seems the impatient approach with a max step of 50 is beating Adam when it comes to convergence speed. The idea of vaulting is not performing as well as I had hoped but this may be due to numerical sensitivity. The idea seems to have merit to it but this work should not be seen as a proper benchmark. Also … that last run is just a linear regression. The fastest way to optimise that is to use the hessian trick that we started with. So what does all of this mean? Well … it suggests that there may be a valid trade off between doing more work such that you need to do less gradient evaluations. Either you spend more time preparing a step such that you reduce the number of steps needed (like the hessian approach for the linear regression) or you just do a whole lot more of them without checking the gradient all the time. I’d be interested in hearing stories from folks who benchmark this, so feel free to try it out and let me know if it does (or does not) work on your dataset. For attribution, please cite this work as Warmerdam (2020, April 10). koaning.io: More Descent, Less Gradient. Retrieved from BibTeX citation @misc{warmerdam2020more, author = {Warmerdam, Vincent}, title = {koaning.io: More Descent, Less Gradient}, url = {}, year = {2020} }
https://koaning.io/posts/more-descent-less-gradient/
CC-MAIN-2020-29
en
refinedweb
I finished the first prototype for the MMS Photo-inator Xsi a few months back. It will take a picture when someone presses the button. Then, it displays the picture on the screen for a few seconds. Next, it flashes the lights on the green and red buttons and waits for the person to press one. If the person presses the red button, the program deletes the picture and resets. If the person presses the green button, the program keeps the picture and resets. One thing that I need the Photo-inator to do is stop displaying the picture if the person presses either the red or green button. I have not been able to figure that out, until now. I tried sending a keystroke to the session but that did not work. I also tried killing the process but that did not work. I tried other software and solutions but could not find one. I spent the last few months trying to figure this out. It has been extremely frustrating. Here is how I finally resolved the problem. I heard that the uinput module for python will allow me to send keystrokes to a session. I downloaded the source code from here: Before I could install the source code, I had to install a supporting program by typing sudo apt-get install libudev.so Next, I installed the source code by running sudo python setup.py buildsudo python setup.py install I could not get fbi to work in the python program. Tried it from the terminal and it will not work either. I keep getting an error message that says, "ioctl VT_GETSTATE: Inappropriate ioctl for device (not a linux console?)" Used Ctrl-Alt-F1 to switch to tty1 . Executing fbi on tty1 works perfectly. If I have the python program execute this: "fbi -a -T 1 -t 1 BBB.jpeg", fbi will display the image. The -T 1 tells fbi to display the program on tty1. I need to specify this on my PC but it apparently did that by default when I ran the program on the Raspberry Pi. Next, I wrote a small python program to display a picture with fbi, wait a few seconds, then send a q to the uinput subsystem on Linux to simulate a user pressing the letter q. This worked perfectly. The test code looked like this: #!/usr/bin/python import subprocessimport psutilimport timeimport uinput subprocess.call("fbi -a -T 1 -1 /home/ken/Pictures/BBB.jpeg", shell=True)time.sleep(3) #Uinput testsevents = ( uinput.KEY_Q, uinput.KEY_H, uinput.KEY_L, uinput.KEY_O, ) device = uinput.Device(events)time.sleep(1) device.emit_click(uinput.KEY_Q) Finally, I modified the photo-inator program to use the uinput code. I need to fire up the whole system to test it out. I'll do that some other day.
http://beckermaker.blogspot.com/2015/10/fbi-exit-problem-fixed.html
CC-MAIN-2017-22
en
refinedweb